Register AI Use Case
Register a new AI use case with automatic domain-specific risk analysis and contract generation. The system analyzes your use case details and generates the appropriate failure modes, guardrails, and safety suite classification.
POST
/api/usecases/register
JavaScript
// Register a new AI use case
const API_BASE = 'https://ethicalzen-backend-400782183161.us-central1.run.app';

const response = await fetch(`${API_BASE}/api/usecases/register`, {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_JWT_TOKEN', // Or use 'x-api-key': 'YOUR_API_KEY'
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    service_name: 'Medical Diagnosis API',
    use_case: 'AI-powered medical diagnosis for patient care',
    industry: 'healthcare',
    data_sensitivity: ['PHI', 'medical_records'],
    compliance_requirements: ['HIPAA', 'SOC2']
  })
});

const data = await response.json();
console.log(data.contract_id);
Responses
200 - Success
400 - Bad Request
401 - Unauthorized
422 - Validation Error
The example body below is the 200 (Success) response.
{
  "success": true,
  "contract_id": "medical-diagnosis-api/healthcare/us/v1.0",
  "contract_spec": {
    "id": "medical-diagnosis-api/healthcare/us/v1.0",
    "version": "1.0",
    "service_name": "Medical Diagnosis API",
    "suite": "S2",
    "risk_score": 95,
    "failure_modes": [
      {
        "id": "FM_MISDIAGNOSIS",
        "name": "Misdiagnosis Risk",
        "severity": "CRITICAL",
        "likelihood": "HIGH",
        "impact": "Patient harm, malpractice liability",
        "mitigation": "Require medical professional review"
      },
      {
        "id": "FM_PHI_LEAK",
        "name": "PHI Data Exposure",
        "severity": "CRITICAL",
        "likelihood": "MEDIUM",
        "impact": "HIPAA violation, fines up to $50K per incident"
      }
    ],
    "guardrails": [
      {
        "id": "pii_detection_v1",
        "name": "PII Detection",
        "type": "REGEX",
        "thresholds": {
          "pii_risk": { "min": 0, "max": 0.05 }
        }
      },
      {
        "id": "medical_accuracy_v1",
        "name": "Medical Accuracy Check",
        "type": "SLM",
        "thresholds": {
          "confidence": { "min": 0.95 }
        }
      }
    ]
  },
  "analysis": {
    "risk_assessment": {
      "overall_risk_score": 95,
      "risk_factors": [
        "Healthcare industry with PHI data",
        "Medical diagnosis use case",
        "Patient safety implications"
      ]
    }
  },
  "timestamp": "2025-11-07T12:45:00.000Z"
}
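The non-200 bodies are not shown above. A sketch of handling them in the JavaScript example, assuming a conventional JSON error body; the `detail` field on 422 is an assumption, not a documented contract:

// Hypothetical error handling for the statuses listed above.
// The error body shape is assumed, not documented.
if (!response.ok) {
  const err = await response.json().catch(() => ({}));
  switch (response.status) {
    case 400:
      console.error('Bad request:', err);
      break;
    case 401:
      console.error('Unauthorized: check your JWT token or x-api-key header');
      break;
    case 422:
      console.error('Validation error:', err.detail ?? err);
      break;
    default:
      console.error(`Unexpected status ${response.status}`, err);
  }
}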
Request Body
service_name
string
Required
The name of your AI service. Should be descriptive and unique within your organization.
Example: "Medical Diagnosis API"
use_case
string
Required
Detailed description of what your AI service does and how it will be used.
Example: "AI-powered medical diagnosis for patient care"
industry
enum<string>
Required
The industry your service operates in. This affects risk scoring and compliance requirements.
Options: "healthcare", "finance", "education", "legal", "entertainment", "technology"
data_sensitivity
array<string>
Types of sensitive data your service handles. Determines which guardrails are applied.
Options: "PHI", "PII", "financial_data", "medical_records", "student_records", "legal_documents"
compliance_requirements
array<string>
Regulatory compliance frameworks your service must adhere to.
Options: "HIPAA", "GDPR", "SOC2", "FERPA", "PCI-DSS", "ISO27001"
Response Fields
contract_id
string
Unique identifier for the generated contract. Format: service-name/industry/region/version
Example: "medical-diagnosis-api/healthcare/us/v1.0"
suite
enum<string>
Safety suite classification based on risk analysis.
Options: "S0" (Low Risk), "S1" (Medium Risk), "S2" (High Risk), "S3" (Critical Risk)
risk_score
integer
Overall risk score from 0-100. Higher scores indicate higher risk and more stringent requirements.
Range: 0-100 (0-30: Low, 31-60: Medium, 61-85: High, 86-100: Critical)
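The tier labels map directly to the score brackets above; a small helper is sketched below. Note that how the API assigns S0-S3 suites is not documented here, so no suite mapping is attempted.

// Bucket a 0-100 risk_score into the tiers documented above.
function riskTier(score) {
  if (score <= 30) return 'Low';
  if (score <= 60) return 'Medium';
  if (score <= 85) return 'High';
  return 'Critical';
}
console.log(riskTier(95)); // "Critical"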
failure_modes
array<object>
Identified potential failure scenarios specific to your service's domain and use case.
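For instance, you might surface the most severe entries from the parsed response (`data` from the JavaScript example above); the severity values used here are the ones shown in the example response, as the full enum is not documented:

// Pull out the CRITICAL failure modes from a registration response.
const criticalModes = data.contract_spec.failure_modes
  .filter(fm => fm.severity === 'CRITICAL')
  .map(fm => `${fm.id}: ${fm.impact}`);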
guardrails
array<object>
Recommended guardrails with enforcement mechanisms (REGEX, SLM, or LLM-based).
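How threshold checks are evaluated is not specified above. If you want to mirror them client-side, a sketch that treats each thresholds entry as optional min/max bounds, which is an assumption based on the example response:

// Check a measured score against a guardrail's threshold bounds.
// Treating a missing min/max as unbounded is an assumption; the
// threshold semantics are not documented beyond the min/max keys.
function withinThreshold(threshold, value) {
  const min = threshold.min ?? -Infinity;
  const max = threshold.max ?? Infinity;
  return value >= min && value <= max;
}

// e.g. the pii_detection_v1 guardrail from the example response:
withinThreshold({ min: 0, max: 0.05 }, 0.01); // true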