Validates clinical AI outputs for safety, bias, and hallucination risks before delivery to end-users or clinicians.
The AI Safety Auditor is a critical "human-in-the-loop" simulator and automated guardrail system. It intercepts outputs from other clinical agents to verify that they meet medical safety standards, do not expose Protected Health Information (PHI) inappropriately, and are free of harmful hallucinations.
User: "Audit this generated discharge summary for safety."
Agent Action:
python3 Skills/Clinical/Safety/AI_Safety_Auditor/audit_output.py --input discharge_summary.txt --checks "all"
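To make the audit concrete, here is a minimal sketch of the kind of check such a script might run. Everything below is a hedged illustration: the `PHI_PATTERNS` regexes, the `audit_text` helper, and the report shape are assumptions for demonstration, not the actual `audit_output.py` implementation (real PHI detection needs far broader coverage, e.g. names, dates, and MRNs).

```python
import re

# Hypothetical PHI-like patterns; illustrative only, not a complete PHI detector.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # e.g. 123-45-6789
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),  # e.g. 555-867-5309
}

def audit_text(text: str) -> dict:
    """Scan text for PHI-like patterns and return a pass/fail report."""
    findings = {name: pat.findall(text) for name, pat in PHI_PATTERNS.items()}
    flagged = any(matches for matches in findings.values())
    return {"passed": not flagged, "findings": findings}

report = audit_text("Patient callback: 555-867-5309. Discharged in stable condition.")
print(report["passed"])   # flagged, so the audit fails
```

A real auditor would layer checks like this with model-based hallucination detection and return a structured report per check, which is what the `--checks "all"` flag above suggests.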