Understand what generative AI is, use it safely in health contexts, recognize hallucinations, and protect patient privacy
Generative AI (ChatGPT, Claude, Gemini, and others) is transforming how health professionals access information, draft documents, and support clinical reasoning. Yet most health education programs offer little or no training on these tools. This creates two risks: (1) professionals who reject AI entirely and fall behind, and (2) professionals who use AI blindly without understanding its limitations. This skill teaches the responsible middle path: using Gen AI as a powerful assistant while understanding exactly where it fails.
After completing this skill, you will be able to explain what generative AI is and how it works, use it safely in health contexts, recognize and document hallucinations, and protect patient privacy when using AI tools.
Inspired by the University of Helsinki's "Elements of AI" (2M+ enrollments, 26 languages), this skill is an "Elements of Health AI" equivalent: a first, structured exposure to AI designed specifically for health professionals.
This skill is for anyone in health — doctors, nurses, pharmacists, administrators, community health workers, students. No technical background required. You need only a smartphone or computer with internet access.
Open a free Gen AI tool: ChatGPT (chat.openai.com), Claude (claude.ai), or Gemini (gemini.google.com)
Complete 3 tasks using the AI tool:
Task A — Information synthesis:
"Summarize the 5 most common causes of maternal mortality in Sub-Saharan Africa, with approximate percentages, and cite sources."
Task B — Clinical reasoning support:
"A 45-year-old male presents with sudden onset headache, neck stiffness, and photophobia. What is the differential diagnosis and what initial investigations should be ordered?"
Task C — Document drafting:
"Draft a 200-word patient information leaflet about managing Type 2 diabetes through diet, appropriate for a patient with primary school education in a rural African setting."
For each response, note how accurate and complete it seems, whether it fits your local context, which claims you would need to verify before acting on them, and give it a usefulness rating from 1 to 5.
What is a hallucination? A hallucination occurs when the AI generates plausible-sounding but factually wrong information. In health, hallucinations can be dangerous.
Find 3 hallucinations. Ask the AI tool:
"What clinical trials have been conducted on [specific drug] for [specific condition] in [specific African country]?"
Choose a narrow, specific query. The AI will likely generate trial names, dates, and results that do not exist. Try to verify each claim.
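One practical starting point for verification is a quick literature search. The Python sketch below uses NCBI's public E-utilities `esearch` endpoint to count PubMed records matching a claimed paper or trial; the query wording and result handling are illustrative assumptions, and a zero count does not prove a study never existed (nor does a hit prove the AI described it accurately), so treat this as a first-pass check only.

```python
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_hit_count(claim: str) -> int:
    """Return how many PubMed records match a claimed title or trial name."""
    params = {
        "db": "pubmed",     # search the PubMed database
        "term": claim,      # the title or trial name the AI cited
        "retmode": "json",  # ask for a JSON response
        "retmax": 5,
    }
    response = requests.get(ESEARCH_URL, params=params, timeout=30)
    response.raise_for_status()
    return int(response.json()["esearchresult"]["count"])

# Hypothetical example claim; replace with the citation the AI actually produced
claim = "Randomized trial of drug X for condition Y in country Z"
print(f"PubMed hits for this claim: {pubmed_hit_count(claim)}")
```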
Types of health AI hallucinations:
- Fabricated references: papers or guidelines that do not exist
- Invented clinical trials: trial names, dates, and results that were never published
- Incorrect specifics: wrong numbers, dosages, or percentages
- Outdated information presented as current
Document your 3 hallucinations with:
- The exact claim the AI made
- Specific evidence that the claim is incorrect
- The correct answer, with a verifiable source
Apply this framework to EVERY AI output in a health context:
| Step | Action | Question to Ask |
|---|---|---|
| V — Validate source | Check if cited papers/guidelines exist | Does this reference actually exist? |
| E — Examine specifics | Verify numbers, dosages, percentages | Are these numbers correct and current? |
| R — Review reasoning | Check if the logic chain is sound | Does the conclusion follow from the evidence? |
| I — Identify bias | Consider training data limitations | Is this answer biased toward high-income country settings? |
| F — Find alternatives | Compare with at least one other source | Does UpToDate / WHO / local guidelines agree? |
| Y — Your judgment | Apply clinical/professional judgment | Does this match my experience and context? |
Exercise: Apply the VERIFY framework to the 3 AI responses from Step 2. Fill in the table for each.
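As a minimal sketch (not part of the official exercise), the Python snippet below prints a blank VERIFY checklist for each AI response so you can record your findings in a consistent format; the step names and questions simply mirror the table above.

```python
# Illustrative only: prints a blank VERIFY checklist for each AI response.
VERIFY_STEPS = {
    "V - Validate source": "Does this reference actually exist?",
    "E - Examine specifics": "Are these numbers correct and current?",
    "R - Review reasoning": "Does the conclusion follow from the evidence?",
    "I - Identify bias": "Is this answer biased toward high-income country settings?",
    "F - Find alternatives": "Does UpToDate / WHO / local guidelines agree?",
    "Y - Your judgment": "Does this match my experience and context?",
}

def print_blank_checklist(task_name: str) -> None:
    """Print a fill-in checklist for one AI response (e.g. 'Task A')."""
    print(f"\n=== VERIFY checklist: {task_name} ===")
    for step, question in VERIFY_STEPS.items():
        print(f"{step}\n  Question: {question}\n  Finding : ____________________")

for task in ("Task A", "Task B", "Task C"):
    print_blank_checklist(task)
```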
The cardinal rule: NEVER enter patient-identifiable data into public AI tools (ChatGPT, Claude, Gemini). This includes:
- Names, dates of birth, and addresses
- National ID, hospital, or insurance numbers
- Contact details and photographs
- Any combination of clinical details rare enough to identify an individual
Why? Public AI tools may:
- Store your prompts on servers outside your institution's control
- Use your inputs to train future models
- Be exposed through data breaches or legal disclosure
Safe alternatives:
- Fully de-identify the case before prompting, or use a synthetic or composite case instead
- Prefer tools your institution has formally approved under a data-processing agreement, where these exist
- Review everything you are about to paste; a simple redaction step (sketched below) can help, but it does not replace manual checking
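For illustration only, here is a minimal Python sketch of the redaction step mentioned above. It assumes a few crude regular-expression patterns for dates, phone numbers, and ID-like numbers; it will miss names and many other identifiers, so treat it as a teaching example rather than a real de-identification tool.

```python
import re

# Illustrative patterns only; they will not catch every identifier
# (especially names), so manual review is still required.
PATTERNS = {
    "date": re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d\b"),
    "id_number": re.compile(r"\b[A-Z]{0,3}\d{6,}\b"),
}

def redact(text: str) -> str:
    """Replace obvious dates, phone numbers, and ID-like numbers with placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

# Hypothetical example note (no real patient data)
note = "Seen on 12/03/2024, hospital no. MRN0045821, contact +254 700 123456."
print(redact(note))
```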
Write a 1-page privacy policy for AI use in your institution or practice. Include:
- Which AI tools, if any, are approved and for what purposes
- What data must never be entered into them
- How AI outputs must be verified before they are used
- Who is responsible for enforcing and updating the policy
The spectrum:
| Role of AI | Acceptability in health |
|---|---|
| AI as search engine | Acceptable |
| AI as assistant | Acceptable |
| AI as advisor | With caution |
| AI as decision-maker | NOT acceptable |
Write a reflection (300 words): "How could AI help in my work, and where should it never replace human judgment?" Include at least:
- Two concrete tasks from your own work where AI could save time or improve quality
- Two situations where AI must not replace human judgment
- One personal rule you will follow whenever you use AI
You must produce all 5 artifacts to complete this skill:
1. A short, jargon-free explanation of what a large language model does (prediction, not understanding)
2. Your notes and ratings from the 3 AI interaction tasks (Tasks A-C)
3. Your 3 documented hallucinations, with evidence and corrected answers
4. Completed VERIFY tables for all 3 AI responses
5. Your 1-page privacy policy and 300-word reflection
| Criterion | Excellent (3) | Adequate (2) | Needs Improvement (1) |
|---|---|---|---|
| LLM Understanding | Accurate, jargon-free, captures key concepts (prediction, not understanding) | Mostly accurate, some confusion | Fundamental misconceptions |
| AI Interaction | All 3 tasks completed with thoughtful ratings and specific observations | Tasks completed but observations shallow | Missing tasks or no critical observation |
| Hallucination Detection | 3 hallucinations documented with specific evidence of incorrectness and correct answers | 2 hallucinations found, some evidence | Fewer than 2, or evidence missing |
| VERIFY Application | All 6 steps applied to all 3 responses with specific findings | Framework applied but some steps skipped | Incomplete or superficial application |
| Privacy + Reflection | Comprehensive policy with specific rules; reflection shows nuanced thinking about AI limits | Basic policy; reflection present | Policy missing key elements; reflection superficial |
Passing score: 10/15 (at least "Adequate" on all criteria)