Structures radiology peer review and quality assurance with discrepancy classification. Use when conducting peer review, classifying discrepancies, or documenting QA findings.
Radiology peer review is mandated by The Joint Commission (OPPE/FPPE requirements), CMS Conditions of Participation, and ACR accreditation standards. Peer review serves as the primary mechanism for identifying diagnostic errors, measuring individual and departmental performance, and driving continuous quality improvement. The ACR Practice Parameter for Peer Review recommends that every radiologist have a representative sample of cases reviewed. The RADPEER system, developed by the ACR, provides a standardized scoring framework used by over 1,000 radiology practices nationwide.
Effective peer review requires structured discrepancy classification, non-punitive culture, actionable feedback, and integration with departmental quality metrics. Without a systematic approach, peer review devolves into either rubber-stamping (no errors found) or punitive witch-hunts (discouraging participation). The goal is to identify patterns — individual learning opportunities, systemic workflow issues, and equipment problems — that can be addressed to improve patient care. This skill provides the framework for conducting defensible, productive, and accreditation-compliant peer review.
| Score | Definition | Description |
|---|---|---|
| 1 | Agree with interpretation | No discrepancy identified |
| 2a | Discrepancy — understandable miss; NOT clinically significant | Finding was missed or misinterpreted, but the diagnosis was not ordinarily expected to be made and would not have changed management |
| 2b | Discrepancy — understandable miss; clinically significant | Finding was missed or misinterpreted and would have changed management at the time of original interpretation, but the miss was understandable |
| 3a | Misinterpretation — diagnosis should have been made; NOT clinically significant | Diagnosis should have been made, but the error did not impact clinical outcome |
| 3b | Misinterpretation — diagnosis should have been made; clinically significant | Diagnosis should have been made, and the error impacted or could have impacted clinical outcome |
| 4 | Non-gradable | Poor image quality, incomplete study, or inadequate clinical information prevents meaningful review |
Was a finding missed or misinterpreted?
├── No → Score 1 (Agree)
└── Yes → Should the diagnosis ordinarily have been made?
    ├── No → Discrepancy (understandable miss)
    │   ├── Clinically significant? → 2b
    │   └── Not clinically significant? → 2a
    └── Yes → Misinterpretation (should have been caught)
        ├── Clinically significant? → 3b
        └── Not clinically significant? → 3a
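This decision logic is simple enough to encode directly in a QA tool. A minimal Python sketch; the function name `radpeer_score` and its boolean parameters are illustrative choices, not part of the RADPEER specification:

```python
def radpeer_score(
    finding_missed: bool,
    should_have_been_made: bool,
    clinically_significant: bool,
) -> str:
    """Map a peer-review judgment onto a RADPEER score string.

    Mirrors the decision tree above: score 1 when the reviewer agrees,
    2a/2b for understandable misses, 3a/3b when the diagnosis should
    have been made. Score 4 (non-gradable) is decided before this call.
    """
    if not finding_missed:
        return "1"                      # agree with interpretation
    base = "3" if should_have_been_made else "2"
    modifier = "b" if clinically_significant else "a"
    return base + modifier


assert radpeer_score(False, False, False) == "1"
assert radpeer_score(True, False, True) == "2b"   # understandable but significant
assert radpeer_score(True, True, False) == "3a"   # should have been caught
```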
| Error Type | Definition | Examples |
|---|---|---|
| Perceptual | Finding visible in retrospect but not detected | Missed lung nodule, overlooked fracture |
| Cognitive | Finding detected but incorrectly interpreted | Mass called benign that was malignant; wrong classification |
| Communication | Finding correctly identified but inadequately communicated | Critical result not communicated; vague impression language |
| System | Error caused by workflow, technology, or process failure | Wrong patient images, prior studies unavailable, workstation display issue |
| Satisfaction of search | Stopped looking after finding one abnormality | Missed second finding after identifying the first |
| Factor Category | Examples |
|---|---|
| Workload | High volume at time of interpretation; fatigue |
| Image quality | Suboptimal study, motion artifact, poor technique |
| Clinical information | Inadequate history, misleading indication |
| Prior availability | No priors available that would have aided interpretation |
| Subspecialty expertise | Case outside the interpreting radiologist's primary subspecialty |
| Technology | Workstation display, hanging protocol, CAD availability |
| Time pressure | Stat reads, stroke code, trauma activation |
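For discrepancy cases, the error type and contributing factors are typically stored alongside the score so patterns can be queried later. One possible record shape, sketched in Python with enum values taken from the two tables above; the field names (`accession`, `modality`, `feedback`) are assumptions, not a mandated schema:

```python
from dataclasses import dataclass, field
from enum import Enum


class ErrorType(Enum):
    PERCEPTUAL = "perceptual"
    COGNITIVE = "cognitive"
    COMMUNICATION = "communication"
    SYSTEM = "system"
    SATISFACTION_OF_SEARCH = "satisfaction_of_search"


class ContributingFactor(Enum):
    WORKLOAD = "workload"
    IMAGE_QUALITY = "image_quality"
    CLINICAL_INFORMATION = "clinical_information"
    PRIOR_AVAILABILITY = "prior_availability"
    SUBSPECIALTY_EXPERTISE = "subspecialty_expertise"
    TECHNOLOGY = "technology"
    TIME_PRESSURE = "time_pressure"


@dataclass
class PeerReviewRecord:
    """One reviewed case, as it might be stored in the QA database."""
    accession: str
    modality: str
    radpeer_score: str                      # "1", "2a", "2b", "3a", "3b", "4"
    error_type: ErrorType | None = None     # None when the score is 1 or 4
    factors: list[ContributingFactor] = field(default_factory=list)
    feedback: str = ""
```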
| Trigger | Review Scope | Timeline |
|---|---|---|
| Clinical discrepancy reported by referring provider | Specific case + 5–10 similar recent cases | Within 48 hours |
| Surgical/pathology discordance | Specific case + related studies | Within 1 week |
| Pattern identified in routine RADPEER | Targeted review of similar case types (50+ cases) | Within 1 month |
| Near-miss or sentinel event | Root-cause analysis with full case reconstruction | Immediate |
| FPPE (new or re-credentialed radiologist) | First 50–100 cases across modalities | First 3–6 months |
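Tracking these deadlines programmatically keeps triggered reviews from slipping. A sketch that maps each trigger to its completion window from the table; the trigger keys are invented identifiers, and the open-ended FPPE window is omitted:

```python
from datetime import date, timedelta

# Completion windows from the trigger table; keys are assumed identifiers.
REVIEW_DEADLINES: dict[str, timedelta] = {
    "clinical_discrepancy": timedelta(days=2),       # within 48 hours
    "surgical_path_discordance": timedelta(weeks=1),
    "radpeer_pattern": timedelta(days=30),           # within 1 month
    "sentinel_event": timedelta(0),                  # immediate
}


def review_due_date(trigger: str, reported: date) -> date:
    """Latest acceptable completion date for a triggered review."""
    return reported + REVIEW_DEADLINES[trigger]


# Example: a referring provider reports a discrepancy today.
print(review_due_date("clinical_discrepancy", date.today()))
```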
| Metric | Calculation | Benchmark |
|---|---|---|
| Agreement rate | Score 1 / Total reviewed cases | >95% (RADPEER national aggregate) |
| Clinically significant discrepancy rate | (2b + 3b) / Total reviewed cases | <3% (ACR benchmark) |
| Major discrepancy rate | (3a + 3b) / Total reviewed cases | <1% |
| Score distribution | Percentage in each RADPEER category | Compare to department and national averages |
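These ratios are straightforward to compute from score counts. A sketch using the `PeerReviewRecord` shape above; whether score-4 (non-gradable) cases belong in the denominator is a local policy choice, and this sketch excludes them:

```python
from collections import Counter


def individual_metrics(records: list[PeerReviewRecord]) -> dict[str, float]:
    """Per-radiologist rates from the table above; benchmarks in comments."""
    gradable = [r for r in records if r.radpeer_score != "4"]
    if not gradable:
        raise ValueError("no gradable cases to score")
    n = len(gradable)
    counts = Counter(r.radpeer_score for r in gradable)
    return {
        "agreement_rate": counts["1"] / n,                               # > 0.95
        "clin_sig_discrepancy_rate": (counts["2b"] + counts["3b"]) / n,  # < 0.03
        "major_discrepancy_rate": (counts["3a"] + counts["3b"]) / n,     # < 0.01
    }
```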
| Metric | Purpose |
|---|---|
| Overall discrepancy rate by modality | Identify modality-specific quality issues |
| Discrepancy rate by exam type | Identify challenging study types |
| Error-type distribution | Perceptual vs. cognitive vs. communication vs. system |
| Trend over time | Quarterly/annual improvement tracking |
| Communication compliance | Critical-result communication rate and timeliness |
| Addendum rate | Frequency of report amendments — high rates suggest workflow issues |
| Prelim-to-final discrepancy rate | Track resident/fellow preliminary reads against the attending final interpretation |
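Departmental roll-ups are the same counts grouped by a second dimension. A sketch of the by-modality discrepancy rate (any score other than 1 among gradable cases), again over the assumed `PeerReviewRecord`:

```python
from collections import defaultdict


def discrepancy_rate_by_modality(
    records: list[PeerReviewRecord],
) -> dict[str, float]:
    """Fraction of gradable cases per modality scored 2a or worse."""
    groups: dict[str, list[str]] = defaultdict(list)
    for r in records:
        if r.radpeer_score != "4":           # exclude non-gradable
            groups[r.modality].append(r.radpeer_score)
    return {
        modality: sum(s != "1" for s in scores) / len(scores)
        for modality, scores in groups.items()
    }
```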
| Audience | Frequency | Content |
|---|---|---|
| Individual radiologist | Each reviewed case + quarterly summary | RADPEER scores, feedback, improvement suggestions |
| Department/section | Quarterly | Aggregate metrics, trends, identified patterns |
| Medical staff / credentials committee | Semi-annual (OPPE) | Individual performance summary, benchmarking |
| Hospital quality committee | Annual | Department performance, improvement initiatives |
| ACR (if DIR/QI participant) | Per program schedule | Aggregated quality data |
| Pattern | Action |
|---|---|
| Isolated 2a score | Educational feedback; no formal action |
| Recurring 2a scores in same modality | Targeted CME; case review session |
| Any 2b or 3a score | Individual case discussion; root-cause review |
| 3b score | Formal peer-review committee discussion; corrective action plan |
| Pattern of 2b+ in same area | FPPE (focused professional practice evaluation) |
| Systemic errors (multiple radiologists) | Workflow/process improvement; protocol update; technology review |
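Because the actions are keyed to score patterns, the escalation table can be expressed as ordered checks over a radiologist's recent score history. A sketch in which the thresholds for "recurring" and "pattern" (three or more here) are local-policy assumptions the table leaves open; the same-modality/same-area qualifier is dropped for brevity, and the systemic-error row is out of scope since it requires cross-radiologist aggregation:

```python
def escalation_action(recent_scores: list[str]) -> str:
    """Map a recent score history to the actions in the table above.

    Checks run from most to least serious; the ">= 3" thresholds for
    "recurring"/"pattern" are assumed local policy, not ACR rules.
    """
    serious = sum(s in {"2b", "3a", "3b"} for s in recent_scores)
    if "3b" in recent_scores:
        return "formal committee discussion; corrective action plan"
    if serious >= 3:
        return "FPPE (focused professional practice evaluation)"
    if "2b" in recent_scores or "3a" in recent_scores:
        return "individual case discussion; root-cause review"
    if recent_scores.count("2a") >= 3:
        return "targeted CME; case review session"
    if "2a" in recent_scores:
        return "educational feedback; no formal action"
    return "no action"
```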
| Role | Responsibility |
|---|---|
| Chair | Leads committee meetings; reports to medical staff leadership |
| Subspecialty representatives | Review cases in their area of expertise |
| Quality coordinator | Manages database, tracks metrics, prepares reports |
| Medical director | Oversees corrective actions; interfaces with credentialing |
| Legal counsel (as needed) | Advises on peer-review protection and privilege |