Teach meta-analysis of diagnostic test accuracy studies including sensitivity, specificity, SROC curves, and bivariate models. Use when users need to synthesize diagnostic accuracy data, understand SROC curves, or assess quality with QUADAS-2.
This skill teaches meta-analysis of diagnostic test accuracy (DTA) studies, enabling synthesis of sensitivity, specificity, and other accuracy measures across multiple studies evaluating the same diagnostic test.
Diagnostic meta-analysis differs fundamentally from intervention meta-analysis because it deals with paired accuracy measures (sensitivity and specificity) that are inherently correlated and subject to threshold effects. Specialized methods like bivariate models and SROC curves are essential.
Activate this skill when users:
- Need to synthesize sensitivity and specificity data across multiple studies of the same test
- Ask about SROC curves, bivariate models, or HSROC models
- Need to assess the quality of diagnostic accuracy studies with QUADAS-2
The 2x2 Table:

```
                  Disease Status
                    +        -
Test     +         TP       FP     →  PPV = TP/(TP+FP)
Result   -         FN       TN     →  NPV = TN/(FN+TN)
                    ↓        ↓
         Sens = TP/(TP+FN)   Spec = TN/(FP+TN)
```
Key Measures:
| Measure | Formula | Interpretation |
|---|---|---|
| Sensitivity | TP/(TP+FN) | Probability of positive test given disease |
| Specificity | TN/(FP+TN) | Probability of negative test given no disease |
| PPV | TP/(TP+FP) | Probability of disease given positive test |
| NPV | TN/(FN+TN) | Probability of no disease given negative test |
| LR+ | Sens/(1-Spec) | How much positive test increases disease odds |
| LR- | (1-Sens)/Spec | How much negative test decreases disease odds |
| DOR | (TP×TN)/(FP×FN) | Overall discriminative ability |
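The document's analyses use R, but the arithmetic behind the table is language-agnostic. As a minimal sketch, the measures can be computed directly from hypothetical 2x2 counts (note that DOR equals LR+ divided by LR-):

```python
# Hypothetical 2x2 counts for illustration
TP, FP, FN, TN = 45, 8, 5, 92

sens = TP / (TP + FN)           # probability of a positive test given disease
spec = TN / (FP + TN)           # probability of a negative test given no disease
ppv  = TP / (TP + FP)           # probability of disease given a positive test
npv  = TN / (FN + TN)           # probability of no disease given a negative test
lr_pos = sens / (1 - spec)      # how much a positive test raises disease odds
lr_neg = (1 - sens) / spec      # how much a negative test lowers disease odds
dor  = (TP * TN) / (FP * FN)    # diagnostic odds ratio = lr_pos / lr_neg

print(f"Sens={sens:.2f} Spec={spec:.2f} LR+={lr_pos:.2f} LR-={lr_neg:.3f} DOR={dor:.1f}")
```

With these counts, sensitivity is 0.90 and specificity 0.92; the identity DOR = LR+/LR- is a useful consistency check when extracting data.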
Socratic Questions:
- "If a lab lowers the positivity threshold of a test, what happens to its sensitivity? To its specificity?"
- "Can one study's sensitivity and specificity be compared directly with another's if they used different thresholds?"

Critical Concept: Sensitivity and specificity are inversely related through the diagnostic threshold. Lowering the threshold catches more true positives (higher sensitivity) but also more false positives (lower specificity).
Visualization:

```
Sensitivity
1.0 ┃           ●●●
    ┃        ●●
    ┃      ●●
    ┃    ●
    ┃  ●
    ┃ ●
0.0 ┗━━━━━━━━━━━━━
    0.0          1.0
      1 - Specificity

Each ● = one study
Curve through the ● = SROC (Summary ROC)
```
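The threshold trade-off behind this scatter can be simulated. In this hypothetical sketch, a continuous biomarker is normally distributed in diseased and healthy patients; raising the positivity threshold lowers sensitivity while raising specificity, tracing out points along an ROC curve:

```python
from statistics import NormalDist

# Hypothetical continuous biomarker: diseased ~ N(2, 1), healthy ~ N(0, 1)
diseased, healthy = NormalDist(2.0, 1.0), NormalDist(0.0, 1.0)

pairs = []
for t in (0.5, 1.0, 1.5):
    sens = 1 - diseased.cdf(t)   # P(marker > t | disease)
    spec = healthy.cdf(t)        # P(marker <= t | no disease)
    pairs.append((sens, spec))
    print(f"threshold={t}: sens={sens:.2f}, spec={spec:.2f}")
```

Studies using different thresholds therefore land at different points in ROC space even when the underlying test is identical, which is why a summary curve, not a single pooled point, is often the right target.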
Why It Matters: Because of threshold effects, pooling sensitivities and specificities separately ignores their correlation and can produce misleading summaries; the trade-off itself must be modeled.

What is SROC? The Summary ROC curve models the trade-off between sensitivity and specificity across studies that used different thresholds, summarizing accuracy as a curve in ROC space rather than a single point.

Key SROC Elements:
- Summary operating point (pooled sensitivity and specificity)
- Confidence region around the summary point
- Prediction region (where a new study is likely to fall)
- Area under the curve (AUC)
Interpretation of AUC:
| AUC | Interpretation |
|---|---|
| 0.9-1.0 | Excellent |
| 0.8-0.9 | Good |
| 0.7-0.8 | Fair |
| 0.6-0.7 | Poor |
| 0.5-0.6 | Fail |
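AUC is the integral of the ROC curve over the false positive rate. As a toy numeric sketch (not the model-based AUC that DTA software reports), a concave curve sens = fpr^0.25 can be integrated with the trapezoidal rule; its exact AUC is 1/1.25 = 0.8, i.e. "good" on the scale above:

```python
# Toy concave ROC curve sens = fpr**0.25, integrated by the trapezoidal rule
n = 1000
fpr = [i / n for i in range(n + 1)]
sens = [x ** 0.25 for x in fpr]
auc = sum((sens[i] + sens[i + 1]) / 2 * (fpr[i + 1] - fpr[i]) for i in range(n))
print(f"AUC ~= {auc:.3f}")
```

In practice the SROC AUC comes from the fitted bivariate or HSROC model, but the interpretation against the table above is the same.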
Bivariate Model (Reitsma et al. 2005): Models logit sensitivity and logit false positive rate jointly as bivariate normal random effects, explicitly estimating the correlation between them; yields a summary sensitivity and specificity with confidence and prediction regions.

HSROC Model (Rutter & Gatsonis 2001): A hierarchical model that parameterizes each study by an accuracy and a threshold parameter, producing a summary ROC curve; without covariates it is mathematically equivalent to the bivariate model.
When to Use Each:
| Situation | Recommended Model |
|---|---|
| Summary sens/spec needed | Bivariate |
| Comparing tests at same threshold | Bivariate |
| Exploring threshold variation | HSROC |
| Few studies (<4) | Consider simpler methods |
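A full bivariate fit requires specialized software (mada, metafor, or similar), but the logit-scale machinery underneath can be illustrated. The sketch below does a naive univariate inverse-variance pool of logit sensitivity and logit specificity on the five-study dataset used in the mada example in this document; it deliberately ignores the sens-spec correlation and between-study variance, so it is a teaching aid, not a substitute for the bivariate model:

```python
import math

# (TP, FP, FN, TN) per study - same counts as the mada example in this document
studies = [(45, 8, 5, 92), (52, 12, 8, 78), (38, 5, 7, 100),
           (61, 15, 9, 65), (44, 9, 6, 91)]

def pool_logit(events, nonevents):
    """Fixed-effect inverse-variance pool on the logit scale."""
    logits = [math.log(e / n) for e, n in zip(events, nonevents)]        # logit = log(e/n)
    weights = [1 / (1 / e + 1 / n) for e, n in zip(events, nonevents)]   # 1 / variance
    pooled = sum(w * y for w, y in zip(weights, logits)) / sum(weights)
    return 1 / (1 + math.exp(-pooled))                                   # back-transform

sens = pool_logit([s[0] for s in studies], [s[2] for s in studies])  # TP vs FN
spec = pool_logit([s[3] for s in studies], [s[1] for s in studies])  # TN vs FP
print(f"naive pooled sens={sens:.3f}, spec={spec:.3f}")
```

The bivariate model replaces these two independent pools with a joint random-effects distribution, which is what recovers the correlation and the prediction region.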
Using mada package:

```r
library(mada)

# Prepare data (2x2 table counts)
data <- data.frame(
  study = c("Study1", "Study2", "Study3", "Study4", "Study5"),
  TP = c(45, 52, 38, 61, 44),
  FP = c(8, 12, 5, 15, 9),
  FN = c(5, 8, 7, 9, 6),
  TN = c(92, 78, 100, 65, 91)
)

# Bivariate (Reitsma) model
fit <- reitsma(data)
summary(fit)

# Summary operating point (coefficients are on the logit scale;
# back-transform with plogis, noting tfpr is the logit false positive rate)
summary(fit)$coefficients

# SROC plot with study-level points
plot(fit, sroclwd = 2, main = "SROC Curve for Diagnostic Test X")
points(fpr(data), sens(data), pch = 19)

# Add the prediction region alongside the confidence region
plot(fit, sroclwd = 2, predict = TRUE)
```
Forest Plots for Sens/Spec:

```r
# mada's forest plots operate on madad objects (study-level descriptive
# estimates), not on the fitted reitsma model
dd <- madad(data)
forest(dd, type = "sens", main = "Sensitivity")
forest(dd, type = "spec", main = "Specificity")
```
Using metafor for DTA:

```r
library(metafor)

# Logit-transformed sensitivity and specificity
# (add 0.5 to every cell first if any count is zero)
data$yi_sens <- log(data$TP / data$FN)  # logit(sens) = log(TP/FN)
data$yi_spec <- log(data$TN / data$FP)  # logit(spec) = log(TN/FP)

# Approximate sampling variances on the logit scale
data$vi_sens <- 1/data$TP + 1/data$FN
data$vi_spec <- 1/data$TN + 1/data$FP

# A true bivariate model requires rma.mv with a stacked data layout
# (more complex setup required - see the metafor documentation)
```
QUADAS-2 Domains:
| Domain | Risk of Bias | Applicability |
|---|---|---|
| Patient Selection | ✓ | ✓ |
| Index Test | ✓ | ✓ |
| Reference Standard | ✓ | ✓ |
| Flow and Timing | ✓ | - |
Key Signaling Questions:

Patient Selection:
- Was a consecutive or random sample of patients enrolled?
- Was a case-control design avoided?
- Did the study avoid inappropriate exclusions?

Index Test:
- Were index test results interpreted without knowledge of the reference standard results?
- If a threshold was used, was it pre-specified?

Reference Standard:
- Is the reference standard likely to correctly classify the target condition?
- Were reference standard results interpreted without knowledge of the index test results?

Flow and Timing:
- Was there an appropriate interval between the index test and reference standard?
- Did all patients receive the same reference standard?
- Were all patients included in the analysis?
Sources of Heterogeneity in DTA:
- Threshold effects (different positivity cutoffs across studies)
- Patient spectrum (disease severity, prevalence, setting)
- Test protocol (test version, operator experience, equipment)
- Reference standard differences
- Study design (case-control vs. consecutive enrollment)
Investigating Heterogeneity:

```r
# Meta-regression in the bivariate model
# ("covariate" is a placeholder for a study-level variable in your data)
fit_cov <- reitsma(data,
                   formula = cbind(tsens, tfpr) ~ covariate)
summary(fit_cov)

# Compare models with and without the covariate
anova(fit, fit_cov)
```
Visual Assessment:

```r
# Plot studies in ROC space with confidence ellipses - look for clustering
ROCellipse(data, pch = 19)
# If studies cluster in different regions,
# investigate sources of heterogeneity
```
Essential Elements (PRISMA-DTA):
Example Results Section:
"The bivariate meta-analysis of 12 studies (N=2,450 patients) yielded a summary sensitivity of 0.85 (95% CI: 0.79-0.90) and specificity of 0.92 (95% CI: 0.87-0.95). The positive likelihood ratio was 10.6 (95% CI: 6.8-16.5) and negative likelihood ratio was 0.16 (95% CI: 0.11-0.24). The area under the SROC curve was 0.94, indicating excellent overall accuracy. Substantial heterogeneity was observed for sensitivity (I²=78%) but not specificity (I²=32%). QUADAS-2 assessment identified high risk of bias in patient selection for 4 studies due to case-control design."
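Reported numbers like these can be sanity-checked for internal consistency. A quick arithmetic check (the exact bivariate pooled LRs can differ slightly from this back-calculation, but they should be close):

```python
# Back-calculate likelihood ratios from the reported summary sens/spec
sens, spec = 0.85, 0.92
lr_pos = sens / (1 - spec)   # expect ~10.6, matching the reported LR+
lr_neg = (1 - sens) / spec   # expect ~0.16, matching the reported LR-
print(round(lr_pos, 1), round(lr_neg, 2))
```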
Basic: "Why can't we simply pool sensitivities using standard meta-analysis methods?"
Intermediate: "What does the prediction region on an SROC curve represent?"
Advanced: "A diagnostic MA shows high sensitivity (0.95) but moderate specificity (0.70). How would you advise using this test clinically?"
Common Misconceptions:
- "Higher AUC always means better test" - AUC summarizes the whole curve; clinical value depends on sensitivity and specificity at the operationally relevant threshold.
- "We should only include studies with the same threshold" - bivariate and HSROC models are designed to handle threshold variation across studies.
- "Sensitivity and specificity are fixed properties of a test" - both vary with the threshold, the patient spectrum, and the setting.
User: "I have 8 studies evaluating a rapid antigen test for COVID-19. How do I combine the results?"
Response Framework:
1. Extract a 2x2 table (TP, FP, FN, TN) from each study
2. Assess study quality with QUADAS-2
3. Fit a bivariate model (e.g., mada's reitsma) for summary sensitivity and specificity
4. Plot the SROC curve with confidence and prediction regions
5. Investigate heterogeneity (thresholds, populations, test versions)
6. Report following PRISMA-DTA
Glass (the teaching agent) MUST adapt this content to the learner:
Example Adaptations:
- meta-analysis-fundamentals - Basic concepts prerequisite
- heterogeneity-analysis - Understanding between-study variation
- data-extraction - Extracting 2x2 tables from studies
- grade-assessment - Rating certainty of DTA evidence