Advises on when to use DDM vs. LBA vs. race models for choice-RT data based on experimental design and research goals
This skill encodes expert knowledge for selecting among evidence accumulation models (EAMs) when analyzing choice response-time (RT) data. A competent programmer without cognitive science training would typically analyze only mean RT and accuracy separately, missing the critical insight that RT distributions and speed-accuracy tradeoffs carry rich information about latent cognitive processes. Selecting the wrong EAM -- or applying one when the data violate its assumptions -- leads to uninterpretable or misleading parameter estimates.
Use this skill when:
Do not use this skill when:
Before executing the domain-specific steps below, you MUST:
For detailed methodology guidance, see the research-literacy skill.
This skill was generated by AI from academic literature. All parameters, thresholds, and citations require independent verification before use in research. If you find errors, please open an issue.
All evidence accumulation models share a common framework: on each trial, noisy evidence is accumulated over time until a decision boundary is reached, triggering a response. The models differ in their assumptions about accumulation architecture.
| Parameter | Cognitive Interpretation | Typical Manipulation |
|---|---|---|
| Drift rate (v) | Quality/rate of evidence extraction | Stimulus difficulty, S/N ratio (Ratcliff & McKoon, 2008) |
| Boundary separation (a) | Speed-accuracy tradeoff / response caution | Speed vs. accuracy instructions (Ratcliff & Rouder, 1998) |
| Non-decision time (Ter / t0) | Encoding + motor execution time | Response modality, stimulus quality (Ratcliff & McKoon, 2008) |
| Starting point (z) | Prior bias toward one response | Prior probability, payoff asymmetry (Ratcliff, 1985) |
| Drift rate variability (eta/sv) | Across-trial variability in evidence quality | Individual or item differences (Ratcliff, 1978) |
| Non-decision time variability (st0) | Variability in encoding/motor processes | (Ratcliff & Tuerlinckx, 2002) |
How many response alternatives does the task have?
|
+-- TWO alternatives
| |
| +-- Do you need full distributional analysis?
| | |
| | +-- YES --> Do you have sufficient trial counts (>50/condition)?
| | | |
| | | +-- YES --> Use the FULL DIFFUSION MODEL (DDM)
| | | | (Ratcliff, 1978; Ratcliff & McKoon, 2008)
| | | |
| | | +-- NO (fewer trials) --> Use EZ-DIFFUSION
| | | (Wagenmakers et al., 2007)
| | |
| | +-- NO (means/summaries sufficient)
| | --> Use EZ-DIFFUSION for simplicity
| | (Wagenmakers et al., 2007)
| |
| +-- Is response bias (starting point) a key research question?
| |
| +-- YES --> Use FULL DDM with z parameter free
| | (Ratcliff, 1985; White & Poldrack, 2014)
| |
| +-- NO --> DDM with z fixed at a/2 (unbiased)
|
+-- MORE THAN TWO alternatives
| |
| +-- Use the LINEAR BALLISTIC ACCUMULATOR (LBA)
| | (Brown & Heathcote, 2008)
| | or RACING DIFFUSION MODEL
| | (Tillman et al., 2020)
| |
| +-- Do accumulators need to be independent?
| |
| +-- YES --> LBA or RACING DIFFUSION
| |     (both use independent accumulators by design)
| |
| +-- NO (competition matters) --> LEAKY COMPETING ACCUMULATOR
|       (LCA; Usher & McClelland, 2001)
|
+-- SPECIAL CASES
|
+-- Extremely fast RTs (<200 ms median)?
| --> EAMs are likely inappropriate; these may be anticipatory
| responses (Luce, 1986)
|
+-- No speed pressure at all (untimed)?
| --> EAMs are inappropriate; use accuracy-based models
|
+-- Go/no-go task?
--> Use the DDM with absorbing boundary modifications
or the SSRT framework (Verbruggen & Logan, 2008)
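The decision tree above can be sketched as a small helper function. This is a toy encoding, not a substitute for judgment; `recommend_eam`, its argument names, and the trial-count threshold are illustrative, taken from the branches above:

```python
def recommend_eam(n_alternatives, trials_per_condition,
                  need_distributions=True, median_rt_ms=600,
                  speeded=True):
    """Toy encoding of the model-selection decision tree (illustrative)."""
    if median_rt_ms < 200:
        return "No EAM: RTs this fast are likely anticipatory (Luce, 1986)"
    if not speeded:
        return "No EAM: untimed task; use accuracy-based models"
    if n_alternatives > 2:
        return "LBA or racing diffusion (LCA if competition matters)"
    # Two alternatives from here on
    if need_distributions and trials_per_condition > 50:
        return "Full DDM"
    return "EZ-diffusion"

print(recommend_eam(2, 120))   # Full DDM
print(recommend_eam(4, 80))    # LBA or racing diffusion (LCA if competition matters)
print(recommend_eam(2, 30))    # EZ-diffusion
```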
The drift diffusion model (DDM) is the canonical EAM for two-choice tasks (Ratcliff, 1978; Ratcliff & McKoon, 2008).
Architecture: A single accumulator drifts between two absorbing boundaries. Evidence for option A moves the process toward the upper boundary; evidence for option B moves it toward the lower boundary.
Full DDM parameters (7 parameters; Ratcliff & Tuerlinckx, 2002):
| Parameter | Symbol | Typical Range | Role |
|---|---|---|---|
| Drift rate | v | -5 to 5 (Ratcliff & McKoon, 2008) | Evidence quality |
| Boundary separation | a | 0.5 to 2.5 (Ratcliff & McKoon, 2008) | Response caution |
| Non-decision time | Ter | 0.1 to 0.5 s (Ratcliff & McKoon, 2008) | Encoding + motor |
| Starting point | z | 0 to a (typically a/2) | Prior bias |
| Drift variability | eta (sv) | 0 to 2 (Ratcliff, 1978) | Cross-trial drift noise |
| Starting point variability | sz | 0 to a | Cross-trial bias noise |
| Non-decision variability | st0 | 0 to 0.3 s | Cross-trial Ter noise |
When to use DDM:
Key assumption: Only two response options. The DDM cannot natively handle >2 choices.
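The single-accumulator architecture can be illustrated with a minimal Euler-Maruyama simulation of one trial. This is a sketch, not a fitting tool: the function name, the step size `dt`, and the parameter values are illustrative (chosen within the typical ranges tabulated above, with the conventional s = 0.1 noise scaling):

```python
import numpy as np

_rng = np.random.default_rng(0)

def simulate_ddm_trial(v=0.2, a=0.12, z=None, ter=0.3, s=0.1, dt=0.001):
    """One DDM trial via Euler-Maruyama: evidence x drifts at rate v
    between absorbing boundaries 0 and a, starting at z (a/2 = unbiased).
    Returns (choice, rt) in seconds. Illustrative parameter values."""
    x = a / 2.0 if z is None else z
    t = 0.0
    while 0.0 < x < a:
        # drift plus within-trial Gaussian noise, scaled by sqrt(dt)
        x += v * dt + s * np.sqrt(dt) * _rng.standard_normal()
        t += dt
    choice = "upper" if x >= a else "lower"
    return choice, t + ter  # decision time plus non-decision time

choice, rt = simulate_ddm_trial()
```

Raising `a` (more caution) lengthens simulated RTs and raises accuracy; raising `v` (easier stimuli) shortens RTs and raises accuracy — the dissociation that mean-RT analysis alone cannot recover.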
EZ-diffusion is a simplified closed-form estimator for three DDM parameters (Wagenmakers et al., 2007).
Estimated parameters: v (drift rate), a (boundary separation), Ter (non-decision time).
Input: Only three summary statistics per condition -- mean RT for correct responses (MRT), variance of RT for correct responses (VRT), and accuracy (Pc).
Closed-form equations (Wagenmakers et al., 2007, Eq. 1-3; see references/ez-diffusion-formulas.md):
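The inversion can be sketched in a few lines. This follows Wagenmakers et al.'s published equations with the conventional s = 0.1 scaling, but verify it against the reference file before research use; the function name is ours:

```python
import math

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """EZ-diffusion inversion: summary statistics -> (v, a, Ter).

    pc:  proportion correct (not exactly 0, 0.5, or 1)
    vrt: variance of correct-response RTs (s^2)
    mrt: mean of correct-response RTs (s)
    """
    if pc in (0.0, 0.5, 1.0):
        raise ValueError("apply an edge correction to pc first")
    L = math.log(pc / (1.0 - pc))                 # logit of accuracy
    # x can go negative near chance accuracy; real implementations
    # apply edge corrections before this point
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = math.copysign(s * x**0.25, pc - 0.5)      # drift rate
    a = s**2 * L / v                              # boundary separation
    y = -v * a / s**2
    mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))
    ter = mrt - mdt                               # non-decision time
    return v, a, ter

# Worked example from Wagenmakers et al. (2007):
v, a, ter = ez_diffusion(pc=0.802, vrt=0.112, mrt=0.723)
# v ~= 0.0999, a ~= 0.140, ter ~= 0.300
```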
When to use EZ-diffusion:
Limitations:
The linear ballistic accumulator (LBA) is a multi-alternative accumulator model (Brown & Heathcote, 2008).
Architecture: N independent linear accumulators (one per response option) race to a common threshold. The first accumulator to reach threshold triggers the corresponding response. Accumulation is ballistic (no within-trial noise) -- all variability comes from across-trial variation in drift rates and starting points.
Parameters per accumulator (Brown & Heathcote, 2008):
| Parameter | Symbol | Role |
|---|---|---|
| Mean drift rate | vi | Evidence accumulation rate for option i |
| Drift rate variability | s | Across-trial standard deviation of drift (often fixed to 1 for scaling) |
| Response threshold | b | Evidence needed to trigger response |
| Maximum starting point | A | Upper bound of uniform start-point distribution [0, A] |
| Non-decision time | t0 | Encoding + motor time |
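Because accumulation is ballistic, an LBA trial reduces to drawing start points and drifts, then computing each accumulator's finishing time in closed form. A sketch under the parameterization above (the function name and parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_lba(v_means, b=1.2, A=0.5, s=1.0, t0=0.2, n_trials=1000):
    """Simulate the LBA: N independent linear accumulators race to a
    common threshold b. All variability is across-trial (ballistic
    within trial). Returns (responses, rts); illustrative parameters."""
    n_acc = len(v_means)
    # Across-trial variability: uniform start points and Gaussian drifts
    k = rng.uniform(0.0, A, size=(n_trials, n_acc))
    d = rng.normal(v_means, s, size=(n_trials, n_acc))
    # Accumulators with non-positive drift never reach threshold
    with np.errstate(divide="ignore"):
        finish = np.where(d > 0, (b - k) / d, np.inf)
    responses = np.argmin(finish, axis=1)        # winner of the race
    rts = finish[np.arange(n_trials), responses] + t0
    # Drop the rare trials where no accumulator finished
    ok = np.isfinite(rts)
    return responses[ok], rts[ok]

# Three alternatives; option 0 has the strongest evidence
resp, rts = simulate_lba(v_means=[1.0, 0.6, 0.6])
```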
When to use LBA:
Classical race model (Pike, 1966; Townsend & Ashby, 1983): Multiple accumulators race independently; first to finish wins. Unlike DDM, there is no competition between accumulators.
When to use:
Limitation: The standard race model cannot account for speed-accuracy tradeoff without additional assumptions (Ratcliff & McKoon, 2008).
When comparing model fits, use information criteria that penalize complexity:
| Method | When to Use | Citation |
|---|---|---|
| BIC | Frequentist model comparison; favors parsimony; appropriate for large N | Schwarz, 1978 |
| AIC | Less conservative than BIC; better for prediction | Akaike, 1974 |
| DIC | Bayesian hierarchical models (e.g., HDDM) | Spiegelhalter et al., 2002 |
| WAIC | Bayesian; more stable than DIC for hierarchical models | Watanabe, 2010 |
| Bayes factor | Direct comparison of model evidence; interpretable strength | Kass & Raftery, 1995 |
Preferred approach: Fit competing models and compare using WAIC or Bayes factors in a Bayesian framework (Annis et al., 2017). Lower WAIC indicates better expected out-of-sample predictive accuracy.
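For the frequentist criteria the computation from a model's maximized log-likelihood is one line each; lower is better for both. The log-likelihood values below are hypothetical:

```python
import math

def aic(log_lik, k):
    """Akaike information criterion (Akaike, 1974); k = free parameters."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian information criterion (Schwarz, 1978); n = trial count."""
    return k * math.log(n) - 2 * log_lik

# Hypothetical comparison: 4-parameter DDM vs. 7-parameter full DDM,
# fit to the same 800 trials
print(aic(-1520.3, 4), aic(-1512.8, 7))
print(bic(-1520.3, 4, 800), bic(-1512.8, 7, 800))
```

Note how BIC's log(n) penalty punishes the three extra variability parameters more heavily than AIC does, which is why the two criteria can disagree on whether the full DDM is worth its complexity.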
Before interpreting fitted parameters, always conduct a parameter recovery study (Heathcote et al., 2015):
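A minimal recovery check follows the pattern: simulate data from known parameter values, re-estimate, and compare. The sketch below pairs an Euler-Maruyama DDM simulator with the EZ-diffusion inversion; the function names, trial count, and tolerances are illustrative, and a real recovery study would repeat this across a grid of parameter values:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_ddm(v, a, ter, n_trials, s=0.1, dt=0.001, max_t=10.0):
    """Vectorized Euler-Maruyama DDM with unbiased start z = a/2.
    Returns (correct flags, RTs)."""
    x = np.full(n_trials, a / 2.0)
    t = np.zeros(n_trials)
    active = np.ones(n_trials, dtype=bool)
    while active.any() and t.max() < max_t:
        n = active.sum()
        x[active] += v * dt + s * np.sqrt(dt) * rng.standard_normal(n)
        t[active] += dt
        active &= (x > 0.0) & (x < a)   # deactivate absorbed trials
    return (x >= a), t + ter

def ez_recover(correct, rts, s=0.1):
    """EZ-diffusion inversion from correct-trial summary statistics."""
    pc = correct.mean()
    mrt, vrt = rts[correct].mean(), rts[correct].var()
    L = np.log(pc / (1.0 - pc))
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = np.sign(pc - 0.5) * s * x**0.25
    a = s**2 * L / v
    y = -v * a / s**2
    ter = mrt - (a / (2 * v)) * (1 - np.exp(y)) / (1 + np.exp(y))
    return v, a, ter

# 1. Simulate from known values; 2. re-estimate; 3. compare.
true_v, true_a, true_ter = 0.25, 0.12, 0.30
correct, rts = simulate_ddm(true_v, true_a, true_ter, n_trials=5000)
v_hat, a_hat, ter_hat = ez_recover(correct, rts)
```

If recovered values track the generating values across the design's realistic parameter range (and trial counts), the planned analysis is informative; if not, parameter estimates should not be interpreted.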
| Software | Model | Language | Citation |
|---|---|---|---|
| HDDM | DDM (hierarchical Bayesian) | Python | Wiecki et al., 2013 |
| fast-dm | DDM (frequentist, fast) | C / R wrapper | Voss & Voss, 2007 |
| EZ-diffusion | EZ | R / any | Wagenmakers et al., 2007 |
| rtdists | DDM, LBA | R | Singmann et al., 2016 |
| PyDDM | DDM (flexible extensions) | Python | Shinn et al., 2020 |
| DMC | LBA, DDM, racing diffusion | R | Heathcote et al., 2019 |
Analyzing mean RT only: Mean RT conflates drift rate, boundary separation, and non-decision time. Two conditions with identical mean RTs can have very different latent processes (Ratcliff & McKoon, 2008).
Applying DDM to >2-choice tasks: The standard DDM is defined for two-choice tasks only. For 3+ alternatives, use LBA, racing diffusion, or the multi-alternative DDM extension (Ratcliff & Starns, 2013).
Insufficient trial counts: The full DDM requires at least 40-50 trials per condition for group-level estimates and 200+ for stable individual estimates (Ratcliff & Childers, 2015; Lerche et al., 2017). With fewer trials, use EZ-diffusion or hierarchical Bayesian fitting.
Ignoring RT distribution shape: EAMs predict specific distributional forms (right-skewed). If your RT distribution is bimodal or has a long left tail, check for contaminant processes (e.g., fast guesses) before fitting (Ratcliff & Tuerlinckx, 2002).
Not trimming outlier RTs: Extremely fast (<200 ms) or slow (>3000 ms for speeded tasks) RTs likely reflect processes outside the model. Standard practice: trim RTs below 200 ms and above a task-appropriate upper bound (Ratcliff & McKoon, 2008).
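A typical trimming step looks like the sketch below; the cutoffs mirror the standard practice above but should be justified per task, and the function name is ours:

```python
import numpy as np

def trim_rts(rts, lower=0.200, upper=3.000):
    """Drop RTs outside [lower, upper] seconds; report what was removed."""
    rts = np.asarray(rts)
    keep = (rts >= lower) & (rts <= upper)
    print(f"trimmed {100 * (1 - keep.mean()):.1f}% of trials")
    return rts[keep]

# Example: one anticipatory and one very slow response removed
clean = trim_rts([0.150, 0.420, 0.510, 3.500, 0.610])
```

Always report the trimming rule and the percentage of trials removed; if more than a few percent are trimmed, inspect the distribution for contaminant processes rather than trimming harder.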
Fitting too many free parameters: The full 7-parameter DDM is often overparameterized. Fix parameters that are not theoretically relevant (e.g., fix sz = 0 and st0 = 0 as a starting point; Ratcliff & Childers, 2015).
Overlooking EZ-diffusion's assumptions: EZ-diffusion assumes no across-trial variability in drift or starting point and an unbiased starting point (z = a/2). If your design manipulates prior probability (which shifts the starting point), EZ cannot capture this (Wagenmakers et al., 2007).
Skipping parameter recovery: Without recovery checks, you cannot know whether your data are informative for the parameters you want to interpret (Heathcote et al., 2015).
Based on Dutilh et al. (2019) and current best practices:
See references/ez-diffusion-formulas.md for EZ-diffusion closed-form equations and worked examples.