The experimental design is the single most important determinant of an fMRI study's statistical power, interpretability, and scientific value. Choosing between block, event-related, and mixed designs involves trade-offs between detection power and estimation efficiency that depend on the research question. Similarly, the choice of inter-stimulus interval (ISI), jittering strategy, condition ordering, and trial count directly determines whether the BOLD signal of interest can be reliably detected.
A competent programmer without neuroimaging training would not know that block designs provide higher detection power but cannot estimate HRF shape, that exponentially distributed jitter is more efficient than uniform jitter, or that the BOLD response takes 12-16 seconds to return to baseline. This skill encodes those domain-specific design decisions.
When to Use This Skill
Planning a new task-based fMRI experiment
Choosing between block, event-related, or mixed designs
Optimizing inter-stimulus interval and jittering strategy
Calculating design efficiency for contrast detection
Determining minimum trial counts per condition
Integrating behavioral task constraints with fMRI timing requirements
Related Skills
Reviewing or troubleshooting an existing fMRI task design
Research Planning Protocol
Before executing the domain-specific steps below, you MUST:
State the research question — What specific question is this analysis/paradigm addressing?
Justify the method choice — Why is this approach appropriate? What alternatives were considered?
Declare expected outcomes — What results would support vs. refute the hypothesis?
Note assumptions and limitations — What does this method assume? Where could it mislead?
Present the plan to the user and WAIT for confirmation before proceeding.
For detailed methodology guidance, see the research-literacy skill.
⚠️ Verification Notice
This skill was generated by AI from academic literature. All parameters, thresholds, and citations require independent verification before use in research. If you find errors, please open an issue.
Design Type Selection
Comparison of Design Types
| Design Type | Detection Power | Estimation Efficiency | Trial-Level Analysis | Best For | Source |
|---|---|---|---|---|---|
| Block | High | Low | No | Detecting whether a region is active | Friston et al., 1999; Petersen & Dubis, 2012 |
| Event-related (slow) | Moderate | High | Yes | Estimating HRF shape | Dale, 1999 |
| Rapid event-related | Moderate-High | Moderate-High | Yes | Flexible trial-by-trial analysis with good power | Dale, 1999; Friston et al., 1999 |
| Mixed (hybrid) | High (sustained) + Moderate (transient) | Moderate | Yes (transient component) | Separating sustained and transient effects | Petersen & Dubis, 2012 |
Decision Tree
What is the primary goal?
|
+-- Detect presence/absence of activation (localization)
| |
| +-- Is HRF shape estimation needed?
| |
| +-- NO --> Block design (maximum detection power)
| |
| +-- YES --> Mixed design (blocks + events within blocks)
|
+-- Estimate trial-by-trial neural responses
| |
| +-- Are there enough trials (>40 per condition)?
| |
| +-- YES --> Rapid event-related design (jittered ISI)
| |
| +-- NO --> Slow event-related design (ISI > 12 s)
|
+-- Separate sustained state vs. transient item effects
        --> Mixed design (Petersen & Dubis, 2012)
Block Design Parameters
Optimal block duration: 15-20 seconds for maximum detection power (Maus et al., 2010; Bandettini et al., 1993). Shorter blocks (< 12 s) reduce sensitivity because the BOLD response does not reach steady state. Longer blocks (> 30 s) increase habituation and strategy effects (Poldrack et al., 2011, Ch. 3)
Minimum block duration: 12 seconds to allow the BOLD signal to reach near-plateau (Bandettini et al., 1993)
Number of blocks per condition: At least 4-6 blocks per condition per run for stable estimates (Poldrack et al., 2011, Ch. 3)
Condition alternation: Alternate conditions (ABAB or ABCABC) rather than grouping (AAABBB), which confounds condition with time (Poldrack et al., 2011, Ch. 3)
Rest blocks: Include rest/fixation blocks of at least 12-16 seconds between active blocks to allow BOLD signal return to baseline (Glover, 1999)
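The block parameters above can be combined into a simple timing schedule. The sketch below is purely illustrative (the function name and exact default durations are ours, not from the cited sources); the defaults follow the guidelines of 15-20 s blocks, 12-16 s rest, and strict condition alternation.

```python
def block_timing(n_blocks_per_cond=5, block_dur=16.0, rest_dur=14.0,
                 conditions=("A", "B")):
    """Alternating (ABAB...) block schedule with rest between active blocks.

    Illustrative helper: defaults follow the guidelines above (15-20 s
    blocks, 12-16 s rest, at least 4-6 blocks per condition).
    Returns a list of (onset_s, condition, duration_s) tuples.
    """
    schedule = []
    t = rest_dur  # lead-in fixation before the first block
    for i in range(n_blocks_per_cond * len(conditions)):
        cond = conditions[i % len(conditions)]  # strict alternation, never AAABBB
        schedule.append((t, cond, block_dur))
        t += block_dur + rest_dur  # rest block after every active block
    return schedule

sched = block_timing()
print(sched[:2])  # [(14.0, 'A', 16.0), (44.0, 'B', 16.0)]
```

Each block's onset is separated from the next by block plus rest duration (30 s with these defaults), keeping the design well above the 12 s BOLD-plateau minimum.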
Event-Related Design Parameters
Inter-Stimulus Interval (ISI) and Jittering
The ISI between events is critical for statistical efficiency and BOLD signal separability.
| Parameter | Recommendation | Source |
|---|---|---|
| Minimum ISI | 2-4 seconds (for partial BOLD recovery) | Dale, 1999; Glover, 1999 |
| Mean ISI for rapid designs | 4-6 seconds | Dale, 1999 |
| ISI range for jittered designs | 2-8 seconds | Dale, 1999; Wager & Nichols, 2003 |
| Null/fixation trials | 20-33% of total events | Friston et al., 1999 |
Jittering strategies (from most to least recommended):
Optimized sequences: Use design optimization tools (optseq2, NeuroDesign) to maximize efficiency for specific contrasts (Dale, 1999; Durnez et al., 2017)
Truncated exponential distribution: More short ISIs, fewer long ISIs; near-optimal efficiency (Hagberg et al., 2001)
Uniform random: Equal probability across ISI range; acceptable but suboptimal
Domain warning: Jittered designs can be over 10x more efficient than fixed-ISI designs with the same mean interval (Dale, 1999). Always jitter for event-related fMRI.
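The truncated-exponential strategy can be sketched with simple rejection sampling. The parameter choices below (2-8 s range, mean consistent with the table above) and the function name are illustrative assumptions, not values prescribed by the cited papers.

```python
import numpy as np

def truncated_exp_isis(n, lo=2.0, hi=8.0, scale=2.0, seed=0):
    """Draw n ISIs from an exponential distribution shifted to `lo` and
    truncated at `hi`: many short ISIs, fewer long ones.

    Illustrative sketch; `scale` is the pre-truncation exponential mean.
    Rejection sampling: draws above `hi` are discarded.
    """
    rng = np.random.default_rng(seed)
    isis = []
    while len(isis) < n:
        draw = lo + rng.exponential(scale)
        if draw <= hi:
            isis.append(draw)
    return np.array(isis)

isis = truncated_exp_isis(200)
print(f"mean ISI {isis.mean():.2f} s in [{isis.min():.2f}, {isis.max():.2f}] s")
```

For a real experiment, the resulting ISI sequence should still be checked (or generated) with an optimizer such as optseq2 or NeuroDesign against the specific contrasts of interest.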
HRF Timing Constraints
The BOLD hemodynamic response imposes hard constraints on fMRI design timing:
HRF peak: 4-6 seconds after neural event onset (Glover, 1999)
Return to baseline: 12-16 seconds after a brief event (Glover, 1999)
BOLD nonlinearity: Responses to stimuli separated by < 2 seconds sum nonlinearly (reduced amplitude), making them harder to separate (Glover, 1999; Wager & Nichols, 2003)
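These constraints follow from the canonical HRF shape, which can be visualized with a standard double-gamma model. The parameters below are the widely used SPM-style defaults (an assumption here, not values taken from Glover, 1999), which place the peak near 5 s with a later undershoot.

```python
import numpy as np
from math import gamma

def double_gamma_hrf(t, a1=6.0, a2=16.0, b1=1.0, b2=1.0, c=1 / 6):
    """Double-gamma HRF: a gamma-shaped peak minus a scaled, delayed
    undershoot.  SPM-style default parameters (an assumption here)."""
    peak = t ** (a1 - 1) * np.exp(-t / b1) / (b1 ** a1 * gamma(a1))
    undershoot = t ** (a2 - 1) * np.exp(-t / b2) / (b2 ** a2 * gamma(a2))
    return peak - c * undershoot

t = np.arange(0.0, 32.0, 0.1)  # seconds after event onset
h = double_gamma_hrf(t)
print(f"peak at {t[np.argmax(h)]:.1f} s")  # ~5 s, inside the 4-6 s window
```

With these parameters the response peaks at about 5 s, dips into an undershoot after roughly 12 s, and is close to baseline by the 12-16 s mark cited above.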
Trial Count Requirements
| Design Type | Minimum Trials per Condition | Recommended Trials | Source |
|---|---|---|---|
| Event-related (detection) | 20 | 30-50 | Desmond & Glover, 2002 |
| Event-related (HRF estimation) | 30 | 50+ | Murphy & Garavan, 2005 |
| Rapid event-related | 30 | 40-60 | Desmond & Glover, 2002 |
| FIR/deconvolution | 40+ | 60+ | Glover, 1999 |
Domain insight: These are per-condition minimums. If comparing conditions (A vs. B), each condition needs this many trials. More conditions require longer scan sessions or fewer trials per condition, creating a power trade-off.
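The scan-time side of this trade-off can be made concrete with back-of-the-envelope arithmetic. The helper below is purely illustrative (its name, signature, and baseline durations are our assumptions); real timing should come from a design optimizer.

```python
def approx_run_length(n_conditions, trials_per_cond, stim_dur, mean_isi,
                      lead_in=12.0, lead_out=16.0):
    """Rough run duration in seconds: each trial occupies its stimulus
    plus the mean ISI, bracketed by baseline fixation periods.
    Illustrative only; use a tool such as optseq2 for actual timing."""
    trial_time = n_conditions * trials_per_cond * (stim_dur + mean_isi)
    return lead_in + trial_time + lead_out

# e.g. 2 conditions x 40 trials, 1 s stimuli, 5 s mean ISI
secs = approx_run_length(2, 40, 1.0, 5.0)
print(f"{secs / 60:.1f} min")  # 8.5 min
```

Adding a third condition at the same trial count pushes this past 12 minutes, which is why more conditions usually force either longer sessions or fewer trials per condition.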
Design Efficiency
Efficiency Calculation
Design efficiency quantifies how well a given design matrix allows detection of specific contrasts:
Detection efficiency = 1 / trace(c' * inv(X'X) * c)
where c is the contrast vector and X is the design matrix (Dale, 1999; Liu et al., 2001).
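The formula translates directly into code. The minimal sketch below is our own (not from any toolbox); it treats each contrast as a row vector, which is equivalent to the column-vector form above.

```python
import numpy as np

def detection_efficiency(X, c):
    """Efficiency of contrast(s) c under design matrix X:
    1 / trace(c (X'X)^-1 c')  (Dale, 1999; Liu et al., 2001)."""
    c = np.atleast_2d(np.asarray(c, dtype=float))  # (k, p): one contrast per row
    xtx_inv = np.linalg.inv(X.T @ X)
    return 1.0 / np.trace(c @ xtx_inv @ c.T)

# Sanity check: orthonormal regressors with a unit contrast give efficiency 1
X = np.eye(10)[:, :2]
print(detection_efficiency(X, [1.0, 0.0]))  # 1.0
```

Because efficiency depends only on X and c, candidate designs can be ranked before any data are collected; this is exactly what optimizers such as optseq2 and NeuroDesign iterate on.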
Detection vs. Estimation Trade-off
Detection power: Ability to detect whether an effect exists. Maximized by block designs and rapid event-related designs with high event density (Liu et al., 2001)
Estimation efficiency: Ability to accurately characterize the HRF shape. Maximized by jittered designs with sufficient ISI variability (Liu et al., 2001)
These are inherently in tension: block designs maximize detection but cannot estimate HRF shape
Design Optimization Tools
optseq2 (FreeSurfer): Optimizes event ordering and null events for maximum efficiency (Dale, 1999)
NeuroDesign (Python): Genetic algorithm-based optimization (Durnez et al., 2017)
fMRIpower: Power calculations accounting for design and temporal autocorrelation (Mumford & Nichols, 2008)
Response window: Accommodate response slowing in the scanner environment (~200 ms; Haatveit et al., 2010)
Stimulus duration: 0.5-4 seconds typical for visual stimuli; long enough for perceptual processing, short enough for event separation
Common Pitfalls
Fixed ISI in event-related designs: Dramatically reduces design efficiency compared to jittered designs. Always jitter ISI for event-related fMRI (Dale, 1999)
Too few trials per condition: Fewer than 20 events per condition yields unreliable single-subject estimates (Desmond & Glover, 2002). Plan for at least 30 per condition
Ignoring HRF recovery time: Events separated by < 2 seconds produce nonlinear BOLD summation, making responses difficult to separate (Glover, 1999)
No baseline/rest periods: Without rest periods, the model cannot estimate absolute activation levels and efficiency drops substantially (Friston et al., 1999)
Confounding condition with time: Presenting all trials of one condition before another confounds the effect with scanner drift and fatigue
Not counterbalancing response mappings: Lateralized motor responses (left vs. right hand) produce motor cortex activation that confounds task effects
Ceiling/floor performance: If accuracy is near 100% or chance, there is no behavioral variance to correlate with brain activity
Not optimizing the design matrix: Using arbitrary event timing instead of optimized sequences wastes statistical power that could be gained at no additional cost
Minimum Reporting Checklist
Based on COBIDAS guidelines (Nichols et al., 2017) and Poldrack et al. (2008):
Design type (block, event-related, mixed, rapid event-related)
Block duration (for block designs) or ISI distribution parameters (for event-related)
Number of conditions and number of trials per condition
Stimulus duration and response window
Jittering strategy and ISI range (min, max, mean, distribution)
Design optimization tool used (if any) and efficiency metric
Response mapping and counterbalancing scheme
Practice procedure (in-scanner or out-of-scanner, duration)
TR and its relationship to stimulus timing
References
Bandettini, P. A., Jesmanowicz, A., Wong, E. C., & Hyde, J. S. (1993). Processing strategies for time-course data sets in functional MRI of the human brain. Magnetic Resonance in Medicine, 30(2), 161-173.
Buracas, G. T., & Boynton, G. M. (2002). Efficient design of event-related fMRI experiments using m-sequences. NeuroImage, 16(3), 801-813.
Dale, A. M. (1999). Optimal experimental design for event-related fMRI. Human Brain Mapping, 8(2-3), 109-114.
Desmond, J. E., & Glover, G. H. (2002). Estimating sample size in functional MRI (fMRI) neuroimaging studies: Statistical power analyses. Journal of Neuroscience Methods, 118(2), 115-128.
Durnez, J., Blair, R., & Poldrack, R. A. (2017). NeuroDesign: Optimal experimental designs for task fMRI. bioRxiv, 119594.
Friston, K. J., Zarahn, E., Josephs, O., Henson, R. N. A., & Dale, A. M. (1999). Stochastic designs in event-related fMRI. NeuroImage, 10(5), 607-619.
Glover, G. H. (1999). Deconvolution of impulse response in event-related BOLD fMRI. NeuroImage, 9(4), 416-429.
Haatveit, B. C., Sundet, K., Hugdahl, K., et al. (2010). The validity of d prime as a working memory index. Neuropsychology, 24(5), 629-640.
Hagberg, G. E., Zito, G., Patria, F., & Sanes, J. N. (2001). Improved detection of event-related functional MRI signals using probability functions. NeuroImage, 14(5), 1193-1205.
Liu, T. T., Frank, L. R., Wong, E. C., & Buxton, R. B. (2001). Detection power, estimation efficiency, and predictability in event-related fMRI. NeuroImage, 13(4), 759-773.
Maus, B., van Breukelen, G. J. P., Goebel, R., & Berger, M. P. F. (2010). Optimal design of multi-subject blocked fMRI experiments. NeuroImage, 51(3), 1338-1348.
Mumford, J. A., & Nichols, T. E. (2008). Power calculation for group fMRI studies accounting for arbitrary design and temporal autocorrelation. NeuroImage, 39(1), 261-268.
Murphy, K., & Garavan, H. (2005). Deriving the optimal number of events for an event-related fMRI study based on the spatial extent of activation. NeuroImage, 27(4), 771-777.
Nichols, T. E., Das, S., Eickhoff, S. B., et al. (2017). Best practices in data analysis and sharing in neuroimaging using MRI (COBIDAS). Nature Neuroscience, 20(3), 299-303.
Petersen, S. E., & Dubis, J. W. (2012). The mixed block/event-related design. NeuroImage, 62(2), 1177-1184.
Poldrack, R. A., Fletcher, P. C., Henson, R. N., et al. (2008). Guidelines for reporting an fMRI study. NeuroImage, 40(2), 409-414.
Poldrack, R. A., Mumford, J. A., & Nichols, T. E. (2011). Handbook of Functional MRI Data Analysis. Cambridge University Press.
Wager, T. D., & Nichols, T. E. (2003). Optimization of experimental design in fMRI: A general framework using a genetic algorithm. NeuroImage, 18(2), 293-309.
See references/ for detailed design optimization examples and parameter lookup tables.