This skill encodes expert methodological knowledge for designing and generating visual search arrays. A competent programmer could easily generate random stimulus displays, but without domain training they would likely violate critical constraints: items too closely spaced (causing crowding), eccentricities beyond useful vision, inappropriate set sizes that cannot distinguish search types, target-distractor similarity levels that produce ceiling or floor effects, or trial ratios that distort search behavior. This skill provides the validated parameters needed to create psychophysically sound visual search experiments.
Use this skill when:
- Generating stimulus arrays with specific set sizes, spacings, and feature dimensions
- Selecting target-distractor similarity levels to manipulate search efficiency
- Choosing set sizes and trial structure for measuring search slopes
- Configuring display timing, inter-trial intervals, and response windows
Do not use this skill when:
- The task is not visual search (e.g., change detection, visual working memory, attentional capture without search)
- You are analyzing existing visual search data rather than designing new experiments
- The display involves naturalistic scenes rather than controlled arrays (use scene perception methods)
Research Planning Protocol
Before executing the domain-specific steps below, you MUST:
1. State the research question -- What specific question is this analysis/paradigm addressing?
2. Justify the method choice -- Why is this approach appropriate? What alternatives were considered?
3. Declare expected outcomes -- What results would support vs. refute the hypothesis?
4. Note assumptions and limitations -- What does this method assume? Where could it mislead?
5. Present the plan to the user and WAIT for confirmation before proceeding.
For detailed methodology guidance, see the research-literacy skill.
⚠️ Verification Notice
This skill was generated by AI from academic literature. All parameters, thresholds, and citations require independent verification before use in research. If you find errors, please open an issue.
Search Type Classification
Feature Search (Parallel / Pop-out)
Target defined by a single unique feature (Treisman & Gelade, 1980).
- Search slope: < 10 ms/item for target-present trials (Wolfe, 2021)
- RT x set size function: Flat or near-flat
- Example: Red target among green distractors; vertical target among horizontal distractors
- Theoretical basis: Pre-attentive feature maps can detect unique singletons without serial scanning (Treisman & Gelade, 1980)
Conjunction Search (Inefficient / Serial)
Target defined by a combination of features shared individually with distractors (Treisman & Gelade, 1980).
- Search slope: 20-30 ms/item for target-present trials (Wolfe, 2021)
- Absent:present slope ratio: Approximately 2:1 if search is self-terminating (Treisman & Gelade, 1980)
- Example: Red vertical target among red horizontal and green vertical distractors
- Note: Many conjunction searches are more efficient than predicted by strict serial models; guided search theory accounts for this (Wolfe, 1994)
Spatial Configuration Search
Target differs from distractors in spatial arrangement of parts rather than simple features.
- Search slope: 30-50+ ms/item (Wolfe, 2021)
- Example: T among Ls; 2 among 5s
- These are among the most inefficient search tasks and should be used when studying attentional limits
Search Slope Classification Benchmarks
| Slope (ms/item) | Classification | Citation |
|---|---|---|
| < 5 | Highly efficient / pop-out | Wolfe, 2021 |
| 5-10 | Efficient (feature-like) | Wolfe, 2021 |
| 10-20 | Moderately efficient (guided) | Wolfe, 1994; Wolfe, 2021 |
| 20-30 | Inefficient (conjunction-like) | Treisman & Gelade, 1980; Wolfe, 2021 |
| > 30 | Very inefficient (serial) | Wolfe, 2021 |
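The benchmarks above can be applied mechanically once mean RTs per set size are in hand. The sketch below fits an ordinary least-squares line (RT = intercept + slope x set size) and maps the slope onto the bands in the table; the function name and the example RT values are illustrative, not from the literature.

```python
def classify_search_slope(set_sizes, mean_rts_ms):
    """Least-squares fit of RT vs. set size; label the slope per the benchmark table."""
    n = len(set_sizes)
    mx, my = sum(set_sizes) / n, sum(mean_rts_ms) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts_ms))
    var = sum((x - mx) ** 2 for x in set_sizes)
    slope = cov / var                      # ms per item
    intercept = my - slope * mx            # baseline RT in ms
    if slope < 5:
        label = "highly efficient / pop-out"
    elif slope < 10:
        label = "efficient (feature-like)"
    elif slope < 20:
        label = "moderately efficient (guided)"
    elif slope <= 30:
        label = "inefficient (conjunction-like)"
    else:
        label = "very inefficient (serial)"
    return slope, intercept, label

# Hypothetical target-present means from a conjunction-style search:
slope, _, label = classify_search_slope([4, 8, 12, 16], [620, 710, 820, 910])
print(f"{slope:.1f} ms/item -> {label}")  # 24.5 ms/item -> inefficient (conjunction-like)
```

Report the fitted slope with confidence intervals (see the reporting checklist below) rather than the label alone; the bands are conventions, not sharp boundaries.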
Display Parameters
Spatial Layout
| Parameter | Recommended Value | Citation / Rationale |
|---|---|---|
| Maximum eccentricity | 15 degrees of visual angle from fixation | Beyond ~15 deg, acuity drops substantially; standard upper bound (Wolfe et al., 1998) |
| Minimum inter-item spacing | > 1 degree center-to-center | Prevents crowding effects (Bouma, 1970: crowding zone ~ 0.5 x eccentricity) |
| Item size | 0.5-2 degrees of visual angle | Standard range for search items (Wolfe, 2021) |
| Display area | Circular or rectangular region within eccentricity limit | Avoid items near monitor edges where distortion may occur |
| Fixation cross | Present for 500-1000 ms before array onset | Standard in visual search (Wolfe et al., 1998) |
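All of the spatial parameters above are specified in degrees of visual angle, so stimulus code must convert degrees to pixels for the actual monitor and viewing distance. A minimal sketch of that arithmetic, assuming a flat screen viewed head-on (the 57 cm distance and 38 px/cm density are example values, not recommendations):

```python
import math

def deg_to_px(deg, view_dist_cm, px_per_cm):
    """Convert a visual angle in degrees to on-screen pixels for a head-on flat screen."""
    size_cm = 2 * view_dist_cm * math.tan(math.radians(deg) / 2)
    return size_cm * px_per_cm

# At ~57 cm viewing distance, 1 cm on screen subtends roughly 1 degree:
print(round(deg_to_px(1.0, view_dist_cm=57, px_per_cm=38)))  # ~38 px per degree
```

For eccentricities within the 15-degree limit above, the flat-screen approximation is adequate; at much larger angles the small-angle assumption starts to break down.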
Preventing Crowding
Crowding impairs identification when flanking items are too close to the target, especially in the periphery (Pelli & Tillman, 2008).
- Critical spacing: Approximately 0.5 x eccentricity (Bouma, 1970)
- At 5 degrees eccentricity, items must be > 2.5 degrees apart to avoid crowding
- At 10 degrees eccentricity, items must be > 5 degrees apart
- For items near fixation (< 2 degrees), minimum spacing of 1 degree is sufficient
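The spacing rule above reduces to one line of code: take the larger of the 1-degree floor and half the eccentricity. A sketch (the function name is illustrative):

```python
def min_spacing_deg(eccentricity_deg):
    """Center-to-center spacing (deg) needed to avoid crowding, per Bouma's rule
    with a 1-degree floor near fixation."""
    return max(1.0, 0.5 * eccentricity_deg)

for ecc in (1, 5, 10):
    print(ecc, min_spacing_deg(ecc))  # 1 -> 1.0, 5 -> 2.5, 10 -> 5.0
```

Applying this per-item rule, rather than a single global minimum, keeps peripheral items uncrowded without wasting space near fixation.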
Set Sizes
| Design Goal | Recommended Set Sizes | Rationale |
|---|---|---|
| Classify search type | 4, 8, 12, 16 (minimum 3 set sizes) | Need multiple points to estimate slope reliably (Wolfe, 2021) |
| Test for pop-out | 8, 16, 32 (wide range) | Pop-out confirmed if slope ~ 0 even at large set sizes (Treisman & Gelade, 1980) |
| Standard conjunction search | 4, 8, 12, 16, 20 | Finer-grained slope estimation (Wolfe, 1994) |
| Quick screening | 6, 12, 18 | Three evenly spaced set sizes for slope estimation |
Minimum set sizes: At least 3 different set sizes are required to reliably estimate a search slope. Two set sizes cannot distinguish linear from nonlinear search functions.
Maximum set size: Constrained by display density. With 1 degree minimum spacing and 15 degree eccentricity limit, the practical maximum is approximately 40-50 items for typical item sizes (Wolfe et al., 1998).
Trial Structure
| Parameter | Recommended Value | Citation |
|---|---|---|
| Target-present : target-absent ratio | 1:1 (50% present) | Chun & Wolfe, 1996; standard in most search tasks |
| Low prevalence condition | 10% target-present | Wolfe et al., 2005 (miss rate increases dramatically) |
| Trials per cell | Minimum 20-30 trials per set size x presence combination | Wolfe, 2021; more for stable RT distributions |
| Practice trials | 10-20 trials before data collection | Standard practice |
| Total trial count | Typically 400-800 for a standard search task | Depends on number of conditions and set sizes |
Critical warning about target prevalence: When target prevalence drops below ~25%, miss rates increase dramatically -- the "prevalence effect" (Wolfe et al., 2005). This is a critical design consideration for applied search tasks (e.g., medical image screening).
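The recommendations above pin down the trial budget arithmetically: trials per cell multiplied by the number of set-size x presence cells. A quick sketch with example values (25 trials/cell is one point inside the recommended 20-30 range):

```python
# Trial-budget arithmetic; all values here are examples, adjust per design.
set_sizes = [4, 8, 12, 16]
trials_per_cell = 25      # within the recommended 20-30 minimum per cell
presence_levels = 2       # target present vs. absent at a 1:1 ratio
total_trials = len(set_sizes) * presence_levels * trials_per_cell
print(total_trials)  # 200; two to four such blocks lands in the typical 400-800 range
```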
Distractor Composition
- Homogeneous distractors: All distractors identical; cleanest test of T-D similarity
- Heterogeneous distractors: Distractors vary in the search-relevant feature; tests the D-D similarity effect
- Controlling heterogeneity: Sample distractor features from a uniform distribution within a defined range (e.g., orientation distractors drawn from 0 +/- 10 degrees; Duncan & Humphreys, 1989)
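The uniform-sampling control described above is straightforward to implement. A sketch (function name and the 0 +/- 10 degree range follow the example in the text; seed the generator so arrays are reproducible):

```python
import random

def sample_distractor_orientations(n, base_deg=0.0, half_range_deg=10.0, seed=None):
    """Draw n heterogeneous distractor orientations uniformly from
    base_deg +/- half_range_deg (degrees)."""
    rng = random.Random(seed)
    return [base_deg + rng.uniform(-half_range_deg, half_range_deg) for _ in range(n)]

orientations = sample_distractor_orientations(11, seed=1)
print(all(-10.0 <= o <= 10.0 for o in orientations))  # True
```

Keeping the range parameter explicit makes it easy to manipulate D-D similarity (widen the range) while holding the mean distractor feature, and hence T-D similarity, constant.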
Array Generation Algorithm
Placement Algorithm (Recommended)
1. Define the display region (circular with radius = max eccentricity)
2. Generate candidate positions using one of:
   - Grid + jitter: Place items on a regular grid, then add random jitter (uniform, +/- 0.3 deg) to break regularity (Wolfe et al., 1998)
   - Random placement with rejection: Sample random positions; reject any that violate minimum spacing
   - Concentric rings: Place items on concentric rings at fixed eccentricities (controls eccentricity distribution)
3. Enforce minimum distance from fixation (> 1 degree; avoids masking by fixation cross)
4. Balance target position across eccentricity bins and quadrants over the experiment
5. For each trial, randomly assign target to one position (present trials) or assign no target (absent trials)
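A minimal sketch of the "random placement with rejection" option, assuming units of degrees of visual angle and the parameter values from the tables above (15-degree eccentricity limit, 1-degree minimum spacing and fixation clearance); the function name and attempt cap are illustrative:

```python
import math
import random

def generate_array_positions(n_items, max_ecc=15.0, min_fix_dist=1.0,
                             min_spacing=1.0, max_attempts=10000, seed=None):
    """Sample (x, y) positions (deg) inside a circular display region, rejecting
    candidates too close to fixation or to an already-placed item."""
    rng = random.Random(seed)
    positions = []
    attempts = 0
    while len(positions) < n_items:
        attempts += 1
        if attempts > max_attempts:
            raise RuntimeError("could not place all items; relax the constraints")
        # sqrt of a uniform draw keeps item density uniform over the disc's area
        ecc = max_ecc * math.sqrt(rng.random())
        if ecc < min_fix_dist:
            continue  # too close to the fixation cross
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x, y = ecc * math.cos(theta), ecc * math.sin(theta)
        if all(math.hypot(x - px, y - py) >= min_spacing for px, py in positions):
            positions.append((x, y))
    return positions

positions = generate_array_positions(12, seed=42)
print(len(positions))  # 12
```

For peripheral items, tighten the check by replacing the fixed `min_spacing` with the Bouma-style rule from the crowding section (spacing >= 0.5 x eccentricity); dense arrays may then need more attempts or a smaller set size.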
Randomization Constraints
- Target position: Counterbalance across quadrants and eccentricity bins within each set size
- Set size order: Randomize or pseudorandomize within blocks
- Target presence: Pseudorandomize to avoid long runs of present or absent trials (max run length: 4 consecutive same-type trials; standard practice)
- Feature assignment: For conjunction search, ensure equal numbers of each distractor type (e.g., 50% share color with target, 50% share orientation; Treisman & Gelade, 1980)
- Block structure: If multiple set sizes are used, either mix within blocks or block by set size (within-block mixing is standard; Wolfe, 2021)
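The run-length constraint on target presence can be enforced by rejection: reshuffle the trial labels until no run exceeds the cap. A sketch under those assumptions (function name is illustrative; for long lists a constructive scheme may be faster than reshuffling):

```python
import random

def constrained_shuffle(labels, max_run=4, seed=None):
    """Shuffle labels until no more than max_run consecutive trials share a label."""
    rng = random.Random(seed)
    labels = list(labels)
    while True:
        rng.shuffle(labels)
        run, ok = 1, True
        for prev, cur in zip(labels, labels[1:]):
            run = run + 1 if cur == prev else 1
            if run > max_run:
                ok = False
                break
        if ok:
            return labels

# 1:1 present:absent trial list with no run longer than 4:
sequence = constrained_shuffle(["present"] * 20 + ["absent"] * 20, max_run=4, seed=7)
```

The same pattern applies to set-size order within mixed blocks: build the balanced label list first, then shuffle under the run-length constraint.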
Common Pitfalls
- Not controlling for eccentricity confounds: Larger set sizes place items at greater eccentricities on average, confounding set size with acuity. Solution: Use a fixed display area and add items by filling in gaps, not by expanding the area (Wolfe et al., 1998).
- Interpreting null set-size effects as "pop-out" without verification: A flat slope does not guarantee parallel processing. Verify with brief presentations (100-200 ms + mask) and check that accuracy remains high (Treisman & Gelade, 1980).
- Ignoring the prevalence effect: With low target prevalence (<25%), observers adopt a more liberal quitting threshold, increasing miss rates from ~5% to >25% (Wolfe et al., 2005). Design accordingly for applied contexts.
- Using too few set sizes: Two set sizes define only a line; you cannot assess linearity or detect nonlinear search functions. Use at least 3 set sizes, preferably 4-5 (Wolfe, 2021).
- Not equating luminance across color conditions: Luminance differences create an unintended pop-out cue. Always measure and equate luminance (use a photometer or validated software settings; Nagy & Sanchez, 1990).
- Placing items too close together: Violating minimum spacing creates crowding, where items become unidentifiable not because of search difficulty but because of peripheral vision limits (Bouma, 1970; Pelli & Tillman, 2008).
- Confounding distractor heterogeneity with target discriminability: Adding distractor variability reduces search efficiency independently of T-D similarity. Manipulate one while controlling the other (Duncan & Humphreys, 1989).
- Failing to counterbalance target position: If the target systematically appears at certain locations, observers develop spatial biases. Counterbalance across quadrants and eccentricities.
Minimum Reporting Checklist
Based on current best practices in visual search research:
- Search type (feature, conjunction, spatial configuration) and theoretical motivation
- Set sizes used and number of trials per set size per target-presence condition
- Target-present to target-absent ratio
- Display parameters: eccentricity range, item size (in degrees of visual angle), minimum spacing
- Item features: colors (in device-independent space), orientations (in degrees), sizes (in degrees)
- Target-distractor similarity metric and value
- Distractor composition (homogeneous vs. heterogeneous; how features were assigned)
- Viewing distance and display specifications (size, resolution, refresh rate)
- Randomization scheme: how set size, target presence, and target position were randomized
- Search slope values (ms/item) with confidence intervals for target-present and target-absent
- Slope ratio (absent:present) to assess self-termination
- Error rates by condition (especially miss rates)
- RT trimming criteria and percentage of data excluded
- Software used for stimulus generation and presentation (with version)
References
Appelle, S. (1972). Perception and discrimination as a function of stimulus orientation: The "oblique effect" in man and animals. Psychological Bulletin, 78, 266-278.
Bouma, H. (1970). Interaction effects in parafoveal letter recognition. Nature, 226, 177-178.
Chun, M. M., & Wolfe, J. M. (1996). Just say no: How are visual searches terminated when there is no target present? Cognitive Psychology, 30, 39-78.
Duncan, J., & Humphreys, G. W. (1989). Visual search and stimulus similarity. Psychological Review, 96, 433-458.
Foster, D. H., & Ward, P. A. (1991). Asymmetries in oriented-line detection indicate two orthogonal filters in early vision. Proceedings of the Royal Society B, 243, 75-81.
Nachmias, J. (2011). Shape and size discrimination compared. Vision Research, 51, 400-407.
Nagy, A. L., & Sanchez, R. R. (1990). Critical color differences determined with a visual search task. Journal of the Optical Society of America A, 7, 1209-1217.
Pelli, D. G., & Tillman, K. A. (2008). The uncrowded window of object recognition. Nature Neuroscience, 11, 1129-1135.
Treisman, A. M., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97-136.
Wolfe, J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1, 202-238.
Wolfe, J. M. (2021). Guided Search 6.0: An updated model of visual search. Psychonomic Bulletin & Review, 28, 1060-1092.
Wolfe, J. M., Cave, K. R., & Franzel, S. L. (1989). Guided search: An alternative to the feature integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance, 15, 419-433.
Wolfe, J. M., Friedman-Hill, S. R., Stewart, M. I., & O'Connell, K. M. (1992). The role of categorization in visual search for orientation. Journal of Experimental Psychology: Human Perception and Performance, 18, 34-49.
Wolfe, J. M., Horowitz, T. S., & Kenner, N. M. (2005). Rare items often missed in visual searches. Nature, 435, 439-440.
Wolfe, J. M., O'Neill, P., & Bennett, S. C. (1998). Why are there eccentricity effects in visual search? Perception & Psychophysics, 60, 140-156.
See references/array-generation-parameters.yaml for a machine-readable parameter specification.