Domain-validated guidance for designing Alternative Uses Task (AUT) experiments measuring divergent thinking, with parameters for AI-augmented and traditional conditions
This skill encodes expert methodological knowledge for designing Alternative Uses Task (AUT) experiments — the most widely used measure of divergent thinking in creativity research. It provides domain-specific parameter recommendations for stimulus selection, timing, condition design (including AI-augmented variants), online implementation, and quality control. A general-purpose programmer would not know the standard objects, timing constraints, scoring dimensions, or the critical design choices that determine whether an AUT experiment yields valid creativity data.
Before executing the domain-specific steps below, you MUST review the general methodology guidance in the research-literacy skill.
This skill was generated by AI from academic literature. All parameters, thresholds, and citations require independent verification before use in research. If you find errors, please open an issue.
The Alternative Uses Task (Guilford, 1967) asks participants to generate as many unusual uses as possible for a common everyday object within a fixed time limit. It is the standard measure of divergent thinking — the ability to generate multiple, varied, and novel ideas.
| Parameter | Default | Source |
|---|---|---|
| Time limit | 5 minutes per object | Lee & Chung, 2024; Reiter-Palmon et al., 2019 |
| Number of objects | 1-3 per session | Silvia et al., 2008 |
| Response format | Open-ended text, one use per line | Reiter-Palmon et al., 2019 |
| Instructions emphasis | "unusual, creative, uncommon" uses | Guilford, 1967; Wallach & Kogan, 1965 |
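These defaults can be collected into a single configuration object so every session runs with the same parameters. A minimal Python sketch; the class and field names are illustrative, not part of any cited protocol:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AUTConfig:
    """Core AUT parameters (defaults per Lee & Chung, 2024; Silvia et al., 2008)."""
    time_limit_seconds: int = 300      # 5 minutes per object
    objects_per_session: int = 2       # within the recommended 1-3 range
    response_format: str = "open-ended text, one use per line"
    instructions: str = (
        "List as many unusual, creative, and uncommon uses as you can "
        "think of for this object. Enter one use per line."
    )
```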
Objects should be concrete, familiar, and have many conventional uses so that departing from typical uses requires genuine creative thinking.
| Object | Notes | Source |
|---|---|---|
| Brick | Most widely validated | Guilford, 1967 |
| Paperclip | Classic Guilford item | Guilford, 1967 |
| Newspaper | Used in Lee & Chung, 2024 | Lee & Chung, 2024 |
| Cardboard box | Common alternative | Silvia et al., 2008 |
| Tin can | Common alternative | Wallach & Kogan, 1965 |
| Shoe | Frequently used | Reiter-Palmon et al., 2019 |
Avoid: Objects that are already unusual (e.g., "kaleidoscope") or that have very few conventional uses (e.g., "toothpick"). The task requires a clear baseline of common uses to depart from.
Is the study examining AI's impact on creativity?
|
+-- YES --> Include at minimum:
| 1. AI-assisted condition (e.g., ChatGPT access)
| 2. No-assistance control
| 3. [Recommended] Web search control (Lee & Chung, 2024, Exp 2A/2B)
|
+-- NO --> Standard AUT with:
1. Experimental manipulation (priming, mood, instructions)
2. Control condition (neutral or baseline)
For studying AI's impact on creativity:
| Condition | Participant Instructions | Implementation |
|---|---|---|
| ChatGPT | "You may use ChatGPT to assist you" | Embed ChatGPT in new browser tab; record interaction logs |
| Web Search | "You may use web search to assist you" | Allow Google/Bing access; record search queries |
| No Assistance | "Complete the task on your own" | Disable external tool access |
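To keep instruction wording and tool access aligned across conditions, it can help to drive both from a single mapping in code. A sketch with hypothetical flag names; how tools are actually gated or logged depends on your survey platform:

```python
import random

# Condition parameters mirroring the table above (flag names are assumptions).
CONDITIONS = {
    "chatgpt": {
        "instructions": "You may use ChatGPT to assist you.",
        "allow_llm": True, "allow_search": False, "log_interactions": True,
    },
    "web_search": {
        "instructions": "You may use web search to assist you.",
        "allow_llm": False, "allow_search": True, "log_interactions": True,
    },
    "no_assistance": {
        "instructions": "Complete the task on your own.",
        "allow_llm": False, "allow_search": False, "log_interactions": False,
    },
}

def assign_condition(participant_id: str) -> str:
    """Deterministic per-participant assignment; swap in a balanced scheme for production."""
    rng = random.Random(participant_id)
    return rng.choice(sorted(CONDITIONS))
```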
Critical design decisions for online implementation:
| Parameter | Recommendation | Source |
|---|---|---|
| Platform | Qualtrics (survey) + MTurk/Prolific (recruitment) | Lee & Chung, 2024 |
| Sample size per condition | 100-200 for between-subjects AUT | Lee & Chung, 2024 (N=256 in Exp 2B) |
| Compensation | Prolific minimum + bonus for completion | Lee & Chung, 2024 |
| Estimated duration | 15-25 minutes total session | Lee & Chung, 2024 |
| Measure | Items | Duration | What It Captures | Source |
|---|---|---|---|---|
| RAT (Remote Associates Test) | 15 items | ~5 min | Convergent thinking | Mednick, 1962; Lee & Chung, 2024 |
| Creative Achievement Questionnaire | 10 domains | ~5 min | Real-world creative accomplishment | Carson et al., 2005 |
| Creative Self-Efficacy Scale | 3 items, 5-point Likert | <1 min | Belief in own creative ability | Tierney & Farmer, 2002 |
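As a sanity check, the component durations above plus the AUT itself should land inside the 15-25 minute session estimate. A quick tally, assuming two AUT objects and roughly 3 minutes for consent and instructions (both assumptions):

```python
# Approximate per-component durations in minutes (consent time is an assumption).
components = {
    "consent_and_instructions": 3,
    "aut_two_objects": 10,   # 2 objects x 5 minutes
    "rat_15_items": 5,
    "caq_10_domains": 5,
    "creative_self_efficacy": 1,
}
total = sum(components.values())
print(f"Estimated session length: {total} min")  # 24 min, within the 15-25 min estimate
```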
Using "creative" in instructions without care: Telling participants to "be creative" changes the scoring profile — it increases originality but may decrease fluency. Decide a priori and keep consistent across conditions (Nusbaum et al., 2014).
Confounding fluency with originality: Participants who generate more ideas have, by chance alone, a higher probability of producing at least some rare ideas. Either control for fluency when analyzing originality, or use ratio-based measures (Silvia et al., 2008), as in the sketch below.
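A minimal sketch of a ratio-based originality measure (mean originality per idea), assuming each response has already received a numeric originality rating upstream:

```python
def originality_ratio(ratings: list[float]) -> float:
    """Mean originality per idea; dividing by the number of ideas
    removes the fluency confound that inflates summed originality
    (ratio approach discussed in Silvia et al., 2008)."""
    if not ratings:
        return 0.0
    return sum(ratings) / len(ratings)  # fluency = len(ratings)
```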
Not controlling for AI-generated text: In AI-augmented conditions, participants may copy-paste AI outputs. Record interaction logs and code whether responses are self-generated, AI-assisted, or directly copied (Lee & Chung, 2024).
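One workable coding approach is to compare each participant response against the logged AI outputs with a string-similarity threshold. A sketch using Python's difflib; both thresholds are assumptions that should be calibrated against hand-coded responses:

```python
from difflib import SequenceMatcher

def provenance(response: str, ai_outputs: list[str],
               copy_threshold: float = 0.9,
               assist_threshold: float = 0.5) -> str:
    """Crude provenance code: 'copied', 'ai_assisted', or 'self_generated'."""
    best = max(
        (SequenceMatcher(None, response.lower(), out.lower()).ratio()
         for out in ai_outputs),
        default=0.0,
    )
    if best >= copy_threshold:
        return "copied"
    if best >= assist_threshold:  # partial overlap; verify manually
        return "ai_assisted"
    return "self_generated"
```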
Ignoring the web search control: Comparing ChatGPT only to no-assistance confounds AI-specific effects with general information access effects. Include a web search condition as active control (Lee & Chung, 2024, Exp 2A/2B).
Insufficient sample size for between-subjects: AUT effect sizes for condition differences are typically small-to-medium (d ≈ 0.3-0.5). Plan for N ≥ 100 per condition (Lee & Chung, 2024).
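That recommendation can be verified with a standard power analysis. Using statsmodels' TTestIndPower, d = 0.4 at alpha = .05 and power = .80 requires roughly 100 participants per condition for a two-sample comparison:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.3, 0.4, 0.5):
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80,
                             alternative="two-sided")
    print(f"d = {d}: n = {n:.0f} per condition")
# d = 0.3 -> ~175, d = 0.4 -> ~99, d = 0.5 -> ~64 per condition
```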
Administering multiple objects sequentially without counterbalancing: Practice effects and fatigue can confound results. Counterbalance object order across participants (Reiter-Palmon et al., 2019).
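For small object sets, a cyclic Latin square gives each object every serial position equally often across participants. A sketch (note that a cyclic square balances position, not first-order carryover):

```python
def latin_square_orders(objects: list[str]) -> list[list[str]]:
    """Cyclic Latin square: each object appears exactly once in each position."""
    k = len(objects)
    return [[objects[(i + j) % k] for j in range(k)] for i in range(k)]

orders = latin_square_orders(["brick", "paperclip", "newspaper"])
# Participant p receives orders[p % len(orders)]
```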
For scoring guidance based on Lee & Chung (2024) and Reiter-Palmon et al. (2019), see the divergent-thinking-scoring skill. See references/ for detailed instruction templates and an object selection guide.