Define the research question - Specify the educational context (K-12, higher education, professional development). Identify the intervention, outcome measures, and comparison conditions. Frame using established educational theory (constructivism, connectivism, cognitive load theory).
Study design - Select appropriate design: RCT (gold standard but often impractical), quasi-experimental (difference-in-differences, regression discontinuity), or mixed methods. Address common challenges: nested data (students within classrooms), selection bias, contamination between groups.
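For the quasi-experimental option, the core difference-in-differences logic can be sketched from group means alone (illustrative numbers, not real data; in practice you would estimate this via regression with a group × time interaction and cluster-robust standard errors for nested data):

```python
# Difference-in-differences: the treatment group's pre-to-post change
# minus the comparison group's pre-to-post change. This nets out both
# fixed group differences and shared time trends.
def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical example: treated classrooms gain 8 points, comparison
# classrooms gain 3 points over the same period.
effect = diff_in_diff(treat_pre=62.0, treat_post=70.0,
                      control_pre=61.0, control_post=64.0)
print(effect)  # → 5.0
```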
Data collection - Gather quantitative data (test scores, grades, completion rates, time-on-task from LMS logs) and qualitative data (surveys, interviews, observations, think-alouds). Ensure IRB approval for human subjects research.
Multilevel analysis - Use hierarchical linear modeling (HLM) to account for nested data structure (students within classrooms within schools). Report ICC (intraclass correlation) to justify multilevel approach. Include relevant covariates (prior achievement, demographics).
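The unconditional ICC can be computed from a one-way random-effects ANOVA decomposition; a minimal pure-Python sketch, assuming balanced groups (function name is mine):

```python
from statistics import mean

def icc1(groups):
    """ICC(1) for balanced groups, e.g. students nested in classrooms.

    ICC = (MSB - MSW) / (MSB + (n - 1) * MSW), where MSB/MSW are the
    between- and within-group mean squares and n is the group size.
    """
    k = len(groups)                      # number of classrooms
    n = len(groups[0])                   # students per classroom (balanced)
    grand = mean(x for g in groups for x in g)
    group_means = [mean(g) for g in groups]
    msb = n * sum((m - grand) ** 2 for m in group_means) / (k - 1)
    msw = sum((x - m) ** 2
              for g, m in zip(groups, group_means)
              for x in g) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)
```

A nontrivial ICC (a common rule of thumb is above ~0.05) signals that single-level regression will understate standard errors; the equivalent model in the document's R toolchain is `lme4::lmer(score ~ 1 + (1 | classroom))`, reading the ICC off the variance components.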
Effect size and practical significance - Report Cohen's d or Hedges' g for group comparisons. Interpret against education-specific benchmarks rather than Cohen's generic 0.2/0.5/0.8 cutoffs: well-designed education interventions typically produce smaller effects, so a d of 0.2 can represent a meaningful gain. Translate to months of learning gain for K-12 contexts (What Works Clearinghouse approach).
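Both statistics are straightforward to compute from raw group scores; a self-contained sketch (function names are mine):

```python
from statistics import mean, variance

def cohens_d(x, y):
    """Cohen's d: standardized mean difference using the pooled SD
    of two independent groups (sample variances, n - 1 denominator)."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * variance(x) + (ny - 1) * variance(y)) / (nx + ny - 2)
    return (mean(x) - mean(y)) / pooled_var ** 0.5

def hedges_g(x, y):
    """Hedges' g: Cohen's d with the small-sample bias correction
    J = 1 - 3 / (4 * (n1 + n2) - 9)."""
    j = 1 - 3 / (4 * (len(x) + len(y)) - 9)
    return j * cohens_d(x, y)
```

Hedges' g is preferred for the small samples common in classroom studies, since uncorrected d overstates the effect.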
Evidence synthesis - Situate findings within existing evidence base. Reference systematic reviews (What Works Clearinghouse, EPPI-Centre, Campbell Collaboration). Discuss generalizability, implementation fidelity, and scalability.
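One concrete way to situate a new result within the existing evidence base is an inverse-variance pooled effect; a fixed-effect sketch (real syntheses should use a dedicated meta-analysis package and consider a random-effects model when studies are heterogeneous):

```python
def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted mean effect and its variance
    (fixed-effect meta-analysis): each study is weighted by 1 / variance,
    so more precise studies count more."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1 / sum(weights)
    return pooled, pooled_var

# Hypothetical: two equally precise studies with g = 0.2 and g = 0.4.
pooled, pooled_var = fixed_effect_pool([0.2, 0.4], [0.01, 0.01])
print(pooled, pooled_var)  # → 0.3 0.005
```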
Key Databases and Tools
ERIC (Education Resources Information Center) - Education literature database
What Works Clearinghouse (WWC) - Evidence reviews of education programs
PISA / TIMSS / NAEP - International and national assessment data
Google Scholar - Cross-disciplinary search
R lme4 / HLM software - Multilevel modeling
Canvas/Blackboard APIs - LMS data extraction
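Canvas exposes a token-authenticated REST API for pulling enrollment and activity data; a minimal sketch of paginated extraction (the base URL, course ID, and token are placeholders, and endpoint paths should be verified against the Canvas API documentation for your instance):

```python
import json
import urllib.request

def canvas_page_url(base, course_id, page=1, per_page=100):
    # Builds a paginated Canvas enrollments URL. Canvas also advertises
    # next pages via the HTTP Link response header, which production
    # code should follow instead of incrementing page numbers blindly.
    return (f"{base}/api/v1/courses/{course_id}/enrollments"
            f"?type[]=StudentEnrollment&page={page}&per_page={per_page}")

def fetch_enrollments(base, course_id, token, page=1):
    # Returns one page of student enrollment records as parsed JSON.
    req = urllib.request.Request(
        canvas_page_url(base, course_id, page),
        headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Exported records should be de-identified before analysis, consistent with the IRB approval noted above.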
Output Format
Study design diagram showing groups, timeline, and measurement points.
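A plain-text example of such a diagram for a two-group pretest-posttest design with follow-up (O = observation/measurement point, X = intervention exposure; the timeline labels are illustrative):

```
             Week 0     Weeks 1-8      Week 9     Week 16
Treatment      O1           X            O2          O3
Comparison     O1           -            O2          O3
```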