Comprehensive standards, conventions, and reporting requirements for social science research. This skill covers methodology, statistical reporting, ethical guidelines, and publication standards across psychology, sociology, political science, anthropology, and education. Researchers should be able to use this as their primary reference for designing studies, analyzing data, and preparing manuscripts that meet the expectations of leading social science journals.
When to Use
When designing survey, experimental, or qualitative research in the social sciences
When preparing a manuscript following APA style (7th edition)
When reporting statistical results for psychology, sociology, political science, anthropology, or education research
When selecting and justifying a methodology (quantitative, qualitative, or mixed methods)
When navigating IRB and ethical review for human-subjects research
When choosing appropriate effect size measures and reporting conventions
When conducting or reporting factor analysis, SEM, mediation/moderation, or multilevel modeling
When completing reporting checklists (JARS, JARS-Qual, JARS-Mixed, CONSORT)
Protocol
1. Research Ethics
1.1 APA Ethics Code
The American Psychological Association Ethical Principles of Psychologists and Code of Conduct (last amended 2017) governs research involving human participants in psychology and related fields. Core principles:
Beneficence and nonmaleficence -- strive to benefit participants and take care to do no harm
Fidelity and responsibility -- establish relationships of trust; uphold professional standards
Integrity -- promote accuracy, honesty, and truthfulness in research
Justice -- ensure fair access to and benefit from research contributions
Respect for people's rights and dignity -- respect the dignity and worth of all people; protect privacy and confidentiality
Key research-specific standards (Section 8):
8.01 Institutional Approval -- obtain IRB or ethics committee approval before conducting research
8.02 Informed Consent to Research -- inform participants about the purpose, expected duration, procedures, right to decline or withdraw, foreseeable consequences of declining, potential risks, prospective benefits, limits of confidentiality, incentives, and contact information
8.03 Informed Consent for Recording -- obtain explicit consent before recording voices or images
8.04 Client/Patient, Student, and Subordinate Research Participants -- protect against adverse consequences of declining or withdrawing
8.05 Dispensing with Informed Consent -- permitted only when research would not reasonably cause distress or harm and involves normal educational practices, anonymous surveys, naturalistic observation, or archival data
8.06 Offering Inducements -- avoid excessive or inappropriate financial or other inducements
8.07 Deception in Research -- permitted only when justified by significant value and when non-deceptive alternatives are not feasible; never deceive about risks; debrief as early as feasible
8.08 Debriefing -- provide participants with information about the nature, results, and conclusions; correct any misconceptions; if delayed, take reasonable measures to reduce risk of harm
1.2 ASA Code of Ethics
The American Sociological Association Code of Ethics (revised 2018) establishes parallel standards for sociological research:
Professional competence -- engage only in work within the boundaries of competence
Integrity -- be honest and transparent in all professional activities
Professional and scientific responsibility -- adhere to the highest scientific and professional standards
Respect for people's rights, dignity, and diversity -- eliminate bias in professional activities
Social responsibility -- apply and make public knowledge in order to contribute to the public good
Key distinctions from APA:
Greater emphasis on the study of institutions, power structures, and social inequality
Explicit attention to the researcher's positionality and potential for harm in fieldwork
Stronger norms around community-based participatory research (CBPR)
1.3 IRB Requirements for Social Science Research
Social science research involving human participants requires Institutional Review Board review. Common review categories:
Exempt -- minimal risk research using educational tests, surveys, interviews, or observation of public behavior where data cannot identify participants; secondary analysis of existing de-identified data
Expedited -- minimal risk research using surveys or interviews on sensitive topics; research involving identifiable data that does not qualify for exemption
Full board -- greater than minimal risk; vulnerable populations (minors, prisoners, pregnant women, cognitively impaired individuals); deception research with more than minimal risk
Essential IRB application components:
Study protocol with rationale and research questions
Informed consent documents (written or documented waiver)
Recruitment materials and strategies
Data collection instruments (surveys, interview protocols, observation guides)
Data management and security plan (encryption, access controls, retention schedule)
Risk assessment with mitigation strategies
Debriefing materials (if deception is involved)
CITI Program or equivalent training certificates for all investigators
Conflict of interest disclosures
1.4 Participant Terminology
Social science research uses participants, not "subjects." This reflects respect for the active role individuals play in research. Additional terminology conventions:
Use respondents for survey research
Use informants or interviewees for qualitative interview research
Use community members or collaborators in participatory research
Avoid deficit-based language; use person-first or identity-first language as preferred by the community being studied
Follow APA bias-free language guidelines for age, disability, gender, race/ethnicity, sexual orientation, and socioeconomic status
1.5 Informed Consent and Debriefing
Informed consent requirements:
Statement that the study involves research and participation is voluntary
Purpose, expected duration, and procedures described in accessible language
Right to decline or withdraw at any time without penalty
Foreseeable risks, discomforts, or adverse effects
Prospective benefits to participant or society
Limits of confidentiality (e.g., mandatory reporting requirements)
Incentive details (amount, timing, proration for partial completion)
Contact information for the investigator and IRB
Debriefing requirements:
Provide the true nature of the study, especially if deception was used
Explain the scientific purpose and expected contributions
Offer to answer questions
Provide referral resources if the study addressed sensitive topics (e.g., mental health services)
Allow participants to withdraw their data after debriefing if deception was involved
Distribute debriefing form in writing (not verbal only)
2. APA Publication Manual (7th Edition) -- Style and Formatting
2.1 Manuscript Structure
Standard manuscript sections for empirical articles:
Title page -- title (focused and succinct; the 7th edition no longer recommends a 12-word limit), author names and affiliations, author note (ORCID iDs, disclosures, correspondence)
Abstract -- 150--250 words; structured or unstructured depending on journal requirements
Introduction -- problem statement, literature review, theoretical framework, research questions/hypotheses
Method -- participants (demographics, sampling strategy, sample size justification), materials/measures (with reliability and validity evidence), procedure, data analysis plan
Results -- descriptive statistics, primary analyses, secondary/exploratory analyses; do not interpret here
Discussion -- summary of findings, relation to prior work, theoretical and practical implications, limitations, future directions
References -- APA 7th edition format
Tables and Figures -- placed after references (unless journal requires inline placement)
3. Research Design
3.1 Survey Design
Validity evidence for survey measures:
Construct validity -- convergent (correlations with theoretically related measures) and discriminant (low correlations with theoretically unrelated measures)
Criterion validity -- predictive (future outcomes) and concurrent (current outcomes)
Factor analysis -- confirm the underlying structure (see Section 5.2)
Response quality:
Include attention check items (e.g., "Please select 'Strongly Agree' for this item")
Monitor completion time; flag responses far below the median (a common heuristic flags completions under half the median time)
Report the percentage of incomplete responses and how they were handled
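The screening rules above can be sketched in Python. The column names (`duration_sec`, `attn_check`) and the half-median speed cutoff are illustrative assumptions, not fixed conventions:

```python
import pandas as pd

def flag_low_quality(df, attn_expected="Strongly Agree",
                     speed_fraction=0.5):
    """Flag responses that fail an attention check or finish
    far faster than the median completion time."""
    median_time = df["duration_sec"].median()
    too_fast = df["duration_sec"] < speed_fraction * median_time
    failed_attn = df["attn_check"] != attn_expected
    return df.assign(flagged=too_fast | failed_attn)

responses = pd.DataFrame({
    "duration_sec": [300, 310, 90, 280],
    "attn_check": ["Strongly Agree", "Agree",
                   "Strongly Agree", "Strongly Agree"],
})
flagged = flag_low_quality(responses)
```

Report the flagged percentage and the exclusion rule in the Method section, per the bullet above.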
3.2 Experimental Designs
Between-subjects -- each participant is assigned to one condition; random assignment required for causal inference; check for baseline equivalence
Within-subjects (repeated measures) -- each participant experiences all conditions; counterbalance order to control for order effects; report counterbalancing method
Mixed designs -- at least one between-subjects and one within-subjects factor; specify which factors are between and which are within
Factorial designs -- two or more independent variables; report all main effects and interactions
Quasi-experimental -- no random assignment; use matching, propensity scores, or statistical controls to address selection bias; acknowledge limitations to causal inference
Randomization:
Use computer-generated random sequences
Report the randomization method (simple, block, stratified)
For online studies, report the platform used and how randomization was implemented
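A minimal sketch of block randomization in Python (block randomization is one of the methods named above; the function name and fixed seed are illustrative):

```python
import numpy as np

def block_randomize(n_participants, conditions, seed=2024):
    """Assign participants to conditions in shuffled blocks so
    group sizes stay balanced throughout recruitment."""
    rng = np.random.default_rng(seed)
    n_blocks = -(-n_participants // len(conditions))  # ceiling division
    assignment = []
    for _ in range(n_blocks):
        block = list(conditions)
        rng.shuffle(block)  # randomize order within each block
        assignment.extend(block)
    return assignment[:n_participants]

assignment = block_randomize(10, ["treatment", "control"])
```

Because every block contains each condition once, the design stays balanced even if recruitment stops early.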
Control conditions:
Active control (alternative treatment) preferred over no-treatment control when possible
Waitlist control acceptable when no-treatment comparison is needed
Report what the control group experienced
3.3 Qualitative Methods
Grounded theory:
Iterative data collection and analysis; theoretical sampling
Open coding, axial coding, selective coding (Strauss & Corbin) or initial coding and focused coding (Charmaz)
Reach theoretical saturation -- new data no longer generate new categories
Report the tradition followed (classic Glaser, Straussian, constructivist Charmaz)
Thematic analysis:
Follow the six-phase framework of Braun & Clarke (2006): familiarization, initial coding, theme generation, theme review, theme definition, report writing
Distinguish between inductive (data-driven) and deductive (theory-driven) approaches
Report whether semantic or latent themes were identified
Provide a thematic map showing relationships between themes
Ethnography:
Extended immersion in the field (typically months to years)
Participant observation, field notes, interviews
Reflexivity -- document the researcher's positionality, biases, and influence on the setting
Thick description -- provide rich, contextualized accounts
Phenomenology:
Focus on lived experience of a phenomenon
Interpretive phenomenological analysis (IPA) or descriptive phenomenology (Moustakas)
Sample sizes typically 3--25 participants
Report the bracketing or epoche process
Trustworthiness criteria (Lincoln & Guba, 1985):
Credibility -- prolonged engagement, triangulation, member checking, peer debriefing
Transferability -- thick description to enable readers to assess applicability
Dependability -- consistency of findings over time and conditions; establish via an audit trail and inquiry audit
Confirmability -- findings grounded in the data rather than researcher bias; establish via reflexivity and the audit trail
3.4 Mixed Methods Designs
Core designs (Creswell & Plano Clark, 2018):
Convergent (parallel) -- quantitative and qualitative data collected concurrently; results merged for comparison or integration; use joint displays to present integrated findings
Explanatory sequential -- quantitative data collected first; qualitative data collected second to explain quantitative results; report how qualitative sampling was informed by quantitative findings
Exploratory sequential -- qualitative data collected first; findings used to develop a quantitative instrument or intervention; report how qualitative findings shaped quantitative measures
Advanced designs:
Intervention -- mixed methods embedded within an experimental trial
Case study -- mixed methods used within a bounded case
Participatory -- community stakeholders involved in design, data collection, and interpretation
Integration strategies:
Merging -- compare and contrast quantitative and qualitative findings
Connecting -- results of one strand inform data collection in the next
Embedding -- one strand is nested within a larger design of the other type
Use joint displays (tables or figures) to visualize integration
4. Statistical Reporting
4.1 General Principles
All statistical reporting in social science manuscripts must follow APA conventions:
Report effect sizes for every test -- never rely on p-values alone
Report exact p-values to two or three decimal places (e.g., p = .034), not p < .05; use p < .001 only when the exact value is below .001
Report 95% confidence intervals for all major estimates (means, mean differences, regression coefficients, effect sizes)
Report sample size (N or n) and degrees of freedom for every statistical test
Use APA notation: italicize statistical symbols (F, t, p, r, M, SD, N, n, df)
Zero before the decimal for statistics that can exceed 1 (e.g., M = 0.54); no zero for statistics bounded by -1 to 1 (e.g., p = .034, r = .45)
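The p-value formatting rules above can be encoded in a small helper; this is an illustrative sketch, not an official APA tool:

```python
def apa_p(p):
    """Format a p-value per APA style: exact to three decimals,
    no leading zero, and 'p < .001' below that threshold."""
    if p < .001:
        return "p < .001"
    # strip the leading zero (p is bounded by 0 and 1)
    return f"p = {p:.3f}".replace("0.", ".", 1)
```

For example, `apa_p(0.034)` yields `"p = .034"` and `apa_p(0.0004)` yields `"p < .001"`.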
4.2 Effect Size Measures
| Analysis | Effect Size | Small | Medium | Large |
|---|---|---|---|---|
| t-test | Cohen's d | 0.2 | 0.5 | 0.8 |
| ANOVA | Eta-squared (η²) | .01 | .06 | .14 |
| ANOVA | Partial eta-squared (η²_p) | .01 | .06 | .14 |
| Correlation | r | .10 | .30 | .50 |
| Chi-square | Cramer's V | .10 | .30 | .50 |
| Regression | R² | .02 | .13 | .26 |
| Regression | f² | .02 | .15 | .35 |
| Odds ratio | OR | 1.5 | 2.5 | 4.3 |
Benchmarks from Cohen (1988); use domain-specific norms when available.
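A quick sketch of computing Cohen's d for independent groups and mapping it onto the benchmarks above (the `label` helper and its cutoffs are illustrative conveniences, not an APA requirement):

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d for two independent groups, using the pooled SD."""
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1)
                         + (ny - 1) * np.var(y, ddof=1))
                        / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled_sd

def label(d, benchmarks=(0.2, 0.5, 0.8)):
    """Map |d| onto Cohen's (1988) small/medium/large benchmarks."""
    size = abs(d)
    if size < benchmarks[0]:
        return "negligible"
    if size < benchmarks[1]:
        return "small"
    if size < benchmarks[2]:
        return "medium"
    return "large"
```

As the note above says, prefer domain-specific norms over these generic cutoffs when they exist.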
4.3 APA-Style Statistical Notation Examples
t-test: t(58) = 2.87, p = .006, d = 0.75, 95% CI [0.22, 1.28]
Chi-square: χ²(3, N = 150) = 11.28, p = .010, V = .27
Correlation: r(98) = .34, p < .001, 95% CI [.15, .50]
Multiple regression coefficient: b = 0.42, SE = 0.12, t(196) = 3.50, p < .001, 95% CI [0.18, 0.66]
Hierarchical linear model (multilevel): b = 0.35, SE = 0.10, t(45.2) = 3.50, p < .001 (Satterthwaite approximation)
4.4 Null Hypothesis Significance Testing and Alternatives
Always interpret p-values in context of effect sizes and confidence intervals
A non-significant p-value does not demonstrate "no effect" -- discuss power and the width of the confidence interval
Consider Bayesian alternatives where appropriate: report Bayes factors (BF10) and interpret using Jeffreys' (1961) guidelines (BF10 > 3 = moderate evidence, > 10 = strong evidence)
Consider equivalence testing (TOST procedure) when the goal is to demonstrate that an effect is practically negligible
Report power analyses -- specify the software used (G*Power, R pwr package), the target effect size, alpha level, and desired power (typically .80 or .90)
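G*Power and the R pwr package are the tools named above; as one alternative, Python's statsmodels can solve for the required per-group sample size under the same inputs (effect size, alpha, power):

```python
from statsmodels.stats.power import TTestIndPower

# Required per-group n for a two-tailed independent-samples t-test,
# assuming a medium effect (d = 0.5), alpha = .05, power = .80.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.5, alpha=0.05, power=0.80, alternative="two-sided"
)
```

Whatever tool is used, report the software, target effect size, alpha, and desired power, as the bullet above specifies.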
5. Common Analytical Frameworks
5.1 Likert Scale Analysis
The ordinal vs. interval debate:
Ordinal treatment -- use nonparametric tests (Mann-Whitney U, Kruskal-Wallis, Spearman's rho) for individual items
Interval treatment -- composite scores from multi-item scales (summed or averaged) can generally be treated as approximately interval; parametric tests are acceptable when distributions are approximately normal
Report the approach chosen and cite methodological justification (e.g., Norman, 2010 for interval treatment of summed scores; Jamieson, 2004 for ordinal caution)
Always report the number of items, response anchors, and reliability of composite scores
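Reliability of a composite score is typically reported as Cronbach's alpha; a minimal numpy implementation, assuming a complete respondents-by-items matrix with no missing data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```

Report alpha for the current sample, not only values from prior validation studies.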
5.2 Factor Analysis
Exploratory Factor Analysis (EFA):
Assess sampling adequacy: Kaiser-Meyer-Olkin (KMO) > .60; Bartlett's test of sphericity significant
Determine the number of factors: parallel analysis (preferred), scree plot, eigenvalues > 1 (Kaiser criterion -- use cautiously as it tends to over-extract)
Choose extraction method: principal axis factoring (PAF) for non-normal data; maximum likelihood (ML) for normal data
Choose rotation: oblique (promax or direct oblimin) when factors are expected to correlate; orthogonal (varimax) only when factors are theoretically uncorrelated
Interpret factor loadings: retain items with loadings > .40 on the primary factor and < .30 on cross-loadings
Report the pattern matrix (for oblique rotation) or rotated factor matrix (for orthogonal rotation), eigenvalues, and variance explained
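Parallel analysis, the preferred retention method above, compares observed eigenvalues against those of random data with the same dimensions. A numpy sketch (the iteration count and seed are illustrative):

```python
import numpy as np

def parallel_analysis(data, n_iter=200, seed=0):
    """Horn's parallel analysis: retain factors whose observed
    eigenvalues exceed the mean eigenvalues of random data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # eigenvalues of the observed correlation matrix, descending
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eig = np.zeros(p)
    for _ in range(n_iter):
        noise = rng.standard_normal((n, p))
        rand_eig += np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    rand_eig /= n_iter
    return int(np.sum(obs_eig > rand_eig))
```

Dedicated packages (e.g., factor_analyzer in Python, psych in R) add refinements such as percentile thresholds, but the logic is as above.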
Confirmatory Factor Analysis (CFA):
Specify the measurement model based on theory or prior EFA results
Report model fit indices: χ²/df (< 3), CFI (> .95), TLI (> .95), RMSEA (< .06, with 90% CI), SRMR (< .08)
Report standardized factor loadings (all should be > .40, ideally > .50)
Assess convergent validity: Average Variance Extracted (AVE) > .50
Assess discriminant validity: AVE for each factor exceeds the squared inter-factor correlation (Fornell-Larcker criterion)
Report modification indices only if theoretically justified modifications are made; document all post-hoc modifications
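The AVE and Fornell-Larcker checks above reduce to simple arithmetic on standardized loadings; the loadings and inter-factor correlation below are hypothetical:

```python
import numpy as np

def ave(loadings):
    """Average Variance Extracted: mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

ave_f1 = ave([0.72, 0.68, 0.81])   # hypothetical factor 1 loadings
ave_f2 = ave([0.70, 0.73, 0.76])   # hypothetical factor 2 loadings
phi = 0.45                         # hypothetical inter-factor correlation

# Fornell-Larcker: each AVE must exceed the squared inter-factor correlation
discriminant_ok = min(ave_f1, ave_f2) > phi ** 2
```

Here both AVEs exceed phi² = .20, so discriminant validity is supported under this criterion.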
5.3 Structural Equation Modeling (SEM)
Combine measurement model (CFA) with structural model (path analysis) in a single framework
Minimum sample size: 200 or 10--20 observations per estimated parameter (Kline, 2015)
Report the same fit indices as CFA: χ², df, CFI, TLI, RMSEA (with 90% CI), SRMR
Report standardized and unstandardized path coefficients with standard errors and p-values
Use bootstrapping (minimum 5,000 samples) for indirect effects and non-normal data
Test alternative models and compare fit using Δχ² test, ΔCFI (< .01 for equivalent fit), AIC, or BIC
Report the software and estimator used (e.g., Mplus with MLR, lavaan in R with WLSMV for ordinal data)
5.4 Mediation and Moderation Analysis
Mediation:
Use the PROCESS macro (Hayes, 2022) or SEM for mediation analysis
Report the total effect (c), direct effect (c'), and indirect effect (ab)
Use bootstrapped confidence intervals (5,000--10,000 samples) for the indirect effect -- do not use the Sobel test (it assumes normality of the indirect effect, which is rarely met)
Report the completely standardized indirect effect for comparability
For multiple mediators, report specific indirect effects and total indirect effect
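A percentile-bootstrap sketch of the indirect effect in plain numpy. PROCESS or SEM software would normally handle this; the function name, resample count, and seed here are illustrative:

```python
import numpy as np

def boot_indirect(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b in a
    simple mediation model (x -> m -> y)."""
    rng = np.random.default_rng(seed)
    x, m, y = map(np.asarray, (x, m, y))
    n = len(x)
    est = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)          # resample with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        # a path: m ~ x
        a = np.linalg.lstsq(np.column_stack([np.ones(n), xb]),
                            mb, rcond=None)[0][1]
        # b path: y ~ m + x (controls for the direct effect)
        b = np.linalg.lstsq(np.column_stack([np.ones(n), mb, xb]),
                            yb, rcond=None)[0][1]
        est[i] = a * b
    return np.percentile(est, [2.5, 97.5])
```

The indirect effect is supported when the resulting interval excludes zero, consistent with the bootstrapping guidance above.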
Moderation:
Center or standardize continuous predictors before creating interaction terms
Report the interaction term coefficient and probe the interaction at meaningful values of the moderator (e.g., -1 SD, mean, +1 SD or specific substantive values)
Plot the interaction with simple slopes; report simple slope coefficients and significance
For categorical moderators, report pairwise comparisons of slopes across groups
Use the Johnson-Neyman technique to identify the exact value of the moderator at which the effect transitions between significant and non-significant
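Probing at -1 SD, the mean, and +1 SD of the moderator can be sketched with ordinary least squares in numpy; the variable roles (x = focal predictor, w = moderator) are illustrative:

```python
import numpy as np

def simple_slopes(x, w, y):
    """OLS with an x*w interaction (predictors mean-centered),
    probed at -1 SD, the mean, and +1 SD of the moderator."""
    x = x - x.mean()                     # center before forming the product
    w = w - w.mean()
    X = np.column_stack([np.ones(len(x)), x, w, x * w])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    sd = w.std(ddof=1)
    # conditional slope of y on x: b_x + b_interaction * w_value
    return {lvl: b[1] + b[3] * val
            for lvl, val in [("-1 SD", -sd), ("mean", 0.0), ("+1 SD", sd)]}
```

Software such as PROCESS also supplies standard errors and significance tests for each simple slope, which should be reported alongside the point estimates.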
5.5 Multilevel / Hierarchical Linear Modeling (HLM)
When data have a nested structure (students within classrooms, employees within organizations, repeated measures within individuals):
Justify the multilevel approach -- calculate the intraclass correlation coefficient (ICC); if ICC > .05, multilevel modeling is warranted
Build models sequentially:
Null model (intercept only) -- to calculate ICC
Random intercept model -- level-1 predictors with random intercepts
Random slope model -- allow slopes to vary across clusters
Cross-level interaction model -- level-2 predictors moderating level-1 effects
Report:
Fixed effects: unstandardized coefficients, standard errors, t-values (or z-values), p-values, and 95% CIs
Random effects: variance components for intercepts and slopes, and their covariance
Model comparison: deviance statistics (-2LL), AIC, BIC; likelihood ratio tests for nested models
ICC at each level
R² at each level (Snijders & Bosker pseudo-R² or Nakagawa & Schielzeth's marginal and conditional R²)
Software reporting -- specify the package (lme4 in R, HLM, Mplus, Stata mixed) and estimation method (ML or REML)
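A sketch of the ICC calculation from the null model using statsmodels' MixedLM (an alternative to the packages named above; the column names are hypothetical):

```python
import statsmodels.formula.api as smf

def icc_from_null_model(df, outcome, cluster):
    """ICC from an intercept-only mixed model: between-cluster
    variance over total variance."""
    model = smf.mixedlm(f"{outcome} ~ 1", df, groups=df[cluster])
    result = model.fit(reml=True)
    tau2 = float(result.cov_re.iloc[0, 0])   # random-intercept variance
    sigma2 = float(result.scale)             # residual variance
    return tau2 / (tau2 + sigma2)
```

An ICC above the .05 rule of thumb cited earlier would support the multilevel approach.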
6. Journal Article Reporting Standards (JARS)
APA's JARS (Appelbaum et al., 2018) specifies what to report in quantitative research manuscripts:
Title page and abstract:
Title reflects the variables and relationships under investigation
Abstract includes objectives, participants, methods, results (with effect sizes), and conclusions
Introduction:
Problem statement with significance
Review of relevant literature with theoretical grounding
Specific hypotheses or research questions
Method -- Participants:
Eligibility criteria and sampling method
Sample size, power analysis, and demographic characteristics
Attrition rates and reasons
Method -- Measures:
For each instrument: name, construct measured, number of items, response format, scoring procedure, reliability evidence (in the current sample), and validity evidence
Psychometric citations
Method -- Procedure:
Detailed description sufficient for replication
IRB approval and informed consent procedures
Data collection setting, dates, and duration
Method -- Data analysis:
Statistical software and version
Data screening procedures (missing data, outliers, normality)
Missing data handling method (listwise deletion, multiple imputation, FIML) with justification
Alpha level and correction for multiple comparisons (Bonferroni, Holm, Benjamini-Hochberg)
Analytic strategy mapped to each research question/hypothesis
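Multiple-comparison corrections such as Holm's can be applied with statsmodels; the family of p-values below is hypothetical:

```python
from statsmodels.stats.multitest import multipletests

pvals = [0.010, 0.020, 0.030, 0.400]   # hypothetical family of tests
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
```

Report both the correction method and the adjusted p-values (or the adjusted alpha) in the data analysis section.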
Results:
Descriptive statistics for all study variables (means, standard deviations, ranges, or frequencies)
Correlation matrix for continuous study variables
Results for each hypothesis/research question with test statistic, degrees of freedom, exact p-value, effect size, and confidence interval
Checklist
Methods Reporting
All measures described with reliability evidence in the current sample
Procedure described in sufficient detail for replication
Data analysis plan specified (including missing data handling and multiple comparison corrections)
Statistical Reporting
Descriptive statistics reported for all study variables
Effect sizes reported for every inferential test
Exact p-values reported (not p < .05 unless p < .001)
95% confidence intervals reported for major estimates
N and degrees of freedom reported for every test
APA-style notation used: italicized test statistics, proper formatting
Qualitative Reporting (if applicable)
Qualitative approach named and justified
Researcher positionality and reflexivity statement included
Data saturation or sufficiency addressed
Trustworthiness strategies described (member checking, triangulation, audit trail)
Sufficient data excerpts provided to support each theme
Mixed Methods (if applicable)
Mixed methods design named and cited
Timing, priority, and integration strategy described
Joint display or integration visualization included
Meta-inferences discussed
Reporting Standards
Appropriate JARS checklist completed (JARS, JARS-Qual, or JARS-Mixed)
CONSORT checklist completed if reporting a randomized trial
Reporting checklist submitted with manuscript
References
American Psychological Association. (2017). Ethical principles of psychologists and code of conduct. https://www.apa.org/ethics/code
American Psychological Association. (2020). Publication manual of the American Psychological Association (7th ed.). https://doi.org/10.1037/0000165-000
American Sociological Association. (2018). Code of ethics and policies and procedures of the ASA Committee on Professional Ethics. https://www.asanet.org/code-ethics
Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., & Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board task force report. American Psychologist, 73(1), 3--25. https://doi.org/10.1037/amp0000191
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Erlbaum Associates.
Creswell, J. W., & Plano Clark, V. L. (2018). Designing and conducting mixed methods research (3rd ed.). SAGE.
Hayes, A. F. (2022). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (3rd ed.). Guilford Press.
Kline, R. B. (2015). Principles and practice of structural equation modeling (4th ed.). Guilford Press.
Levitt, H. M., Bamberg, M., Creswell, J. W., Frost, D. M., Josselson, R., & Suarez-Orozco, C. (2018). Journal article reporting standards for qualitative primary, qualitative meta-analytic, and mixed methods research in psychology: The APA Publications and Communications Board task force report. American Psychologist, 73(1), 26--46. https://doi.org/10.1037/amp0000151
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. SAGE.
Schulz, K. F., Altman, D. G., & Moher, D. (2010). CONSORT 2010 Statement: Updated guidelines for reporting parallel group randomised trials. BMJ, 340, c332. https://doi.org/10.1136/bmj.c332
Snijders, T. A. B., & Bosker, R. J. (2012). Multilevel analysis: An introduction to basic and advanced multilevel modeling (2nd ed.). SAGE.