Use when developing the researcher's ability to see conceptual possibilities in data, to recognize what is important, and to give meaning to data.
Theoretical sensitivity is the researcher’s ability to see meaning in data, to discern what matters conceptually, and to imagine plausible relationships among categories—without forcing preconceived ideas onto participants’ experiences.
Glaser treats sensitivity as a craft: developed through analytic practice, disciplined comparison, and curated reading—not through importing a ready-made framework for the substantive area too early.
Use this skill when you feel “stuck” in description, when codes multiply without insight, or when you need to calibrate openness versus conceptual discipline.
Sensitivity is not: a license to import received theory, pet concepts, or disciplinary jargon onto the data.
Sensitivity is: the earned ability to see meaning in data and to imagine plausible relationships among categories, developed through comparison, memoing, and disciplined practice.
Your biography can sensitize you to emotional tones, organizational rhythms, or interactional subtleties.
Risk: autobiographical projection.
Mitigation: treat personal resonance as a cue to memo, then compare across cases; seek negative cases.
Prior practice in a domain (e.g., nursing, engineering, teaching) can help you notice routine expertise and tacit norms.
Risk: expert blinders (“that’s just how it is”).
Mitigation: convert expertise into questions, not answers; privilege participants’ problem-solving.
As codes mature, you become sensitized to patterns, absences, and deviance. This is the strongest engine of sensitivity in GT.
Practice: regular memo sorting and hypothesis revision.
Glaser encourages reading broadly in sociology, psychology, anthropology, philosophy of science, etc., to build a repertoire of concepts and metaphors that can suggest comparisons.
Key move: use external literature to stimulate imagination, not to name the phenomenon prematurely.
Substantive-area literature becomes more appropriate later, often as additional data for comparison after emergence stabilizes—never as a substitute for participants’ lived problem-solving.
Each session, run three comparisons: incident with incident, incident with category, and category with category.
Ask repeatedly at different grains: What is this data a study of? What category does this incident indicate? What is actually happening in the data?
Rewrite nouns into process language (gerunds) to reveal action/interaction: "trust" becomes "building trust"; "compliance" becomes "negotiating compliance."
When a strong interpretation appears, memo before expanding coding—capture scope, conditions, and counterexamples you already know.
Rewrite definitions when they become too vague (“stress”) or too literal (topic labels). Good definitions sharpen sensitivity for the next pass.
Symptom: every incident “supports” your favorite theory.
Fix: actively seek disconfirming incidents; split codes when variance appears; invite outsider debriefing.
Symptom: codes map 1:1 onto a model you imported.
Fix: remove the template from sight during early passes; rename codes using data-grounded language.
Symptom: analytic language becomes judgmental (“bad management”).
Fix: translate judgments into processual categories (managing accountability, externalizing blame)—still critical, but conceptual.
Symptom: “we already know the story.”
Fix: theoretical sampling aimed at boundaries; revisit early transcripts with new eyes.
Symptom: perfect software tags, thin thinking.
Fix: short, messy memos beat pristine code taxonomies.
Take one dense paragraph. Generate 10 distinct substantive codes (some will be wrong). Then merge aggressively after comparison. Goal: fluency and differentiation.
Highlight participant phrases that could be in vivo codes. For each, write a one-sentence translation into more general concepts.
Write three competing hypotheses for the same incident. Use next data to eliminate or revise.
Read a short unrelated theory piece. Write two memos:
(a) “What concepts might illuminate my data?”
(b) “How could this mislead me if forced?”
Maintain a running list: anomalies, silences, refusals, surprises. Revisit weekly.
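If the running list lives in software rather than a notebook, a minimal tracker is enough. The sketch below (all names hypothetical, not part of any QDA tool) records each anomaly, silence, refusal, or surprise, and surfaces only the not-yet-revisited items at the weekly pass:

```python
# Hypothetical sketch of a "running list" tracker; class and field
# names are illustrative, not from any grounded-theory software.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Entry:
    kind: str              # e.g. "anomaly", "silence", "refusal", "surprise"
    note: str
    logged: date = field(default_factory=date.today)
    revisited: bool = False


class RunningList:
    """Collects items between sessions; weekly_review() surfaces
    everything not yet revisited and marks it as seen."""

    def __init__(self) -> None:
        self.entries: list[Entry] = []

    def add(self, kind: str, note: str) -> None:
        self.entries.append(Entry(kind, note))

    def weekly_review(self) -> list[Entry]:
        due = [e for e in self.entries if not e.revisited]
        for e in due:
            e.revisited = True   # next review shows only new items
        return due
```

The point of the tool is the weekly ritual, not the data structure: each review returns only fresh items, so old surprises do not crowd out new ones.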
memo-writing, constant-comparison, open-coding, theoretical-sampling, selective-coding, glaserian-grounded-theory