Designs social impact measurement frameworks with indicators, data collection methods, and reporting templates.
Use this skill when you need to:
- Design an impact measurement framework for a program or initiative
- Define measurable indicators for social, community, or environmental outcomes
- Create data collection tools such as surveys, tracking sheets, or interview guides
- Build impact reporting templates for funders, boards, or the public
DO NOT use this skill for business KPI dashboards, financial ROI calculations, or customer satisfaction measurement. This is for measuring social, community, or environmental impact.
**Measuring impact is not about proving you did something — it is about understanding whether what you did actually changed lives, and using that understanding to do it better.**
| Input | What to Ask | Default |
|-------|-------------|---------|
| Program or initiative | "What program are you measuring impact for?" | No default — must be provided |
| Intended impact | "What change are you trying to create in the world?" | No default — must be provided |
| Stakeholders | "Who needs to see these impact measurements? (funders, board, public)" | Funders and board |
| Current tracking | "What data are you already collecting?" | Minimal or none |
| Resources for measurement | "What time and budget can you dedicate to data collection?" | Low — needs to be lightweight |
GATE: Confirm the brief before proceeding.
## Logic Model: [Program Name]
**Inputs** → **Activities** → **Outputs** → **Outcomes** → **Impact**
**Inputs:** [Resources invested — staff, money, materials, time]
**Activities:** [What the program does — training, services, events]
**Outputs:** [Direct products — people served, sessions delivered, materials distributed]
**Outcomes:** [Changes in participants — knowledge gained, behavior changed, conditions improved]
**Impact:** [Long-term change in the community or system]
For each outcome, define measurable indicators:
| Outcome | Indicator | Data Source | Collection Method | Frequency |
|---------|-----------|-------------|------------------|-----------|
| [Outcome 1] | [Measurable indicator] | [Where the data comes from] | [Survey, observation, records] | [Monthly, quarterly, annually] |
| [Outcome 2] | [Measurable indicator] | [Source] | [Method] | [Frequency] |
GATE: Present the logic model and indicators for approval.
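Once approved, the indicator table can be kept as structured records so gaps surface before data collection begins. A minimal sketch; the `Indicator` class, its field names, and the sample values are illustrative, not part of any prescribed format:

```python
# Sketch: outcome indicators as structured records (fields mirror the table
# columns above). A completeness check flags empty cells before collection starts.
from dataclasses import dataclass

@dataclass
class Indicator:
    outcome: str
    indicator: str
    data_source: str
    method: str       # e.g. "survey", "observation", "records"
    frequency: str    # e.g. "monthly", "quarterly", "annually"

def missing_fields(ind: Indicator) -> list[str]:
    """Return the names of any empty fields for this indicator."""
    return [name for name, value in vars(ind).items() if not str(value).strip()]

indicators = [
    Indicator("Improved academic confidence",
              "Mean pre/post confidence score (1-5 scale)",
              "Participant surveys", "survey", "quarterly"),
]

for ind in indicators:
    gaps = missing_fields(ind)
    if gaps:
        print(f"{ind.outcome}: missing {gaps}")
```

Keeping indicators in one place like this also makes it easy to generate the report tables later from the same definitions.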
For each indicator, create or recommend a collection tool:
**Surveys:**
## Pre/Post Survey Template
Administer at program start and end to measure change.
1. On a scale of 1-5, how confident are you in [skill]? (Pre and Post)
2. How often do you [desired behavior]? (Pre and Post)
3. What is your biggest challenge related to [topic]? (Pre — open-ended)
4. What changed for you as a result of this program? (Post — open-ended)
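Scoring the scaled questions above is simple arithmetic: match each participant's pre and post responses and count how many rose by at least one point. A sketch with made-up data; participant IDs and scores are invented:

```python
# Sketch: share of matched respondents whose 1-5 confidence score
# improved by at least min_gain points between pre- and post-survey.
pre =  {"p1": 2, "p2": 3, "p3": 4, "p4": 1}
post = {"p1": 4, "p2": 3, "p3": 5, "p4": 3}

def improvement_rate(pre: dict, post: dict, min_gain: int = 1) -> float:
    """Fraction of respondents present in both waves who gained min_gain+."""
    matched = [pid for pid in pre if pid in post]
    improved = sum(1 for pid in matched if post[pid] - pre[pid] >= min_gain)
    return improved / len(matched) if matched else 0.0

print(f"{improvement_rate(pre, post):.0%} improved by 1+ point")
# prints: 75% improved by 1+ point
```

Matching on participant ID matters: comparing unmatched group averages hides who actually changed.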
**Tracking Sheets:**
## Output Tracking
| Date | Activity | Participants | Hours | Notes |
|------|----------|-------------|-------|-------|
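The tracking sheet rolls up directly into the Outputs table of the report. A sketch of that roll-up, assuming rows shaped like the sheet's columns; the dates and figures are illustrative:

```python
# Sketch: aggregating output tracking rows into period totals
# (sessions delivered, people served, hours) for the impact report.
rows = [
    {"date": "2024-01-10", "activity": "Workshop", "participants": 12, "hours": 2.0},
    {"date": "2024-01-24", "activity": "Workshop", "participants": 15, "hours": 2.0},
    {"date": "2024-02-07", "activity": "Clinic",   "participants": 8,  "hours": 3.5},
]

totals = {
    "sessions": len(rows),
    "participants": sum(r["participants"] for r in rows),
    "hours": sum(r["hours"] for r in rows),
}
print(totals)
# prints: {'sessions': 3, 'participants': 35, 'hours': 7.5}
```

Note that summing `participants` counts attendances, not unique people; track IDs if unique reach matters to your funders.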
**Interview Guide:**
## Beneficiary Interview (15 minutes)
1. What was your situation before the program?
2. What did you learn or gain from participating?
3. How has your [specific area] changed since the program?
4. What would you tell someone considering this program?
5. What could we do better?
## Impact Report: [Period]
### Summary
[One-paragraph overview of impact during the period]
### Outputs
| Metric | Target | Actual |
|--------|--------|--------|
| People served | [X] | [Y] |
| Sessions delivered | [X] | [Y] |
### Outcomes
| Indicator | Baseline | Current | Change |
|-----------|----------|---------|--------|
| [Indicator 1] | [X] | [Y] | [+/-Z%] |
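The Change column is the percent difference between baseline and current values. A small sketch of that calculation, with a guard for the zero-baseline case:

```python
# Sketch: computing the Change column of the Outcomes table
# from baseline and current indicator values.
def pct_change(baseline: float, current: float) -> str:
    if baseline == 0:
        return "n/a (zero baseline)"
    change = (current - baseline) / baseline * 100
    return f"{change:+.1f}%"

print(pct_change(3.1, 4.0))  # prints: +29.0%
```

Always report the sign: a negative change is still a finding, and hiding it undermines the Lessons Learned section.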
### Stories
[1-2 beneficiary stories illustrating the data]
### Lessons Learned
[What the data tells you about how to improve]
| When | What | Who |
|------|------|-----|
| Program start | Pre-survey, baseline data | Program staff |
| Monthly | Output tracking update | Program staff |
| Quarterly | Outcome review, beneficiary interviews | Impact lead |
| Program end | Post-survey, final data collection | Program staff |
| Annually | Annual impact analysis and report | Leadership |
- [ ] Baseline data is collected before the program starts
- [ ] Surveys use validated questions where possible
- [ ] Sample size is large enough to draw conclusions
- [ ] Data is collected consistently (same method, same timing)
- [ ] Qualitative data supplements quantitative data
- [ ] Data is stored securely and ethically
Match measurement effort to organizational capacity:
**Lightweight (1-2 hours/month):** Track 3-5 output metrics + annual survey
**Moderate (4-6 hours/month):** Outputs + quarterly outcome surveys + beneficiary interviews
**Comprehensive (10+ hours/month):** Full logic model tracking + comparison groups + longitudinal data
**Example: Youth mentoring program**
Logic model: Trained mentors (input) → weekly meetings (activity) → 100 youth matched (output) → improved academic confidence (outcome) → higher graduation rate (impact)
Key indicator: Pre/post survey on academic confidence (1-5 scale)
Target: 80% of participants show improvement of 1+ point
**Example: Entrepreneurship training program**
Logic model: Training curriculum + mentors (inputs) → 12-week program (activity) → 30 businesses launched (output) → increased revenue (outcome) → community economic growth (impact)
Key indicator: Average revenue change 6 months post-program
Target: 70% of participants increase revenue by 25%+