Analyzes ADU permit corrections letters — the first half of the corrections pipeline. Reads the corrections letter, builds a sheet manifest from the plan binder, researches state and city codes, views referenced plan sheets, categorizes each correction item, and generates informed contractor questions. This skill should be used when a contractor receives a city corrections letter for an ADU permit. It coordinates three sub-skills (california-adu for state law, adu-city-research for city rules, adu-targeted-page-viewer for plan sheet navigation) to produce research artifacts and a UI-ready questions JSON. Does NOT generate the final response package — that is handled by adu-corrections-complete after the contractor answers questions. Triggers when a corrections letter PDF/PNG is provided along with the plan binder PDF.
Analyze ADU permit corrections and generate informed contractor questions. This is the first skill in a two-skill pipeline:
1. `adu-corrections-flow` (this skill) — reads corrections, researches codes, categorizes items, generates questions
2. `adu-corrections-complete` (second skill) — takes contractor answers + these research artifacts, generates the final response package

This skill coordinates three sub-skills through a 4-phase workflow and stops after producing `contractor_questions.json`.
Sub-skills used:
| Skill | Role | When Used |
|---|---|---|
| california-adu | State-level building codes (CRC, CBC, CPC, etc.) | Phase 3A — offline, 28 reference files |
| adu-city-research | City municipal code, standard details, IBs | Phase 3B (Mode 1: Discovery) + Phase 3.5 (Mode 2: Extraction) + optional Mode 3 (Browser Fallback) |
| adu-targeted-page-viewer | Sheet manifest + on-demand plan viewing | Phase 2 + Phase 3C — PDF extraction + vision |
Key principle: Research happens before contractor questions. Questions informed by actual code requirements are specific and answerable in seconds. Vague questions waste the contractor's time.
| Input | Format | Required |
|---|---|---|
| Corrections letter | PDF or PNG (1-3 pages) | Yes |
| Plan binder | PDF (the full construction plan set) | Yes |
| City name | String (extracted from letter if not provided) | Auto-detected |
| Project address | String (extracted from letter if not provided) | Auto-detected |
All outputs are written to the session directory (e.g., correction-01/).
| Output | Format | Phase |
|---|---|---|
| corrections_parsed.json | Structured correction items | Phase 1 |
| sheet-manifest.json | Sheet ID ↔ page number mapping | Phase 2 |
| state_law_findings.json | Per-code-section lookups | Phase 3A |
| city_discovery.json | Key URLs for the city's ADU pages | Phase 3B |
| sheet_observations.json | What's on each referenced plan sheet | Phase 3C |
| city_research_findings.json | Municipal code, standard details, IBs (extracted content) | Phase 3.5 |
| corrections_categorized.json | Items with categories + research context (the main handoff artifact) | Phase 4 |
| contractor_questions.json | UI-ready question form data | Phase 4 |
This skill stops here. The contractor_questions.json goes to the UI. After the contractor answers, the adu-corrections-complete skill takes the session directory + contractor_answers.json and generates the final response package (response letter, professional scope, corrections report, sheet annotations).
Do NOT generate Phase 5 outputs (response letter, professional scope, etc.). That is the job of adu-corrections-complete. Generating them here creates TODO-filled drafts that the second skill doesn't use.
These two phases run simultaneously — they have no dependencies on each other.
Read the corrections letter visually (1-3 page PNG or PDF). No sub-skill needed — direct vision reading.
Extract each correction item as a structured object. Preserve the exact original wording. Identify all code references (CRC, CBC, ASCE, B&P Code, municipal code, etc.) and any sheet references.
Save as corrections_parsed.json. See references/output-schemas.md for the full schema.
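One item in that artifact can be sketched as a plain JSON object. The field names below are assumptions for illustration only; the authoritative schema is in references/output-schemas.md.

```python
import json

# Illustrative shape of one corrections_parsed.json item. Field names are
# assumptions for this sketch; see references/output-schemas.md for the
# real schema. The original wording is preserved verbatim.
item = {
    "item_number": 1,
    "original_text": "Show the sewer line size on the plumbing plan.",
    "code_references": ["CPC"],
    "sheet_references": ["A-1"],
}

print(json.dumps(item, indent=2))
```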
Run the adu-targeted-page-viewer skill workflow:
1. Check for pre-extracted pages in `project-files/pages-png/` and `project-files/title-blocks/`. If they exist, skip extraction and go straight to reading the cover sheet.
2. Otherwise, run `scripts/extract-pages.sh <binder.pdf> <output-dir>`.
3. Build `sheet-manifest.json`.

This takes ~90 seconds (or ~30 seconds if PNGs are pre-extracted) and produces the sheet-to-page mapping needed for Phase 3C.
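The skip-or-extract decision could be wrapped as follows. `scripts/extract-pages.sh` is the real script named above; the Python wrapper and its signature are hypothetical.

```python
import subprocess
from pathlib import Path

def ensure_pages(binder: Path, project_dir: Path) -> bool:
    """Return True if pre-extracted PNGs were reused, False if extraction ran.

    Hypothetical helper sketching the Phase 2 decision logic.
    """
    if (project_dir / "pages-png").is_dir() and (project_dir / "title-blocks").is_dir():
        # ~30 s path: PNGs already exist, skip straight to the cover sheet
        return True
    # ~90 s path: extract every page and title block from the binder
    subprocess.run(
        ["scripts/extract-pages.sh", str(binder), str(project_dir)],
        check=True,  # fail loudly if extraction breaks
    )
    return False
```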
After Phases 1+2 complete, launch three parallel research subagents. Each is specialized by domain. All receive the parsed corrections from Phase 1.
See references/subagent-prompts.md for the full subagent prompts.
- Subagent 3A (state law): `california-adu` (28 reference files, all offline)
- Subagent 3B (city discovery): `adu-city-research`, Mode 1 (Discovery) only; output: `city_discovery.json`, a categorized URL list for extraction
- Subagent 3C (sheet viewing): `adu-targeted-page-viewer`

After Phase 3 completes (all three subagents return), launch city content extraction using the URLs discovered by Subagent 3B.
One subagent runs adu-city-research Mode 2 (Targeted Extraction) against all discovered URLs.
- Input: `city_discovery.json` + correction topics
- Output: `city_research_findings.json`

Fan-out option: split the discovered URLs across 2-3 subagents by topic.
Each agent runs adu-city-research Mode 2 with its URL subset. Orchestrator merges results into a single city_research_findings.json.
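Under the fan-out option, the orchestrator's merge step is a straightforward list concatenation. The key names below are assumptions for this sketch, not the published schema.

```python
def merge_findings(partials: list[dict]) -> dict:
    """Merge per-topic Mode 2 results into one city_research_findings dict.

    Hypothetical helper; key names are illustrative only.
    """
    merged: dict = {"findings": [], "extraction_gaps": []}
    for part in partials:
        # Concatenate each subagent's extracted content and any gaps it hit
        merged["findings"].extend(part.get("findings", []))
        merged["extraction_gaps"].extend(part.get("extraction_gaps", []))
    return merged
```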
When to use fan-out: When Discovery returns 6+ URLs across multiple categories. For smaller cities with 2-3 URLs, single-agent is sufficient.
If Mode 2 extraction has gaps (URLs that returned empty, PDFs that couldn't be read, sections not found), launch one subagent running adu-city-research Mode 3 (Browser Fallback) with Chrome MCP.
- Input: `extraction_gaps` from the Mode 2 output
- Output: updated `city_research_findings.json`

Only run Browser Fallback if there are actionable gaps. Most cities' information is accessible via WebSearch + WebFetch. Browser Fallback is for the edge cases.
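The fallback gate can be a minimal check on the Mode 2 output. The `extraction_gaps` key matches the input named above, but this helper itself is illustrative.

```python
def needs_browser_fallback(findings: dict) -> bool:
    """Launch Mode 3 only when Mode 2 left actionable gaps.

    Hypothetical gate; assumes findings carries an extraction_gaps list.
    """
    gaps = findings.get("extraction_gaps", [])
    return len(gaps) > 0
```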
Single agent merges all three research streams and does the intelligence work.
For each correction item, cross-reference the three research streams: the state law findings (3A), the city research findings (3B/3.5), and the sheet observations (3C).
Then categorize:
| Category | Meaning | Example |
|---|---|---|
| AUTO_FIXABLE | Resolve by adding notes, marking checklists, updating labels | Missing CalGreen item, governing codes list |
| NEEDS_CONTRACTOR_INPUT | Requires specific facts from the contractor | Sewer line size, finished grade elevations |
| NEEDS_PROFESSIONAL | Requires licensed professional work (designer, engineer, HERS rater) | Structural calcs, fire-rated assembly detail |
Then generate questions for NEEDS_CONTRACTOR_INPUT items. Each question includes research_context explaining why it's being asked and what the code requires. See references/output-schemas.md for the contractor_questions.json schema.
Output files: corrections_categorized.json + contractor_questions.json
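The question-generation step can be sketched as a filter over the categorized items. Field names are assumptions; references/output-schemas.md holds the real `contractor_questions.json` schema.

```python
def contractor_questions(categorized: list[dict]) -> list[dict]:
    """Turn NEEDS_CONTRACTOR_INPUT items into UI question entries.

    Hypothetical helper; field names are illustrative only.
    """
    return [
        {
            "item_number": it["item_number"],
            "question": it["question"],
            # research_context: why this is asked + what the code requires
            "research_context": it["research_context"],
        }
        for it in categorized
        if it["category"] == "NEEDS_CONTRACTOR_INPUT"
    ]
```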
Return contractor_questions.json to the UI. This skill is now complete. Stop here.
What happens next: The UI renders the questions. The contractor answers. Then the adu-corrections-complete skill takes the session directory + contractor_answers.json and generates the response package. That is a separate agent invocation — not a continuation of this one.
| Phase | Time | Notes |
|---|---|---|
| Phase 1 | ~30 sec | Vision reading, 1-3 pages |
| Phase 2 | ~90 sec | PDF extraction + manifest building |
| Phase 3A | ~60 sec | Offline reference lookup |
| Phase 3B | ~30 sec | City URL discovery (WebSearch only) |
| Phase 3C | ~60 sec | Reading 5-8 PNGs |
| Phase 3.5 | ~60-90 sec | City content extraction (WebFetch) |
| Phase 3.5-fallback | ~2-3 min | Browser fallback (only if needed) |
| Phase 4 | ~2 min | Merge + categorize + questions |
| Total (no fallback) | ~4-5 min | Typical case |
| Total (with fallback) | ~6-8 min | Difficult city website |
- Phase 5 outputs belong to `adu-corrections-complete`; do not generate them here.
- `corrections_categorized.json` is the main handoff to the second skill. Every item must have its research context, code findings, and sheet observations fully documented — because the second skill runs cold with no conversation history.
- Resolve sheet references only through `sheet-manifest.json`. Never guess.

| File | Contents |
|---|---|
| references/output-schemas.md | JSON schemas for all output files — corrections_parsed, contractor_questions, contractor_answers |
| references/subagent-prompts.md | Full prompts for Phase 3 subagents (state law, city research, sheet viewer) + Phase 4 merge prompt |