City-side ADU plan review — the flip side of adu-corrections-flow. Takes a plan binder PDF + city name, reviews each sheet against code-grounded checklists, checks state and city compliance, and generates a draft corrections letter with confidence flags and reviewer blanks. Coordinates three sub-skills (california-adu for state law, adu-city-research OR a dedicated city skill for city rules, adu-targeted-page-viewer for plan extraction). Triggers when a city plan checker uploads a plan binder for AI-assisted review.
Review ADU construction plan submittals and generate a draft corrections letter. This is the city-side counterpart to the contractor-side adu-corrections-flow.
| Skill | Direction | Input | Output |
|---|---|---|---|
| adu-corrections-flow | Contractor → interprets corrections | Corrections letter + plans | Contractor questions + response package |
| adu-plan-review (this skill) | City → generates corrections | Plan binder + city name | Draft corrections letter |
Same domain knowledge, opposite direction.
| Skill | Role | When |
|---|---|---|
| adu-targeted-page-viewer | Extract PDF → PNGs + sheet manifest | Phase 1 |
| california-adu | State-level code compliance (28 reference files, offline) | Phase 3A |
| City-specific skill OR adu-city-research | City rules — see City Routing below | Phase 3B |
The city knowledge source depends on whether the city has been onboarded:
Input: city_name
IF dedicated city skill exists (e.g., placentia-adu/):
→ Tier 3: Load city skill reference files (offline, fast, ~30 sec)
ELSE:
→ Tier 2: Run adu-city-research
→ Mode 1 (Discovery): WebSearch for city URLs (~30 sec)
→ Mode 2 (Extraction): WebFetch discovered URLs (~60-90 sec)
→ Mode 3 (Browser Fallback): Only if extraction has gaps (~2-3 min)
How to detect onboarded cities: Check for a city skill directory at skill/{city-slug}-adu/SKILL.md. If it exists, the city is onboarded. If not, fall back to web research.
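The detection step above can be sketched in Python. The `skill/` root and the lowercase-hyphenated slug convention are assumptions inferred from the path shown above.

```python
from pathlib import Path

def city_slug(city_name: str) -> str:
    # Assumed slug convention: lowercase, spaces become hyphens ("La Habra" -> "la-habra")
    return city_name.strip().lower().replace(" ", "-")

def resolve_city_tier(city_name: str, skills_root: Path = Path("skill")) -> str:
    """Return 'tier3' when a dedicated city skill is onboarded, else 'tier2'."""
    skill_md = skills_root / f"{city_slug(city_name)}-adu" / "SKILL.md"
    return "tier3" if skill_md.is_file() else "tier2"
```

A Tier 2 result then routes to adu-city-research as described above.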
Tier 1 (state law only) is always available — it's the california-adu skill. Even without any city knowledge, state law catches ~70% of common corrections.
| Input | Format | Required |
|---|---|---|
| Plan binder | PDF (full construction plan set) | Yes |
| City name | String | Yes |
| Project address | String | Recommended (improves city research) |
| Review scope | full or administrative | Optional — defaults to full |
Review scope options:
- administrative — Cover sheet, sheet index, stamps/signatures, governing codes, project data. Fast (~2 min), HIGH confidence. Good for completeness screening.
- full — All sheet types, all check categories. Slower (~5-8 min), mixed confidence. Produces the draft corrections letter.

All outputs are written to the session directory.
| Output | Format | Phase |
|---|---|---|
| sheet-manifest.json | Sheet ID ↔ page mapping | Phase 1 (pre-loaded) |
| findings-arch-a.json | Architectural A findings (cover + floor plans) | Phase 2 |
| findings-arch-b.json | Architectural B findings (elevations + sections) | Phase 2 |
| findings-site-civil.json | Site/Civil findings (site plan + energy/code) | Phase 2 |
| findings-structural.json | Structural findings (foundation + framing + details) | Phase 2 |
| findings-mep-energy.json | MEP/Energy findings (plumbing + mechanical + electrical + T24) | Phase 2 |
| state_compliance.json | State law findings relevant to plan issues | Phase 3A |
| city_compliance.json | City-specific findings (from city skill or web research) | Phase 3B |
| draft_corrections.json | Draft corrections letter — the main output | Phase 4 |
| draft_corrections.md | Formatted markdown corrections letter | Phase 4 |
| review_summary.json | Stats: items found by confidence tier, review coverage, reviewer action items | Phase 4 |
Run adu-targeted-page-viewer:
- Pre-check: project-files/pages-png/ and project-files/title-blocks/. If they exist, skip extraction and go straight to reading the cover sheet.
- Script: scripts/extract-pages.sh <binder.pdf> <output-dir>
- Output: sheet-manifest.json
- Time: ~90 seconds (or ~30 seconds if PNGs are pre-extracted). Identical to Phase 2 of adu-corrections-flow.
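The skip-if-pre-extracted check can be sketched as below; the directory layout and script path come from the step above, and wrapping the shell script in a Python guard is an illustrative choice, not part of the skill.

```python
import subprocess
from pathlib import Path

def ensure_pages_extracted(binder_pdf: str, project_dir: Path) -> bool:
    """Run extraction only when PNGs are not already present; True if extraction ran."""
    pages = project_dir / "pages-png"
    titles = project_dir / "title-blocks"
    if pages.is_dir() and titles.is_dir() and any(pages.glob("*.png")):
        return False  # pre-extracted: go straight to reading the cover sheet
    subprocess.run(
        ["scripts/extract-pages.sh", binder_pdf, str(project_dir)], check=True
    )
    return True
```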
Review each sheet against the relevant checklist reference file. Group sheets by discipline to limit subagent count. Subagents write findings to files — they do NOT return findings to the orchestrator.
Subagent grouping:
| Subagent | Sheets | Checklist Reference | Output File |
|---|---|---|---|
| arch-a | Cover sheet, floor plan(s) | checklist-cover.md, checklist-floor-plan.md | output/findings-arch-a.json |
| arch-b | Elevations, roof plan, building sections | checklist-elevations.md | output/findings-arch-b.json |
| site-civil | Site plan, grading plan, utility plan | checklist-site-plan.md | output/findings-site-civil.json |
| structural | Foundation, framing, structural details | checklist-structural.md | output/findings-structural.json |
| mep-energy | Plumbing, mechanical, electrical, Title 24 | checklist-mep-energy.md | output/findings-mep-energy.json |
Rolling window: 3 subagents in flight. arch-a + arch-b + site-civil start first. As each completes, launch the next.
Each subagent receives its assigned sheet PNGs and the checklist reference file(s) listed in the table above.
Each subagent WRITES a findings JSON file containing an array — one entry per check:
- check_id — Which checklist item (e.g., "1A" = architect stamp)
- sheet_id — Which sheet (e.g., "A1")
- status — PASS | FAIL | UNCLEAR | NOT_APPLICABLE
- visual_confidence — HIGH | MEDIUM | LOW
- observation — What the subagent actually saw (evidence)
- code_ref — Code section this check is grounded in

Each subagent RETURNS only a short summary (e.g., "Done, wrote 12 findings to findings-arch-a.json"). The orchestrator collects this summary via TaskOutput but does NOT read the findings file.
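A findings entry could look like the following sketch; the example values and the citation placeholder are illustrative, not taken from a real review, and the shape check is a hypothetical helper.

```python
FINDING_KEYS = {"check_id", "sheet_id", "status", "visual_confidence",
                "observation", "code_ref"}
STATUSES = {"PASS", "FAIL", "UNCLEAR", "NOT_APPLICABLE"}
LEVELS = {"HIGH", "MEDIUM", "LOW"}

def is_valid_finding(f: dict) -> bool:
    """Cheap shape check a subagent could run before writing its findings file."""
    return (FINDING_KEYS <= f.keys()
            and f["status"] in STATUSES
            and f["visual_confidence"] in LEVELS)

example_finding = {
    "check_id": "1A",              # architect stamp check
    "sheet_id": "A1",
    "status": "FAIL",
    "visual_confidence": "HIGH",
    "observation": "No architect stamp visible in the title block",
    "code_ref": "<code section>",  # placeholder -- real entries cite a specific section
}
```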
Orchestrator verification: After all 5 subagents complete, use Glob to verify all 5 findings-*.json files exist. Do NOT read their contents.
~2-3 minutes for full review (5 subagents, 3-at-a-time rolling window).
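The 3-at-a-time rolling window described above can be realized with a bounded worker pool; `run_subagent` here is a hypothetical stand-in for launching a review subagent and collecting its short summary.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

SUBAGENTS = ["arch-a", "arch-b", "site-civil", "structural", "mep-energy"]

def run_subagent(name: str) -> str:
    # Stand-in: a real implementation would launch the Task and await its summary.
    return f"Done, wrote findings to findings-{name}.json"

def rolling_review(max_in_flight: int = 3) -> list:
    """Keep at most max_in_flight subagents running; start the next as one finishes."""
    with ThreadPoolExecutor(max_workers=max_in_flight) as pool:
        futures = [pool.submit(run_subagent, n) for n in SUBAGENTS]
        return [f.result() for f in as_completed(futures)]
```

The pool naturally starts arch-a, arch-b, and site-civil first and launches the remaining two as slots free up.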
After Phase 2 completes, launch two concurrent subagents to verify findings against code. Both subagents read findings from disk and write results to disk.
State law subagent:
- Skill: california-adu
- Reads: findings-*.json files from the output directory. Focus on FAIL and UNCLEAR findings.
- Writes: output/state_compliance.json — per-finding code verification with exact citations

Why this matters: The checklist reference files cite code sections, but the california-adu skill has the detailed rules with exceptions and thresholds. Phase 3A catches false positives — e.g., the checklist flags a 3-foot setback, but the ADU is a conversion and conversions have no setback requirement.
Route based on City Routing decision (see above).
If onboarded city (Tier 3):
- Reads: findings-*.json files from the output directory

If web research (Tier 2):
- Reads: findings-*.json files from the output directory
- Runs: adu-city-research Mode 1 → Mode 2 → optional Mode 3

Output: Write output/city_compliance.json — city-specific requirements, local amendments, standard details that apply to the findings
Return: Short summary only
Orchestrator verification: After both subagents complete, use Glob to verify state_compliance.json and city_compliance.json exist. Do NOT read their contents.
This is a dedicated subagent — the orchestrator does NOT merge findings itself. The Phase 4 subagent reads all artifact files from disk and produces the corrections letter.
Subagent reads from disk:
- output/findings-*.json (5 files from Phase 2) — what the AI found on the plans
- output/state_compliance.json (Phase 3A) — state law verification
- output/city_compliance.json (Phase 3B) — city-specific rules
- output/sheet-manifest.json (Phase 1) — for sheet references

For each finding, apply this filter:
| Condition | Action |
|---|---|
| Finding confirmed by state AND/OR city code | Include in corrections letter with code citation |
| Finding confirmed by code but visual confidence is LOW | Include with [VERIFY] flag |
| Finding not confirmed by any code (no legal basis) | DROP IT — do not include |
| Finding relates to engineering/structural adequacy | Include as [REVIEWER: ...] blank |
| Finding requires subjective judgment | DROP IT — prohibited for ADUs per Gov. Code § 66314(b)(1) |
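The filter table can be sketched as a triage function. The input field names (code_confirmed, engineering_adequacy, subjective, visual_confidence) are assumptions for illustration, since the actual merged-finding schema lives in references/output-schemas.md.

```python
def triage(finding: dict) -> tuple:
    """Map a merged finding to (action, flag) per the filter table.

    Field names are assumed: code_confirmed, engineering_adequacy,
    subjective, visual_confidence.
    """
    if finding.get("subjective"):
        return ("drop", None)            # subjective standards are prohibited for ADUs
    if finding.get("engineering_adequacy"):
        return ("include", "REVIEWER")   # leave a blank for the human reviewer
    if not finding.get("code_confirmed"):
        return ("drop", None)            # no legal basis, no correction
    if finding.get("visual_confidence") == "LOW":
        return ("include", "VERIFY")
    return ("include", None)
```

Categorical rules (subjective, engineering adequacy) are checked before the confidence rules so they win regardless of code confirmation.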
Output format — draft_corrections.json:
Each correction item includes:
- item_number — Sequential
- section — Building, Fire/Life Safety, Site/Civil, Planning/Zoning
- description — The correction text (what needs to be fixed)
- code_citation — Specific code section(s)
- sheet_reference — Which sheet(s) are affected
- confidence — HIGH | MEDIUM | LOW
- visual_confidence — How certain the AI is about the visual observation
- reviewer_action — CONFIRM (quick check) | VERIFY (needs closer look) | COMPLETE (reviewer must fill in)

See references/output-schemas.md for the full JSON schema.
Subagent writes 3 files:
- output/draft_corrections.json — structured corrections data
- output/draft_corrections.md — formatted markdown corrections letter (primary output for frontend + PDF conversion)
- output/review_summary.json — stats: items by confidence tier, review coverage, reviewer action items

Return: Short summary only (e.g., "Done, generated 14 corrections, wrote draft_corrections.json/md + review_summary.json")
PDF generation is handled externally — after this agent completes, the server converts draft_corrections.md to PDF outside the sandbox. Do NOT attempt PDF generation. Do NOT use adu-corrections-pdf. Do NOT install reportlab, puppeteer, or any PDF tools. Your job ends at draft_corrections.md.
Orchestrator verification: After Phase 4 subagent completes, verify draft_corrections.json, draft_corrections.md, and review_summary.json exist. Do NOT read their contents.
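The existence-only verification can be sketched like this; the point is that the orchestrator confirms the files are on disk without ever reading their contents.

```python
from pathlib import Path

PHASE4_FILES = ["draft_corrections.json", "draft_corrections.md", "review_summary.json"]

def phase4_outputs_exist(output_dir: Path) -> bool:
    """Verify the Phase 4 artifacts exist without reading their contents."""
    return all((output_dir / name).is_file() for name in PHASE4_FILES)
```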
| Phase | Time | Notes |
|---|---|---|
| Phase 1 | ~90 sec | PDF extraction + manifest |
| Phase 2 | ~2-3 min | 5 subagents, 3-at-a-time rolling window |
| Phase 3A | ~60 sec | State law lookup (offline) |
| Phase 3B (Tier 3) | ~30 sec | Onboarded city — offline |
| Phase 3B (Tier 2) | ~90 sec–3 min | Web research — depends on city |
| Phase 4 | ~2 min | Merge + filter + format markdown |
| Total (Tier 3 city) | ~5-7 min | |
| Total (Tier 2 city) | ~7-10 min | |
| Administrative scope only | ~3-4 min | Cover sheet checks only |
| File | Sheet Type | Status |
|---|---|---|
| references/checklist-cover.md | Cover / title sheet | Draft |
| references/checklist-site-plan.md | Site plan, grading, utilities | TODO |
| references/checklist-floor-plan.md | Floor plan(s) | TODO |
| references/checklist-elevations.md | Elevations, roof plan, sections | TODO |
| references/checklist-structural.md | Foundation, framing, details | TODO |
| references/checklist-mep-energy.md | Plumbing, mechanical, electrical, Title 24 | TODO |
| File | Contents | Status |
|---|---|---|
| references/output-schemas.md | JSON schemas for all output files | TODO |
| references/corrections-letter-template.md | How to format the draft corrections letter | TODO |
| Skill | Role | Reference |
|---|---|---|
| california-adu | State law (Phase 3A) | california-adu/AGENTS.md — 28 reference files |
| adu-city-research | City rules via web (Phase 3B Tier 2) | Modes 1/2/3 in its SKILL.md |
| adu-targeted-page-viewer | Plan extraction (Phase 1) | Sheet manifest workflow in its SKILL.md |
- Where human judgment is required, write [REVIEWER: describe what needs human assessment] rather than attempting an assessment. The AI's job is the repeatable 60%, not the expert 40%.
- The california-adu skill is the authority on state requirements.