City-side ADU plan review — the flip side of adu-corrections-flow. Takes a plan binder PDF + city name, reviews each sheet against code-grounded checklists, checks state and city compliance, and generates a draft corrections letter with confidence flags and reviewer blanks. Coordinates three sub-skills (california-adu for state law, adu-city-research OR a dedicated city skill for city rules, adu-targeted-page-viewer for plan extraction). Triggers when a city plan checker uploads a plan binder for AI-assisted review.
Review ADU construction plan submittals and generate a draft corrections letter. This is the city-side counterpart to the contractor-side adu-corrections-flow.
| Skill | Direction | Input | Output |
|---|---|---|---|
| adu-corrections-flow | Contractor → interprets corrections | Corrections letter + plans | Contractor questions + response package |
| adu-plan-review (this skill) | City → generates corrections | Plan binder + city name | Draft corrections letter |
Same domain knowledge, opposite direction.
| Skill | Role | When |
|---|---|---|
| adu-targeted-page-viewer | Extract PDF → PNGs + sheet manifest | Phase 1 |
| california-adu | State-level code compliance (28 reference files, offline) | Phase 3A |
| City-specific skill OR adu-city-research | City rules — see City Routing below | Phase 3B |
The city knowledge source depends on whether the city has been onboarded:
Input: city_name
IF dedicated city skill exists (e.g., placentia-adu/):
→ Tier 3: Load city skill reference files (offline, fast, ~30 sec)
ELSE:
→ Tier 2: Run adu-city-research
→ Mode 1 (Discovery): WebSearch for city URLs (~30 sec)
→ Mode 2 (Extraction): WebFetch discovered URLs (~60-90 sec)
→ Mode 3 (Browser Fallback): Only if extraction has gaps (~2-3 min)
How to detect onboarded cities: Check whether a city skill exists at skill/{city-slug}-adu/SKILL.md. If it does, the city is onboarded. If not, fall back to web research.
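The detection step above can be sketched as a small helper. The slug rule (lowercase, spaces to hyphens) is an assumption for illustration, not a documented spec:

```python
# Sketch of the city-routing check. The slugification rule is an
# assumption; the skill/ layout matches the path described above.
from pathlib import Path

def decide_tier(city_name: str, skills_root: str = "skill") -> str:
    slug = city_name.strip().lower().replace(" ", "-")
    if (Path(skills_root) / f"{slug}-adu" / "SKILL.md").is_file():
        return "tier3"  # onboarded: load city skill reference files offline
    return "tier2"      # not onboarded: run adu-city-research (web)
```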
Tier 1 (state law only) is always available — it's the california-adu skill. Even without any city knowledge, state law catches ~70% of common corrections.
| Input | Format | Required |
|---|---|---|
| Plan binder | PDF (full construction plan set) | Yes |
| City name | String | Yes |
| Project address | String | Recommended (improves city research) |
| Review scope | full or administrative | Optional — defaults to full |
Review scope options:
- administrative — Cover sheet, sheet index, stamps/signatures, governing codes, project data. Fast (~2 min), HIGH confidence. Good for completeness screening.
- full — All sheet types, all check categories. Slower (~5-8 min), mixed confidence. Produces the draft corrections letter.

All outputs are written to the session directory.
| Output | Format | Phase |
|---|---|---|
| sheet-manifest.json | Sheet ID ↔ page mapping | Phase 1 |
| sheet_findings.json | Per-sheet review findings with confidence flags | Phase 2 |
| state_compliance.json | State law findings relevant to plan issues | Phase 3A |
| city_compliance.json | City-specific findings (from city skill or web research) | Phase 3B |
| draft_corrections.json | Draft corrections letter — the main output | Phase 4 |
| review_summary.json | Stats: items found by confidence tier, review coverage, reviewer action items | Phase 4 |
Run adu-targeted-page-viewer:
- First check for pre-extracted PNGs at project-files/pages-png/ and project-files/title-blocks/. If they exist, skip extraction and go straight to reading the cover sheet.
- Command: scripts/extract-pages.sh <binder.pdf> <output-dir>
- Output: sheet-manifest.json
- Time: ~90 seconds (or ~30 seconds if PNGs are pre-extracted). Identical to Phase 2 of adu-corrections-flow.
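The skip-if-already-extracted check can be sketched as a decision step (a sketch only — it returns the decision rather than running the real extraction script):

```python
# Decision step for Phase 1: reuse pre-extracted PNGs when both
# directories already exist; otherwise extraction must run.
from pathlib import Path

def phase1_action(project_root: str = ".") -> str:
    root = Path(project_root)
    have_pngs = (root / "project-files" / "pages-png").is_dir()
    have_titles = (root / "project-files" / "title-blocks").is_dir()
    if have_pngs and have_titles:
        return "reuse"    # skip extraction; read the cover sheet directly
    return "extract"      # run scripts/extract-pages.sh <binder.pdf> <output-dir>
```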
Review each sheet against the relevant checklist reference file. Group sheets by discipline to limit subagent count.
Subagent grouping:
| Subagent | Sheets | Checklist Reference | Priority |
|---|---|---|---|
| Architectural A | Cover sheet, floor plan(s) | checklist-cover.md, checklist-floor-plan.md | HIGH — run first |
| Architectural B | Elevations, roof plan, building sections | checklist-elevations.md | HIGH |
| Site / Civil | Site plan, grading plan, utility plan | checklist-site-plan.md | HIGH |
| Structural | Foundation, framing, structural details | checklist-structural.md | LOW — flag for reviewer |
| MEP / Energy | Plumbing, mechanical, electrical, Title 24 | checklist-mep-energy.md | MEDIUM |
Rolling window: 3 subagents in flight. Architectural A + Architectural B + Site/Civil start first. As each completes, launch the next.
Each subagent receives: the page PNGs for its sheet group and the checklist reference file(s) listed in the table above.
Each subagent produces: a findings array for its sheets — one entry per check, with:
- check_id — Which checklist item (e.g., "1A" = architect stamp)
- sheet_id — Which sheet (e.g., "A1")
- status — PASS | FAIL | UNCLEAR | NOT_APPLICABLE
- visual_confidence — HIGH | MEDIUM | LOW
- observation — What the subagent actually saw (evidence)
- code_ref — Code section this check is grounded in

For administrative review scope: Only launch the Architectural A subagent (cover sheet + floor plan). Skip all others.
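A single finding entry might look like this — every value is invented for the example, including the code citation:

```python
# Illustrative finding entry. All values, including the code citation,
# are hypothetical examples, not real review output.
finding = {
    "check_id": "1A",             # checklist item: architect stamp
    "sheet_id": "A1",
    "status": "FAIL",             # PASS | FAIL | UNCLEAR | NOT_APPLICABLE
    "visual_confidence": "HIGH",  # HIGH | MEDIUM | LOW
    "observation": "No stamp or signature visible in the A1 title block",
    "code_ref": "B&P Code § 5536",  # hypothetical citation for illustration
}
```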
~2-3 minutes for full review (5 subagents, 3-at-a-time rolling window).
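The 3-at-a-time rolling window can be sketched with a bounded worker pool; `launch_subagent` is a hypothetical stand-in for the real dispatch call:

```python
# Rolling window: at most `window` subagents in flight; as one finishes,
# the next queued group starts automatically. launch_subagent is a placeholder.
from concurrent.futures import ThreadPoolExecutor, as_completed

SUBAGENTS = [  # priority order — HIGH groups first
    "architectural-a", "architectural-b", "site-civil",
    "mep-energy", "structural",
]

def launch_subagent(name: str) -> dict:
    return {"subagent": name, "findings": []}  # placeholder review result

def run_phase2(window: int = 3) -> list:
    with ThreadPoolExecutor(max_workers=window) as pool:
        futures = [pool.submit(launch_subagent, s) for s in SUBAGENTS]
        return [f.result() for f in as_completed(futures)]
```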
After Phase 2 completes, launch two concurrent subagents to verify findings against code.
- Skill: california-adu
- Input: FAIL and UNCLEAR findings from Phase 2
- Output: state_compliance.json — per-finding code verification with exact citations

Why this matters: The checklist reference files cite code sections, but the california-adu skill has the detailed rules with exceptions and thresholds. Phase 3A catches false positives — e.g., the checklist flags a 3-foot setback, but the ADU is a conversion and conversions have no setback requirement.
Route based on City Routing decision (see above).
If onboarded city (Tier 3):
- Load the city skill's reference files (offline) and check findings against the local rules
- Output: city_compliance.json
If web research (Tier 2):
- Run adu-city-research Mode 1 → Mode 2 → optional Mode 3
- Output: city_compliance.json — city-specific requirements, local amendments, standard details that apply to the findings
Single agent merges all inputs and produces the corrections letter.
Inputs to this phase:
- sheet_findings.json (Phase 2) — what the AI found on the plans
- state_compliance.json (Phase 3A) — state law verification
- city_compliance.json (Phase 3B) — city-specific rules
- sheet-manifest.json (Phase 1) — for sheet references

For each finding, apply this filter:
| Condition | Action |
|---|---|
| Finding confirmed by state AND/OR city code | Include in corrections letter with code citation |
| Finding confirmed by code but visual confidence is LOW | Include with [VERIFY] flag |
| Finding not confirmed by any code (no legal basis) | DROP IT — do not include |
| Finding relates to engineering/structural adequacy | Include as [REVIEWER: ...] blank |
| Finding requires subjective judgment | DROP IT — prohibited for ADUs per Gov. Code § 66314(b)(1) |
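The filter above can be sketched in code. The field names and the precedence order are assumptions for illustration, not the real schema:

```python
def triage(finding: dict) -> str:
    """Apply the Phase 4 filter to one merged finding (sketch only;
    field names are assumed, not taken from the real schema)."""
    if finding.get("subjective"):
        return "DROP"            # subjective standards prohibited for ADUs
    if finding.get("engineering"):
        return "REVIEWER_BLANK"  # include as a [REVIEWER: ...] blank
    confirmed = finding.get("state_confirmed") or finding.get("city_confirmed")
    if not confirmed:
        return "DROP"            # no legal basis — do not include
    if finding.get("visual_confidence") == "LOW":
        return "VERIFY"          # include with a [VERIFY] flag
    return "INCLUDE"             # include with code citation
```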
Output format — draft_corrections.json:
Each correction item includes:
- item_number — Sequential
- section — Building, Fire/Life Safety, Site/Civil, Planning/Zoning
- description — The correction text (what needs to be fixed)
- code_citation — Specific code section(s)
- sheet_reference — Which sheet(s) are affected
- confidence — HIGH | MEDIUM | LOW
- visual_confidence — How certain the AI is about the visual observation
- reviewer_action — CONFIRM (quick check) | VERIFY (needs closer look) | COMPLETE (reviewer must fill in)

See references/output-schemas.md for the full JSON schema.
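A single item might look like this — illustrative values only; references/output-schemas.md holds the authoritative schema:

```python
# Illustrative correction item. All values, including the citation,
# are invented for the example.
item = {
    "item_number": 1,
    "section": "Site/Civil",
    "description": "Dimension the rear setback for the detached ADU on the site plan.",
    "code_citation": "Gov. Code § 66314",  # illustrative citation
    "sheet_reference": ["A1", "C1"],
    "confidence": "HIGH",
    "visual_confidence": "MEDIUM",
    "reviewer_action": "CONFIRM",  # CONFIRM | VERIFY | COMPLETE
}
```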
Phase 4 also outputs draft_corrections.md — a formatted markdown version of the corrections letter. This markdown is the handoff to Phase 5 (PDF generation) and also serves as the frontend-renderable version.
Launch the adu-corrections-pdf skill as a sub-agent. This skill uses the document-skills/pdf primitive (reportlab, pdf-lib, pypdfium2) for the actual PDF generation. It only handles formatting — no research, no content changes.
Sub-agent skills loaded:
- adu-corrections-pdf — domain formatting (letterhead, badges, sections)
- document-skills/pdf — PDF generation primitives (reportlab, pypdfium2, etc.)

Sub-agent input:
- draft_corrections.md from Phase 4

Sub-agent output:
- corrections_letter.pdf — Professional formatted PDF with city header, confidence badges, proper pagination
- qa_screenshot.png — Screenshot of page 1

After the sub-agent returns the screenshot, the main agent (not the sub-agent) reviews it:
LOOP (max 2 retries):
1. View qa_screenshot.png
2. Check:
- Header correct? (city name, project info, "DRAFT" visible)
- Sections formatted? (numbered items, horizontal rules)
- Tables readable? (no overflow, columns aligned)
- No layout breaks? (text not cut off, pages not blank)
- Footer present? (page numbers, draft disclaimer)
3. IF everything looks good → Phase 5 COMPLETE
4. IF issues found → re-invoke sub-agent with fix_instructions:
- Describe what's wrong: "Table on page 2 overflows right margin"
- Sub-agent applies fix and regenerates PDF + new screenshot
- Return to step 1
Max 2 retries. If the PDF still has issues after 2 fix attempts, deliver it as-is with a note to the user. A slightly imperfect PDF is better than an infinite loop.
The draft_corrections.md serves double duty:
- Input to the Phase 5 PDF generation sub-agent
- Frontend-renderable version of the letter

Both consumers get the same content. The markdown is the source of truth.
| Phase | Time | Notes |
|---|---|---|
| Phase 1 | ~90 sec | PDF extraction + manifest |
| Phase 2 | ~2-3 min | 5 subagents, 3-at-a-time rolling window |
| Phase 3A | ~60 sec | State law lookup (offline) |
| Phase 3B (Tier 3) | ~30 sec | Onboarded city — offline |
| Phase 3B (Tier 2) | ~90 sec–3 min | Web research — depends on city |
| Phase 4 | ~2 min | Merge + filter + format markdown |
| Phase 5 | ~30-60 sec | PDF generation sub-agent + QA screenshot |
| Total (Tier 3 city) | ~6-8 min | |
| Total (Tier 2 city) | ~7-10 min | |
| Administrative scope only | ~3-4 min | Cover sheet checks only |
| File | Sheet Type | Status |
|---|---|---|
| references/checklist-cover.md | Cover / title sheet | Draft |
| references/checklist-site-plan.md | Site plan, grading, utilities | TODO |
| references/checklist-floor-plan.md | Floor plan(s) | TODO |
| references/checklist-elevations.md | Elevations, roof plan, sections | TODO |
| references/checklist-structural.md | Foundation, framing, details | TODO |
| references/checklist-mep-energy.md | Plumbing, mechanical, electrical, Title 24 | TODO |
| File | Contents | Status |
|---|---|---|
| references/output-schemas.md | JSON schemas for all output files | TODO |
| references/corrections-letter-template.md | How to format the draft corrections letter | TODO |
| Skill | Role | Reference |
|---|---|---|
| california-adu | State law (Phase 3A) | california-adu/AGENTS.md — 28 reference files |
| adu-city-research | City rules via web (Phase 3B Tier 2) | Modes 1/2/3 in its SKILL.md |
| adu-targeted-page-viewer | Plan extraction (Phase 1) | Sheet manifest workflow in its SKILL.md |
| adu-corrections-pdf | PDF generation (Phase 5) | Letter format + CSS in its SKILL.md |
| document-skills/pdf | PDF primitives (loaded by Phase 5 sub-agent) | reportlab, pypdfium2, pdf-lib, pypdf |
For findings that require engineering judgment, insert [REVIEWER: describe what needs human assessment] rather than attempting an assessment. The AI's job is the repeatable 60%, not the expert 40%. The california-adu skill is the authority on state requirements.