Analyze a local codebase folder or GitHub repository URL and generate a grounded onboarding explainer with clear markdown docs, focused Mermaid/SVG/PNG diagrams, evidence anchors, and explanation-quality scoring. Use when users need a codebase explained in simple, concrete language for PM/design/new engineer onboarding.
Builds explanation-first repository explainers from local folders or GitHub URLs.
A good run must produce all of the artifacts listed below. The skill should fail quality gates if the output is generic, vague, or weakly grounded.
- `overview/OVERVIEW.md` for the plain-language explanation.
- `deep/*.md` for architecture, modules, flows, dependencies, and glossary.
- `diagrams/*.mmd` plus rendered `diagrams/svg/*.svg` and `diagrams/png/*.png`.
- `diagrams/excalidraw/*.excalidraw.json` plus mirrored preview assets under `diagrams/excalidraw/svg/*.svg` and `diagrams/excalidraw/png/*.png`.
- `meta/explanation_plan.json` describing the intended narrative.
- `meta/explanation_quality.json` scoring clarity, specificity, grounding, usefulness, diagram usefulness, and honesty.
- `meta/excalidraw_report.json` proving whether editable Excalidraw scenes were created or why that export was blocked.
- `meta/*.json` for indexing, verification, confidence, attribution, and quality reports.

See references/output-contract.md for exact artifacts and references/evaluation-rubric.md for the passing bar.
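The artifact layout above can be spot-checked programmatically. This is a minimal sketch, not part of the skill: the `REQUIRED` paths are taken from this document, while the full contract lives in references/output-contract.md.

```python
# Hypothetical sketch: verify an output directory against the artifact
# list above. The paths here are assumptions taken from this document;
# references/output-contract.md is the authoritative contract.
from pathlib import Path

REQUIRED = [
    "overview/OVERVIEW.md",
    "meta/explanation_plan.json",
    "meta/explanation_quality.json",
    "meta/excalidraw_report.json",
]

def missing_artifacts(output_dir: str) -> list[str]:
    """Return required artifacts that are absent from output_dir."""
    root = Path(output_dir)
    return [rel for rel in REQUIRED if not (root / rel).exists()]
```

A non-empty return value would indicate an incomplete run before quality scoring even begins.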
Run from this skill directory:
```shell
python scripts/analyze.py analyze \
  --source <local_path_or_github_url> \
  --output <output_dir> \
  --mode <quick|standard|deep> \
  --format <markdown|html|both> \
  --explainer-type <onboarding|project-recap|plan-review|diff-review> \
  --audience <nontech|mixed|engineering> \
  --overview-length <short|medium|long> \
  --since <time_window> \
  --git-ref <ref> \
  --plan-file <path> \
  --include-glob <pattern> \
  --exclude-glob <pattern> \
  --enable-llm-descriptions <true|false> \
  --enable-excalidraw-export <true|false> \
  --enable-official-excalidraw-bridge <true|false> \
  --ask-before-llm-use <true|false> \
  --prompt-for-llm-key <true|false> \
  --persist-llm-key <ask|true|false> \
  --enable-web-enrichment <true|false>
```
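As a concrete illustration, a typical onboarding run might be assembled like this. The source and output paths and the chosen flag values are example choices for this sketch, not requirements:

```python
# Illustrative only: assembles the analyze command shown above for a
# typical onboarding run. Paths and flag values are example choices.
import subprocess

def build_analyze_cmd(source: str, output: str) -> list[str]:
    """Build the argv for a standard, non-technical onboarding run."""
    return [
        "python", "scripts/analyze.py", "analyze",
        "--source", source,
        "--output", output,
        "--mode", "standard",
        "--explainer-type", "onboarding",
        "--audience", "nontech",
    ]

cmd = build_analyze_cmd("../my-repo", "./explainer-out")
# subprocess.run(cmd, check=True)  # execute from the skill directory
```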
Defaults:
- `mode=standard`
- `format=markdown`
- `explainer-type=onboarding`
- `audience=nontech`
- `overview-length=medium`
- `enable-llm-descriptions=true`
- `enable-excalidraw-export=true`
- `enable-official-excalidraw-bridge=false`
- `ask-before-llm-use=false`
- `prompt-for-llm-key=true`
- `persist-llm-key=ask`
- `enable-web-enrichment=true`

LLM descriptions are produced by `scripts/llm_describe.py`. When `CODE_EXPLAINER_LLM_API_KEY` or `OPENAI_API_KEY` is set, the skill can use a live model. A prompted key can be persisted to a `.env` file for future runs. `CODE_EXPLAINER_MOCK_LLM=true` is only for explicit development or offline test scenarios and is not the normal production path. Each run writes `explanation_plan.json` with top modules, audience starting points, diagram purposes, and caveats.

Run the shipped self-audit:
```shell
python scripts/self_audit.py
```
This runs the skill on fixture repositories in assets/fixtures/, uses the grounded mock explainer path, and writes proof artifacts under .audit_tmp/code-explainer-self/.
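The live-versus-mock selection described above can be sketched roughly as follows. The function name and the returned mode labels are hypothetical; the real logic lives in `scripts/llm_describe.py`:

```python
# Minimal sketch of the environment-driven LLM mode selection described
# above. Function name and return labels are illustrative assumptions.
def resolve_llm_mode(env: dict) -> str:
    """Return 'mock', 'live', or 'disabled' from environment settings."""
    if env.get("CODE_EXPLAINER_MOCK_LLM") == "true":
        return "mock"  # explicit development / offline test path only
    if env.get("CODE_EXPLAINER_LLM_API_KEY") or env.get("OPENAI_API_KEY"):
        return "live"  # a live model is available
    return "disabled"  # the skill may prompt for a key instead
```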
Required:
- Python 3.10+
- Node.js 18+ and npm
- git when `--source` is a GitHub URL

Recommended:

- Mermaid CLI (`mmdc`) from `@mermaid-js/mermaid-cli` for higher-fidelity diagram rendering

Install dependencies:
```powershell
powershell -ExecutionPolicy Bypass -File .\scripts\install_runtime.ps1
```

or

```shell
bash ./scripts/install_runtime.sh
```
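Because the Mermaid CLI is recommended rather than required, a run needs to detect whether `mmdc` is installed. A minimal detection sketch, assuming a built-in fallback renderer exists (the `"fallback"` label is an assumption, not a documented mode):

```python
# Hypothetical sketch: prefer the recommended Mermaid CLI when present,
# otherwise fall back. The "fallback" renderer name is assumed.
import shutil

def pick_diagram_renderer() -> str:
    """Return 'mmdc' if the Mermaid CLI is on PATH, else 'fallback'."""
    if shutil.which("mmdc"):
        return "mmdc"
    return "fallback"
```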
The skill writes a `.env` file in the skill directory when the user chooses to persist the prompted LLM key. The `@excalidraw/mermaid-to-excalidraw` bridge is opt-in only via `--enable-official-excalidraw-bridge true` and should be treated as a development experiment, not a required runtime dependency.

Reference docs:

- references/output-contract.md
- references/diagram-style-guide.md
- references/persona-writing-guide.md
- references/mode-behavior.md
- references/evaluation-rubric.md