Cross-reference audit for stale counts, broken links, and doc drift across all files
Gather live values:

```bash
# Package version (must match in both files)
grep -m1 __version__ src/selectools/__init__.py
grep -m1 "version" pyproject.toml

# Test count
pytest tests/ --collect-only -q 2>/dev/null | tail -1

# Model count
grep -c "ModelInfo(" src/selectools/models.py

# Example count and docs module count
ls examples/*.py | wc -l | tr -d ' '
ls docs/modules/*.md | wc -l | tr -d ' '

# Built-in @tool count
grep -roh "@tool" src/selectools/toolbox/*.py | wc -l | tr -d ' '

# Observer event counts (sync on_* and async a_on_* hooks)
python3 -c "from selectools.observer import AgentObserver; print(len([m for m in dir(AgentObserver) if m.startswith('on_')]))" 2>/dev/null
python3 -c "from selectools.observer import AsyncAgentObserver; print(len([m for m in dir(AsyncAgentObserver) if m.startswith('a_on_')]))" 2>/dev/null

# StepType count
python3 -c "from selectools.trace import StepType; print(len(StepType))" 2>/dev/null

# Backend count (hardcoded)
echo "4 (File, SQLite, Redis, Supabase)"

# Last example file (alphabetical order)
ls examples/*.py | tail -1
```

Verify the `__version__` in `src/selectools/__init__.py` matches the version in `pyproject.toml`.
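The version cross-check can be scripted rather than eyeballed. A minimal sketch, using stand-in files so it runs anywhere; it assumes double-quoted version strings, and the two paths should point at the real `src/selectools/__init__.py` and `pyproject.toml` during the audit:

```shell
#!/bin/sh
# Stand-in copies of the two files (assumption: versions are double-quoted).
tmp=$(mktemp -d)
printf '__version__ = "0.17.3"\n' > "$tmp/__init__.py"
printf '[project]\nversion = "0.17.3"\n' > "$tmp/pyproject.toml"

# Extract the quoted version string from each file.
init_ver=$(grep -m1 __version__ "$tmp/__init__.py" | sed 's/.*"\([^"]*\)".*/\1/')
toml_ver=$(grep -m1 '^version' "$tmp/pyproject.toml" | sed 's/.*"\([^"]*\)".*/\1/')

# Compare and report.
if [ "$init_ver" = "$toml_ver" ]; then
  echo "version: MATCH ($init_ver)"
else
  echo "version: MISMATCH (__init__.py=$init_ver, pyproject.toml=$toml_ver)"
fi
rm -rf "$tmp"
```

For the stand-in files this prints `version: MATCH (0.17.3)`; a mismatch line is what the audit would flag.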
Search each file for hardcoded counts. Report any mismatch against live values.
- Test count — check: README.md, docs/index.md, CLAUDE.md, CONTRIBUTING.md, docs/CONTRIBUTING.md, landing/index.html
- Example count — check: README.md, CLAUDE.md
- Model count — check: README.md, docs/index.md, CLAUDE.md, docs/modules/MODELS.md
- Observer event count — check: CLAUDE.md, docs/modules/AGENT.md, docs/ARCHITECTURE.md, docs/QUICKSTART.md
- StepType count — check: CLAUDE.md, tests/test_phase1_design_patterns.py

Check each file for descriptions that have drifted from current behavior:

- CLAUDE.md
- docs/ARCHITECTURE.md
- docs/modules/AGENT.md
- docs/modules/KNOWLEDGE.md
- docs/modules/TOOLS.md — `requires_approval` parameter, `_serialize_result()` behavior

Verify each v0.17.3+ feature has:

- a page under docs/modules/
- a script under examples/
- an entry in the mkdocs.yml nav

Features to check: Budget, Cancellation, Token Estimation, Model Switching, SimpleStepObserver, Structured Results, Approval Gate, Reasoning Strategies, Tool Result Caching
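The three-way coverage check can be sketched as a loop over feature slugs. Everything below is an assumption about naming conventions (docs page `BUDGET.md`, example `budget_*.py`, nav entry containing the feature name); stand-in files keep the sketch runnable, and the loop would be extended with the remaining feature slugs:

```shell
#!/bin/sh
# Stand-in repo layout (assumed naming scheme; adjust to the real one).
tmp=$(mktemp -d)
mkdir -p "$tmp/docs/modules" "$tmp/examples"
printf '# Budget\n' > "$tmp/docs/modules/BUDGET.md"
printf 'print("demo")\n' > "$tmp/examples/budget_tracking.py"
printf 'nav:\n  - Budget: modules/BUDGET.md\n' > "$tmp/mkdocs.yml"

missing=0
for slug in budget; do  # extend with the other feature slugs
  up=$(printf '%s' "$slug" | tr '[:lower:]' '[:upper:]')
  # Each feature needs a docs page, an example script, and a nav entry.
  [ -f "$tmp/docs/modules/$up.md" ] || { echo "$slug: no docs page"; missing=1; }
  ls "$tmp/examples/${slug}"*.py >/dev/null 2>&1 || { echo "$slug: no example"; missing=1; }
  grep -qi "$slug" "$tmp/mkdocs.yml" || { echo "$slug: no nav entry"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all features covered"
rm -rf "$tmp"
```

Any `no docs page` / `no example` / `no nav entry` line becomes a row in the content-drift table.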
Sync the changelog and rebuild the docs, reporting any build warnings:

```bash
cp CHANGELOG.md docs/CHANGELOG.md && mkdocs build
```

Verify docs/CHANGELOG.md matches CHANGELOG.md:

```bash
diff CHANGELOG.md docs/CHANGELOG.md
```
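If the check should fail loudly (e.g. in CI) instead of relying on a human reading diff output, the exit status of `diff -q` can drive the report. A sketch with deliberately divergent stand-in files; the real check runs on `CHANGELOG.md` and `docs/CHANGELOG.md`:

```shell
#!/bin/sh
# Two deliberately different stand-in changelogs.
tmp=$(mktemp -d)
printf '## 0.17.3\n- fix X\n' > "$tmp/CHANGELOG.md"
printf '## 0.17.2\n- old entry\n' > "$tmp/docs_CHANGELOG.md"

# diff -q exits 0 when the files match, 1 when they differ.
if diff -q "$tmp/CHANGELOG.md" "$tmp/docs_CHANGELOG.md" >/dev/null; then
  status="in sync"
else
  status="DRIFT"
fi
echo "changelog: $status"
rm -rf "$tmp"
```

Here the files differ, so it prints `changelog: DRIFT`.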
Check .private/master-competitive-plan.md, .private/competitive-analysis.md, and .private/growth-plan.md for stale test counts, example counts, version references, and competitive scorecard accuracy.
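One way to surface stale numbers in these files is a regex scan for count-like phrases. The pattern below is an assumption about how the docs phrase counts ("900+ tests", "42 models") and will need tuning to the real wording; a stand-in file keeps it runnable:

```shell
#!/bin/sh
# Stand-in doc containing hardcoded counts.
tmp=$(mktemp -d)
printf 'We ship 42 models, 30 examples, and 900+ tests.\n' > "$tmp/plan.md"

# Flag any "N tests/models/examples" phrase for manual comparison
# against the live values gathered earlier.
hits=$(grep -nE '[0-9][0-9,]*\+? *(tests|models|examples)' "$tmp/plan.md")
echo "$hits"
rm -rf "$tmp"
```

Each hit is then compared by hand (or by script) against the live values; only mismatches go into the report.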
Present results in two tables:

Count mismatches:

| Location | Field | Found | Expected | Status |
|---|---|---|---|---|

Content drift:

| Location | Issue | Status |
|---|---|---|
Only show mismatches/issues. If everything matches, say "All counts are consistent" and "No content drift detected."