Retrieve documentation context from local ai-docs. Check here first when implementing features, debugging errors, or needing library information. Fall back to web search if topic not found locally.
This skill enables efficient retrieval of documentation context from the hierarchical documentation system.
| Variable | Default | Description |
|---|---|---|
| MAX_TOKENS | 2000 | Target token budget for context loading |
| LOAD_FULL_CONTEXT | false | Use full-context.md instead of targeted pages |
| LOCAL_FIRST | true | Check ai-docs before web search |
MANDATORY - Always check local documentation before web searches.
Use `_index.toon` files for navigation.

If you're about to load `full-context.md` for a simple question:
STOP -> Use the targeted retrieval patterns below -> Then proceed

Start at `ai-docs/libraries/_index.toon` to see available docs.

Related files:
- `_index.toon`
- `cookbook/direct-navigation.md`
- `cookbook/keyword-search.md`
- `cookbook/multi-library.md`
- `cookbook/full-context.md`

### Direct Navigation

When you know the library and topic:
1. @ai-docs/libraries/{library}/_index.toon
-> Read overview and common_tasks
2. Find matching task or section
-> Note the page path
3. @ai-docs/libraries/{library}/{section}/pages/{page}.toon
-> Get detailed summary with gotchas and patterns
Example: Need BAML retry configuration
1. @ai-docs/libraries/baml/_index.toon
-> common_tasks: "Handle errors gracefully" -> guide/error-handling
2. @ai-docs/libraries/baml/guide/pages/error-handling.toon
-> RetryPolicy syntax, gotchas about timeouts
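The path-building steps above are mechanical; a minimal sketch, assuming the directory layout shown in the pattern (`ai-docs/libraries/{library}/{section}/pages/{page}.toon`):

```python
from pathlib import Path

AI_DOCS = Path("ai-docs/libraries")  # assumed root of the local docs tree

def library_index(library: str) -> Path:
    """Path to a library's _index.toon (step 1 of the pattern)."""
    return AI_DOCS / library / "_index.toon"

def page_path(library: str, section: str, page: str) -> Path:
    """Path to a page summary (step 3 of the pattern)."""
    return AI_DOCS / library / section / "pages" / f"{page}.toon"
```

For the BAML example above, `page_path("baml", "guide", "error-handling")` resolves to `ai-docs/libraries/baml/guide/pages/error-handling.toon`.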
### Keyword Search

When you're not sure which library or page:
1. @ai-docs/libraries/_index.toon
-> Scan library descriptions and keywords
2. Match your need against keywords
-> Identify candidate libraries
3. For each candidate:
-> @ai-docs/libraries/{lib}/_index.toon
-> Check if relevant content exists
4. Load specific pages from best match
Example: Need "structured output parsing"
1. @ai-docs/libraries/_index.toon
-> BAML: "Structured LLM outputs with type safety" [match]
-> MCP: "Tool integration protocol" [no match]
2. @ai-docs/libraries/baml/_index.toon
-> Confirms: type system, parsing, validation
3. Load relevant BAML pages
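The keyword-matching step (step 2) can be sketched as a simple overlap score. The index shape here is an assumption about what `_index.toon` parses into; the real file format may differ:

```python
def match_libraries(need: str, index: dict[str, dict]) -> list[str]:
    """Rank candidate libraries by keyword overlap with the stated need.

    `index` mimics entries from ai-docs/libraries/_index.toon, assumed as
    {library: {"description": str, "keywords": [str, ...]}}.
    """
    need_words = set(need.lower().split())
    scored = []
    for lib, meta in index.items():
        haystack = {k.lower() for k in meta.get("keywords", [])}
        haystack |= set(meta.get("description", "").lower().split())
        score = len(need_words & haystack)
        if score:  # drop non-matches entirely, as in the MCP example
            scored.append((score, lib))
    return [lib for _, lib in sorted(scored, reverse=True)]
```

For the example above, `match_libraries("structured output parsing", ...)` scores BAML on "structured" and "parsing" and excludes MCP, whose description shares no words with the need.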
### Multi-Library

When a task involves multiple libraries:
1. List all libraries involved in task
2. For each library:
-> Load _index.toon
-> Identify relevant pages
-> Load page summaries
3. Consolidate into single context block
4. OR: Spawn docs-context-gatherer agent
### Full Context

When you need comprehensive understanding:
@ai-docs/libraries/{library}/full-context.md
Use sparingly - this loads everything (~5,000-15,000 tokens).
Appropriate for:
## Consolidation Format

When gathering context from multiple pages, consolidate as:
## Documentation Context
### {Library}: {Topic}
**Purpose**: {1-2 sentence purpose}
**Key Points**:
- {concept 1}
- {concept 2}
**Gotchas**:
- {warning 1}
- {warning 2}
**Pattern**:
```{language}
{minimal code example}
```
Sources: {list of page paths loaded}
Tokens: ~{estimate}
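Filling the template above is mechanical; a sketch (field names are illustrative, not a fixed API):

```python
def render_context(library, topic, purpose, points, gotchas,
                   pattern, language, sources, tokens):
    """Render one consolidated block in the template shown above."""
    fence = "`" * 3  # build the code fence without embedding a literal one
    parts = [
        f"### {library}: {topic}",
        f"**Purpose**: {purpose}",
        "**Key Points**:",
        *(f"- {p}" for p in points),
        "**Gotchas**:",
        *(f"- {g}" for g in gotchas),
        "**Pattern**:",
        fence + language,
        pattern,
        fence,
        f"Sources: {', '.join(sources)}",
        f"Tokens: ~{tokens}",
    ]
    return "\n".join(parts)
```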
## Budget Management
### Token Estimates by File Type
| File Type | Typical Size |
|-----------|--------------|
| `_index.toon` (category) | 100-150 tokens |
| `_index.toon` (library) | 150-250 tokens |
| `_index.toon` (section) | 100-200 tokens |
| `pages/*.toon` | 250-450 tokens |
| `full-context.md` | 5,000-15,000 tokens |
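For budgeting against the table above, a rough estimate is enough. A common heuristic (an approximation, not the tokenizer's exact count) is ~4 characters per token:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate via the ~4 characters/token heuristic."""
    return max(1, len(text) // 4)
```

Under this heuristic, an 1,800-character page summary estimates at ~450 tokens, the top of the `pages/*.toon` range in the table.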
### Budget Guidelines
| Task Type | Target Budget | Loading Strategy |
|-----------|---------------|------------------|
| Quick fix | 300-500 | 1 page summary |
| Single feature | 800-1,200 | 2-3 page summaries |
| Integration | 1,500-2,500 | Library index + 4-6 pages |
| Multi-library | 2,000-4,000 | Multiple library indexes + key pages |
| Full context | 5,000+ | full-context.md |
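The budget table can be encoded for a quick programmatic check before loading more files; the key names here are illustrative:

```python
# Target budget ranges (tokens) from the table above.
BUDGETS = {
    "quick_fix": (300, 500),
    "single_feature": (800, 1200),
    "integration": (1500, 2500),
    "multi_library": (2000, 4000),
}

def within_budget(task_type: str, loaded_tokens: int) -> bool:
    """True while the context loaded so far fits the task's target budget."""
    _low, high = BUDGETS[task_type]
    return loaded_tokens <= high
```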
### Efficiency Tips
1. **Index files are cheap navigation** - Read them freely
2. **Page summaries are high-signal** - Designed for this purpose
3. **Gotchas prevent expensive mistakes** - Always worth loading
4. **Code patterns are copy-paste ready** - High value per token
5. **full-context.md is last resort** - Use targeted loading first
## Common Retrieval Scenarios
### Scenario: Implementing a Feature
### Scenario: Debugging an Error
### Scenario: Spawning Sub-Agent
### Scenario: Uncertain Which Library
### Scenario: AI Tool Documentation
When you need information about AI tools (Claude Code, BAML, MCP, TOON, etc.):
Check local ai-docs FIRST:
- @ai-docs/libraries/claude-code/_index.toon
- @ai-docs/libraries/baml/_index.toon
- @ai-docs/libraries/toon/_index.toon
Navigate using the same patterns as any library:
-> Find section in _index.toon
-> Load relevant page summaries
-> Use full-context.md for comprehensive needs
Fall back to web search/fetch when local docs don't cover the topic (see **When to web search** below).
**Why local first:**
- Faster (no network round-trip)
- Curated context (TOON format optimized for LLMs)
- Gotchas pre-extracted
- Token-efficient vs. full web pages
**When to web search:**
- Topic not found after checking local index
- Need current/live information
- User explicitly asks for latest from web
## Anti-Patterns
### Don't: Load full-context.md for Simple Questions
**Bad**: Load 15K tokens to answer "what's the retry syntax?"
**Good**: Navigate to specific page, load ~400 tokens
### Don't: Skip Documentation
**Bad**: "I probably remember how this works..."
**Good**: Take 30 seconds to load relevant page
### Don't: Re-Navigate in Sub-Agents
**Bad**: Each sub-agent navigates from scratch
**Good**: Parent loads context, passes to sub-agents
### Don't: Load Everything "Just in Case"
**Bad**: Load all libraries mentioned anywhere
**Good**: Load specific pages for specific needs
## Integration with Protocol
This skill implements the retrieval portions of:
`.claude/ai-dev-kit/protocols/docs-management.md`
Always follow the protocol's decision flow:
1. Task Analysis -> Identify libraries
2. Documentation Check -> Verify docs exist
3. Context Loading -> Use this skill's patterns
4. Execute with Context -> Proceed with task
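The four-step flow above can be sketched end to end. The library-identification step here is a naive name-mention match, purely illustrative; the protocol itself doesn't prescribe how libraries are identified:

```python
def decision_flow(task: str, available: set[str]) -> dict:
    """Sketch of the protocol's decision flow for a task description."""
    # 1. Task Analysis: identify libraries (naive substring match, illustrative).
    mentioned = {lib for lib in available if lib in task.lower()}
    # 2. Documentation Check: `available` is the set with local docs, so
    #    every mentioned library is covered by construction here.
    # 3. Context Loading: plan index loads per library (pages follow from there).
    plan = [f"ai-docs/libraries/{lib}/_index.toon" for lib in sorted(mentioned)]
    # 4. Execute with Context: hand the plan to the task.
    return {"libraries": sorted(mentioned), "load": plan}
```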