Context engineering curates the smallest high-signal token set for LLM tasks. The goal: maximize reasoning quality while minimizing token usage.
IMPORTANT: Load references just-in-time, based on the task at hand:
| Topic | When to Use | Reference |
|---|---|---|
| Fundamentals | Understanding context anatomy, attention mechanics | context-fundamentals.md |
| Degradation | Debugging failures, lost-in-middle, poisoning | context-degradation.md |
| Optimization | Compaction, masking, caching, partitioning | context-optimization.md |
| Compression | Long sessions, summarization strategies | context-compression.md |
| Memory | Cross-session persistence, knowledge graphs | memory-systems.md |
| Multi-Agent | Coordination patterns, context isolation | multi-agent-patterns.md |
| Evaluation | Testing agents, LLM-as-Judge, metrics | evaluation.md |
| Tool Design | Tool consolidation, description engineering | tool-design.md |
| Pipelines | Project development, batch processing | project-development.md |
| Runtime Awareness | Usage limits, context window monitoring | runtime-awareness.md |
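The routing in the table above can be sketched as a simple keyword lookup. This is a hypothetical illustration: the keyword list is an assumption, only the reference file names come from the table.

```python
# Just-in-time reference loading: map task keywords to the reference
# files from the table above, and load only the matching files.
# The keywords are illustrative assumptions, not a defined API.
TOPIC_REFS = {
    "attention": "context-fundamentals.md",
    "lost-in-middle": "context-degradation.md",
    "poisoning": "context-degradation.md",
    "caching": "context-optimization.md",
    "summarization": "context-compression.md",
    "memory": "memory-systems.md",
    "multi-agent": "multi-agent-patterns.md",
    "evaluation": "evaluation.md",
    "tool": "tool-design.md",
    "batch": "project-development.md",
    "usage limits": "runtime-awareness.md",
}

def references_for(task: str) -> list[str]:
    """Return the reference files relevant to a task description."""
    task = task.lower()
    matched: list[str] = []
    for keyword, ref in TOPIC_REFS.items():
        if keyword in task and ref not in matched:
            matched.append(ref)
    return matched
```

Loading only the matched files keeps the context window small while still surfacing deep-dive material when the task calls for it.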
Use `tilth_search` for definition-first search (avoids grep token waste). Use `tilth_read` for auto-outlined file reading (it skips large files automatically); previously expanded definitions are collapsed to `[shown earlier]`.

The system automatically injects usage awareness via a PostToolUse hook:
```
<usage-awareness>
Claude Usage Limits: 5h=45%, 7d=32%
Context Window Usage: 67%
</usage-awareness>
```
Thresholds:
Data Sources:
- Usage limits: Anthropic usage API (`https://api.anthropic.com/api/oauth/usage`)
- Context window usage: session state file (`/tmp/ck-context-{session_id}.json`)

These capabilities are built into the agent workflow; no external scripts are needed.
Context optimized + token budget monitored + relevant references loaded just-in-time