Claw Compactor — 6-layer token compression skill for OpenClaw agents. Cuts workspace token spend by 50–97% using deterministic rule engines plus Engram: a real-time, LLM-driven Observational Memory system. Run it at session start for automatic savings reporting.
Claw Compactor reduces token usage across the full OpenClaw workspace using 6 compression layers:
| Layer | Name | Cost | Notes |
|---|---|---|---|
| 1 | Rule Engine | Free | Dedup, strip filler, merge sections |
| 2 | Dictionary Encoding | Free | Auto-codebook, $XX substitution |
| 3 | Observation Compression | Free | Session JSONL → structured summaries |
| 4 | RLE Patterns | Free | Path/IP/enum shorthand |
| 5 | Compressed Context Protocol | Free | Format abbreviations |
| 6 | Engram | LLM API | Real-time Observational Memory |
Skill location: skills/claw-compactor/
Entry point: scripts/mem_compress.py
Engram CLI: scripts/engram_cli.py
python3 skills/claw-compactor/scripts/mem_compress.py <workspace> auto
Automatically compresses all workspace files, tracks token counts between runs, and reports savings. Run this at the start of every session.
python3 scripts/mem_compress.py <workspace> full
Runs all five deterministic layers (1–5) in optimal order. Typical combined savings: 50%+.
python3 scripts/mem_compress.py <workspace> benchmark
# JSON output:
python3 scripts/mem_compress.py <workspace> benchmark --json
Dry-run report showing potential savings without writing any files.
# Layer 1: Rule-based compression
python3 scripts/mem_compress.py <workspace> compress
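Layer 1's transforms (dedup, strip filler, merge) can be pictured with a minimal sketch. This is illustrative only: the filler patterns below are assumptions, not the rule set actually shipped in `mem_compress.py`.

```python
import re

# Hypothetical filler phrases a rule engine might strip; the real
# rule set lives in mem_compress.py, not here.
FILLER = [r"\bbasically\b", r"\bjust\b", r"\bin order to\b"]

def rule_compress(text: str) -> str:
    """Drop exact duplicate lines and strip filler words (Layer 1 sketch)."""
    seen, out = set(), []
    for line in text.splitlines():
        key = line.strip()
        if key and key in seen:
            continue  # duplicate line: keep only the first occurrence
        seen.add(key)
        for pat in FILLER:
            line = re.sub(pat, "", line)
        out.append(re.sub(r"  +", " ", line).rstrip())
    return "\n".join(out)
```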
# Layer 2: Dictionary encoding
python3 scripts/mem_compress.py <workspace> dict
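The auto-codebook idea behind Layer 2's `$XX` substitution works roughly like this sketch. The thresholds and code format here are assumptions; the real heuristics are internal to the skill.

```python
from collections import Counter
import re

def build_codebook(text: str, min_count: int = 3, min_len: int = 12):
    """Assign short $XX codes to long, frequently repeated tokens (sketch)."""
    tokens = re.findall(r"\S+", text)
    counts = Counter(t for t in tokens if len(t) >= min_len)
    book = {}
    for i, (token, n) in enumerate(counts.most_common()):
        if n >= min_count:
            book[token] = f"${i:02X}"  # $00, $01, ... hex codes
    return book

def encode(text: str, book: dict) -> str:
    """Replace each codebook entry with its short code."""
    for token, code in book.items():
        text = text.replace(token, code)
    return text
```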
# Layer 3: Observation compression (session JSONL → summaries)
python3 scripts/mem_compress.py <workspace> observe
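Layer 3 collapses session JSONL logs into structured summaries. A minimal sketch of the idea, assuming each JSONL line carries `role` and `content` keys (the actual OpenClaw session schema may differ):

```python
import json

def summarize_session(jsonl_text: str) -> dict:
    """Collapse a session JSONL log into a structured summary (sketch).

    Assumes one JSON object per line with 'role' and 'content' keys.
    """
    events = [json.loads(line) for line in jsonl_text.splitlines() if line.strip()]
    return {
        "turns": len(events),
        "roles": sorted({e.get("role", "?") for e in events}),
        "first": events[0]["content"][:60] if events else "",
        "last": events[-1]["content"][:60] if events else "",
    }
```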
# Layer 4: RLE pattern encoding (runs inside `compress`)
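Layer 4's path/IP/enum shorthand can be sketched as run-length encoding over numbered sequences. The `{1..N}` shorthand notation below is an assumption for illustration:

```python
import re

def rle_paths(lines: list[str]) -> list[str]:
    """Collapse runs of consecutively numbered paths into range shorthand
    (sketch), e.g. logs/run1.txt ... logs/run3.txt -> logs/run{1..3}.txt."""
    out, i = [], 0
    while i < len(lines):
        m = re.fullmatch(r"(.*?)(\d+)(\D*)", lines[i])
        j = i
        if m:
            prefix, start, suffix = m.group(1), int(m.group(2)), m.group(3)
            # Extend the run while the next line continues the sequence.
            while (j + 1 < len(lines)
                   and lines[j + 1] == f"{prefix}{start + (j + 1 - i)}{suffix}"):
                j += 1
        if j > i:
            out.append(f"{prefix}{{{start}..{start + (j - i)}}}{suffix}")
        else:
            out.append(lines[i])
        i = j + 1
    return out
```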
# Layer 5: Compressed Context Protocol (tokenizer-friendly format abbreviations)
python3 scripts/mem_compress.py <workspace> optimize
# Tiered summaries (L0/L1/L2)
python3 scripts/mem_compress.py <workspace> tiers
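One way to picture tiered summaries, under the assumption that L0 is the full text, L1 a paragraph-level summary, and L2 a one-liner (the skill's actual tier definitions may differ):

```python
def tier_summaries(text: str) -> dict[str, str]:
    """Produce L0/L1/L2 tiers (sketch; the tier definitions are assumptions:
    L0 = full text, L1 = first paragraph, L2 = first line)."""
    paras = [p for p in text.split("\n\n") if p.strip()]
    first_para = paras[0] if paras else ""
    first_line = first_para.splitlines()[0] if first_para else ""
    return {"L0": text, "L1": first_para, "L2": first_line}
```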
# Cross-file deduplication
python3 scripts/mem_compress.py <workspace> dedup
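Cross-file deduplication boils down to keeping the first copy of a repeated section and replacing later copies with a pointer. A sketch (the pointer format and hashing choice here are assumptions, as is how `--auto-merge` applies them):

```python
import hashlib

def dedup_sections(files: dict[str, list[str]]) -> dict[str, list[str]]:
    """Keep the first copy of each repeated section across files; replace
    later copies with a short pointer (cross-file dedup sketch)."""
    seen: dict[str, str] = {}  # section hash -> file that owns the first copy
    result = {}
    for name, sections in files.items():
        kept = []
        for sec in sections:
            h = hashlib.sha256(sec.encode()).hexdigest()[:12]
            if h in seen:
                kept.append(f"[dup of {seen[h]}#{h}]")
            else:
                seen[h] = name
                kept.append(sec)
        result[name] = kept
    return result
```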
# Token count report
python3 scripts/mem_compress.py <workspace> estimate
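Token reports like the one `estimate` produces can be approximated with the common ~4 characters/token heuristic. That heuristic is an assumption here; the skill may count tokens differently:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate via the common ~4 chars/token heuristic
    (an assumption; the `estimate` command may count differently)."""
    return max(1, len(text) // 4) if text else 0

def savings_report(before: str, after: str) -> str:
    """Format a before/after savings line from two text snapshots."""
    b, a = estimate_tokens(before), estimate_tokens(after)
    pct = 100 * (b - a) / b if b else 0
    return f"{b} -> {a} tokens ({pct:.0f}% saved)"
```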
# Workspace health check
python3 scripts/mem_compress.py <workspace> audit
--json Machine-readable JSON output
--dry-run Preview without writing files
--since DATE Filter sessions by date (YYYY-MM-DD)
--auto-merge Auto-merge duplicates (dedup command)
Engram is the flagship layer. It operates as a live engine alongside conversations, automatically compressing messages into structured, priority-annotated knowledge.
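The shape of what Engram produces might look like the sketch below. Everything here is an assumption for illustration: the field names, priority labels, and the keyword stand-in for the LLM call are not Engram's actual implementation.

```python
from dataclasses import dataclass, field
import time

@dataclass
class Observation:
    """One priority-annotated memory entry (sketch; fields are assumptions)."""
    summary: str
    priority: str  # e.g. "high" | "normal"
    ts: float = field(default_factory=time.time)

def annotate(message: str) -> Observation:
    """Keyword stand-in for the LLM judgment Engram makes via its API:
    flag errors and decisions as high priority."""
    high = any(k in message.lower() for k in ("error", "decided", "must"))
    return Observation(summary=message[:80], priority="high" if high else "normal")
```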
Configure via engram.yaml (recommended) or environment variables:
# engram.yaml — place in claw-compactor root