Track and optimize LLM token spend: estimate costs per task, recommend model tiers, monitor cumulative usage, suggest batching strategies.
Count tokens in text input (stdin or file). Uses tiktoken-compatible estimation.
echo "Hello world" | python3 scripts/token-counter.py
python3 scripts/token-counter.py < myfile.txt
python3 scripts/token-counter.py --file myfile.txt
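When tiktoken itself is unavailable, a common fallback is the rough "about four characters per token" heuristic for English text. A minimal sketch of that estimation (the function name and the 4-chars-per-token ratio are illustrative assumptions, not the script's actual internals):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for typical English text.
    # A real tiktoken count will differ, especially for code or non-English input.
    return max(1, round(len(text) / 4))

print(estimate_tokens("Hello world"))  # → 3
```

This tends to overcount dense prose and undercount code or whitespace-heavy input, so treat it as a ballpark figure only.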
Map token counts to USD costs for various models.
python3 scripts/cost-estimator.py --input 1000 --output 500 --model claude-opus-4
python3 scripts/cost-estimator.py --input 1000 --output 500 # shows all models
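The core of the cost mapping is a lookup of per-million-token rates for input and output separately. A minimal sketch, assuming a hypothetical pricing table (the dollar figures below are placeholders for illustration; consult references/model-pricing.md for actual rates):

```python
# Illustrative per-million-token prices in USD -- NOT current rates.
PRICING = {
    "claude-opus-4": {"input": 15.00, "output": 75.00},
    "claude-sonnet-4": {"input": 3.00, "output": 15.00},
}

def estimate_cost(input_tokens: int, output_tokens: int, model: str) -> float:
    """Return estimated USD cost for one request against `model`."""
    p = PRICING[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

print(f"${estimate_cost(1000, 500, 'claude-opus-4'):.4f}")  # → $0.0525
```

Output tokens are typically priced several times higher than input tokens, which is why trimming verbose completions often saves more than trimming prompts.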
Analyze OpenClaw usage logs to find spending patterns.
bash scripts/usage-analyzer.sh [--days 7] [--log-dir ~/.openclaw/logs]
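The analysis boils down to aggregating token counts per model across log entries. A minimal sketch, assuming a hypothetical JSONL log format with `model`, `input_tokens`, and `output_tokens` fields per line (the actual OpenClaw log schema may differ; adapt the field names accordingly):

```python
import json
from collections import defaultdict

def summarize_usage(lines):
    # Aggregate per-model token totals from JSONL log lines.
    # Field names here are assumed, not the confirmed OpenClaw schema.
    totals = defaultdict(lambda: {"input": 0, "output": 0})
    for line in lines:
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed or non-JSON lines
        t = totals[rec.get("model", "unknown")]
        t["input"] += rec.get("input_tokens", 0)
        t["output"] += rec.get("output_tokens", 0)
    return dict(totals)
```

Grouping by model first makes it easy to spot tasks that could be downgraded to a cheaper tier without much quality loss.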
- references/model-pricing.md — Current pricing for major LLM providers
- references/token-heuristics.md — Rules of thumb for estimating tokens
- references/cost-reduction.md — Strategies to reduce token spend