LLM & AI
Caveman
Token-efficient communication mode that compresses conversational prose (~65-75% output-token savings) while passing code, commands, and compliance artifacts through at full fidelity. Use when the user asks for "caveman mode", "brief mode", or "less tokens", or when long sessions risk context pressure. Adapted from github.com/JuliusBrussee/caveman with sentinel-stack governance carve-outs.
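A minimal sketch of the carve-out idea, assuming the mode splits a message on fenced code blocks, compresses only the prose segments, and passes fences through verbatim. The names `compress_prose` and `cavemanize` and the filler-word list are hypothetical, not the skill's actual implementation:

```python
import re

# Hypothetical filler words to drop from prose (illustration only).
FILLER = re.compile(
    r"\b(just|really|basically|actually|in order to)\b\s*",
    re.IGNORECASE,
)

def compress_prose(text: str) -> str:
    """Drop filler words and collapse runs of spaces in prose segments."""
    text = FILLER.sub("", text)
    return re.sub(r"[ \t]{2,}", " ", text)

def cavemanize(message: str) -> str:
    """Compress prose; leave ``` fenced blocks untouched (the carve-out)."""
    # Capturing group keeps the fenced blocks in the split output.
    parts = re.split(r"(```.*?```)", message, flags=re.DOTALL)
    return "".join(
        part if part.startswith("```") else compress_prose(part)
        for part in parts
    )

if __name__ == "__main__":
    sample = (
        "You just really need to run this:\n"
        "```bash\npytest -q\n```\n"
        "Basically it takes a while."
    )
    # Prose is shortened; the bash block survives byte-for-byte.
    print(cavemanize(sample))
```

The sketch only illustrates the governance rule stated above: anything inside a code fence is never rewritten, so token savings come entirely from the surrounding prose.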