Explains whether LLM reasoning models reach decisions before or during chain-of-thought generation. Use when discussing reasoning-model interpretability, AI safety, chain-of-thought reliability, or the philosophical implications of LLM decision-making processes. Triggers on questions about "reasoning models decide first", "chain-of-thought rationalization", "LLM interpretability", or "reasoning timing".