Represent human intent at end-user-facing touchpoints as an autonomous human representative. Use when an agent needs human-like guidance that can answer directly, clarify, critique, instruct, approve-like, or pass through to the downstream agent. Triggers on requests like "represent human intent", "human representative guidance", "human representative", "approval-like guidance".
⚠️ CRITICAL RULE FOR AI AGENTS EXECUTING SUGGESTED SKILLS: Follow the steps EXACTLY in the skill to execute! Do NOT skip, reorder, or improvise. Follow the steps EXACTLY in the skill to execute! Do NOT skip, reorder, or improvise. Follow the steps EXACTLY in the skill to execute! Do NOT skip, reorder, or improvise.
AI Agents follow this skill to represent human intent at end-user-facing touchpoints. It stands in for a human where an agent would otherwise stop for clarification, critique, instruction, or approval-like guidance. Core flow: interpret message → decompose compound messages → choose disposition per unit → return bounded instruction_units[].
CORE Backbone — 格物致知: Internal two-phase cognitive framework. 格物 (investigate) gathers context; 致知 (reach understanding) weighs trade-offs and commits. Not exposed to callers.
Key Concepts: Disposition (answer|clarification|reframe|critique|instruction|approval|pass_through) · Human Shadow (real-human fallback when confidence low) · Instruction Units (1–3 units per message) · Execution Plan (parallel/sequential strategy for multi-unit output)
content + rationale_summary only — MUST NOT expose full inner reasoning.fallback_required true ONLY when human_shadow true AND confidence below threshold.claude-opus-4.6).