Run AI evaluation on quarterly performance questions. Gathers evidence from daily events, builds prompt with competency context, generates summary via LLM, saves to question. Use when user says "evaluate questions" or "AI performance summary".
Uses LLM to generate summaries for quarterly connection questions based on evidence.
| Input | Type | Default | Purpose |
|---|---|---|---|
| question_id | string | "" | Specific question (empty = all) |
- Data: `questions.json` — questions, custom_questions; `daily/*.json` for evidence lookup
- Memory: `memory_session_log("Evaluated {saved} quarterly questions with AI", "Quarter: ...")`
- Output: Summary: quarter, questions evaluated, total. Per-question: evidence items, status.
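The evidence-gathering and prompt-building steps above can be sketched roughly as follows. This is an illustrative assumption, not the tool's actual code: the helper names (`gather_evidence`, `build_prompt`), the `competency`/`tags` field names, and the daily-event file shape are all hypothetical.

```python
import glob
import json

def gather_evidence(question, daily_dir="daily"):
    # Scan daily/*.json for events tagged with the question's competency.
    # (File layout and "tags" field are assumed for illustration.)
    evidence = []
    for path in sorted(glob.glob(f"{daily_dir}/*.json")):
        with open(path) as f:
            day = json.load(f)
        for event in day.get("events", []):
            if question["competency"] in event.get("tags", []):
                evidence.append(event["text"])
    return evidence

def build_prompt(question, evidence):
    # Assemble the LLM prompt: competency context, the question,
    # then one bullet per evidence item (with a fallback when empty).
    lines = [
        f"Competency: {question['competency']}",
        f"Question: {question['text']}",
        "Evidence:",
    ]
    lines += [f"- {e}" for e in evidence] or ["- (no evidence found)"]
    lines.append("Write a concise quarterly performance summary.")
    return "\n".join(lines)
```

The returned prompt would then be sent to the LLM and the generated summary saved back onto the question record; those two steps depend on the host tool's LLM client and storage, so they are omitted here.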