Use when the PI or a human provides feedback, corrections, or direction on agent work
Process human feedback to make akari better. The human giving feedback is the PI — the authority who governs research direction, resource allocation, quality standards, and operational parameters. Their feedback is not a suggestion; it is an instruction.
Your job: understand what the PI wants, figure out what should change, make the change, and record the learning so it never needs to be said again.
If no feedback message is provided, stop immediately. Say: "No feedback provided. Usage: /feedback <what went wrong or should change>" and do nothing else.
Read the feedback message and classify it:
| Type | Signal | Example |
|---|---|---|
| Correction | "Don't do X", "X was wrong", "Stop doing X" | "Don't modify budget.yaml without approval" |
| Complaint | "X didn't work", "X is broken", "X keeps failing" | "Skills aren't being invoked from Slack" |
| Directive | "Always do X", "Start doing X", "X should work like Y" | "Always deploy after changing scheduler code" |
| Observation | "I noticed X", "X seems off", "Why does X happen?" | "The bot sometimes answers instead of delegating" |
| Approval | "Approve X", "Deny X", "Yes to X", "Go ahead with X" | "Approve the budget increase to 3000" |
| Resource | "Increase budget", "Spend less on X", "Reallocate" | "Increase sample-project budget to 5000 calls" |
| Strategy | "Pivot to X", "Drop project Y", "Start project Z" | "Pause sample-project, focus on akari infrastructure" |
| Knowledge | "FYI X", "We now have X", "Deadline moved to X" | "We just got access to GPT-6 API" |
| Calibration | "Quality is too low", "Be more rigorous", "Bar is wrong" | "Stop producing surface-level findings" |
| Tuning | "Bot is too verbose", "Sessions too long", "Use model X" | "Use a cheaper model for routine work cycles" |
| Schedule | "Run more often", "Pause sessions", "Add a job" | "Run work cycles every 3 hours instead of 6" |
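As a rough illustration, the signal column above can be turned into a keyword heuristic. This is a sketch, not existing scheduler code — the patterns and function name are illustrative, and judgment still decides the final type:

```typescript
// Heuristic pre-classification over the signal phrases in the table above.
// First matching pattern wins; real classification still requires judgment.
const SIGNALS: [RegExp, string][] = [
  [/\b(don'?t|stop doing|was wrong)\b/i, "correction"],
  [/\b(didn'?t work|is broken|keeps failing)\b/i, "complaint"],
  [/\b(always|should work like)\b/i, "directive"],
  [/\b(approve|deny|go ahead)\b/i, "approval"],
  [/\b(budget|reallocate|spend)\b/i, "resource"],
  [/\b(pivot|drop project)\b/i, "strategy"],
];

function suggestType(feedback: string): string {
  for (const [pattern, type] of SIGNALS) {
    if (pattern.test(feedback)) return type;
  }
  return "observation"; // default bucket; confirm via the restatement step below
}
```

A suggestion like this only narrows the options — the restatement to the PI is still the check that the classification is right.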
State the feedback type and a one-sentence restatement in your own words to confirm understanding.
Quantitative check: If the feedback contains a number + comparison operator (≥, ≤, >, <, "at least", "at most"), classify as quantitative. Example: "utilization should be ≥75%" is quantitative. If quantitative, a verification mechanism is MANDATORY (per ADR 0054).
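The quantitative check can be approximated mechanically. A minimal sketch, assuming only the rule as stated above (the regexes and function name are illustrative, not part of the scheduler):

```typescript
// Quantitative = contains a comparison operator AND a number.
const COMPARATOR = /(≥|≤|>=|<=|>|<|\bat least\b|\bat most\b)/i;
const NUMBER = /\d+(\.\d+)?%?/;

function isQuantitative(feedback: string): boolean {
  return COMPARATOR.test(feedback) && NUMBER.test(feedback);
}
```

Anything this flags then carries the mandatory verification requirement from ADR 0054 into the fix stage.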
The depth of investigation depends on the feedback type.
Full investigation (correction, complaint, observation): Trace the root cause.
- Read the relevant source: infra/scheduler/src/, .claude/skills/, decisions/, and CLAUDE.md. Read the actual code — do not guess.
- Check git log for recent changes to the relevant files. Check project README logs and experiment records for context.
- Search interaction metrics (~/.scheduler/metrics/interactions.jsonl), diagnosis files (projects/<project>/diagnosis/diagnosis-*.md), and postmortems (projects/<project>/postmortem/postmortem-*.md) for similar issues.

Light investigation (directive, approval, resource, strategy, knowledge, calibration, tuning, schedule): Verify feasibility and find the right files to change.
- Check decisions/ for constraints that might conflict.

Based on the feedback type, identify what should change:
| Fix type | When to use | Example |
|---|---|---|
| Code change | Behavior should be enforced deterministically | Add regex skill detection in processMessageInner |
| Convention/rule | Behavior should be followed by agents | Add deploy step to /develop skill |
| Prompt change | LLM behavior should shift | Add "FIRST: check for skills" to system prompt |
| Decision record | A policy needs to be established | "Budget changes require human approval" |
| Documentation | Knowledge needs to be captured | Add entry to project README log |
| Approval resolution | PI is deciding on a pending item | Resolve item in APPROVAL_QUEUE.md |
| Resource change | PI is adjusting budget or limits | Edit budget.yaml, add log entry |
| Project change | PI is reshaping the portfolio | Create/pause/complete project, edit README |
| Config change | PI is tuning operational parameters | Edit agent profiles, job schedules, .env |
Scope check: If the fix requires >5 files or an architectural change, use EnterPlanMode first.
Quantitative feedback requirement: If the feedback was classified as quantitative (contains number + comparison operator), the fix MUST include a verification mechanism per ADR 0054:
- A measurement of the quantity (e.g., surfaced via fleet-status.ts).
- An automated check against the stated bound (e.g., in health-watchdog.ts).
- If you cannot implement verification (e.g., no code path to measure), document why and add a task to create the measurement infrastructure.
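What such a verification pair could look like, as a sketch — the interface, function, and threshold source are hypothetical; the real hooks would live in files like fleet-status.ts and health-watchdog.ts:

```typescript
// Hypothetical threshold check: compare a measured quantity against the
// bound the feedback stated, and return a violation message if it fails.
interface Threshold {
  metric: string;        // e.g. "utilization"
  op: ">=" | "<=";
  bound: number;         // e.g. 0.75 for "≥75%"
}

function checkThreshold(value: number, t: Threshold): string | null {
  const ok = t.op === ">=" ? value >= t.bound : value <= t.bound;
  return ok ? null : `${t.metric} is ${value}, violates ${t.op} ${t.bound}`;
}
```

A watchdog job would run a check like this each cycle and write any violation somewhere visible (an alert, an APPROVAL_QUEUE.md entry, a log), so the quantitative feedback is verified continuously rather than once.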
Apply the fix. Follow the appropriate workflow for each type:
- Code change: follow the /develop skill's Iron Law. Run cd infra/scheduler && npm test and npx tsc --noEmit. Deploy with git commit && git push && npm run build && curl -s -X POST http://localhost:8420/api/restart (graceful drain — ADR 0018).
- Decision record: write decisions/NNNN-title.md and add the follow-up tasks to TASKS.md (ADR task bridge — see CLAUDE.md Decisions section).
- Project change: edit the project README and TASKS.md. Never edit Mission or Done when — those are fixed at project creation and cannot be changed. Write the conflict to APPROVAL_QUEUE.md instead.
- Ensure [blocked-by] tags reference only conditions requiring external action, not implementation steps the agent can perform.

The PI is the authority who resolves APPROVAL_QUEUE.md items. When they give approval feedback:
- Open APPROVAL_QUEUE.md and find the matching pending item.
- Move the item from ## Pending to ## Resolved with the PI's decision (approved/denied/modified) and any notes.
- Update the affected TASKS.md files. Change [approval-needed] to [approved: YYYY-MM-DD] using the approval date. This prevents tasks from being skipped by orient due to stale tags.

The PI sets resource limits. When they direct a budget change:
- Read budget.yaml and ledger.yaml to understand current state.
- Edit budget.yaml as directed (change limits, add resource types, adjust deadlines).

The PI directs research strategy. When they reshape the project portfolio:
Pause a project:
- Set Status: paused in the project README.
- Disable the project's jobs in .scheduler/jobs.json (set enabled: false).
- Deploy: npm run build && curl -s -X POST http://localhost:8420/api/restart (graceful drain — ADR 0018).

Resume/activate a project:
- Set Status: active in the project README.

Start a new project:
- Create projects/<name>/README.md following the project README schema in CLAUDE.md (Status, Mission, Done when, Context, Log, Open questions) and projects/<name>/TASKS.md for tasks.
- Update budget.yaml if the PI specifies resource limits.
- Add a log entry to projects/akari/README.md noting the new project.

Complete/archive a project:
- Set Status: completed in the project README.

New external facts that change what's possible or urgent:
- Record the fact in the appropriate knowledge file (existing-data.md, datasets.md, or a new file if none fits).
- Add tasks to the relevant TASKS.md. New data or capabilities relevant to a project always produce at least one task (e.g., "Evaluate new X data against existing method," "Incorporate Y into training pipeline"). If you believe no task is needed, state the justification explicitly in the feedback record's Learning section.
- Update any other affected TASKS.md files.
- Update the budget.yaml deadline field if applicable.

The PI is raising or changing the bar:
- Check decisions/ for established policies before changing the bar.
- Changes that conflict with an established policy go to APPROVAL_QUEUE.md instead. Convention clarifications, gotcha additions, and skill improvements may be applied directly.

The PI controls operational parameters:
Model changes:
- Edit infra/scheduler/src/agent.ts — AGENT_PROFILES defines model, maxTurns, maxDurationMs for each agent type (workSession, chat, autofix, deepWork).
- For Slack chat, set the SLACK_CHAT_MODEL env var in infra/scheduler/.env.

Voice/style changes:
- Edit buildChatPrompt() in infra/scheduler/src/chat.ts.

Turn/duration limits:
- Edit AGENT_PROFILES in agent.ts.

The PI controls when and how often agents run:
- Read .scheduler/jobs.json to see current jobs (id, name, schedule, enabled).
- To change frequency: edit the schedule.expr (cron) or schedule.everyMs (interval) field.
- To pause or resume a job: set enabled: true or false.
- The job schema lives in infra/scheduler/src/types.ts (id, name, schedule, payload, enabled, state).
- To retire a job, set enabled: false (prefer disabling over deleting for audit trail).
- Update projects/akari/README.md with the schedule change.
- Deploy: curl -s -X POST http://localhost:8420/api/restart (graceful drain — ADR 0018; the scheduler re-reads jobs.json on startup).
- Run pixi run validate if touching experiment records.

MANDATORY. Every feedback cycle must produce a persistent record. This is the knowledge output — it ensures the same feedback never needs to be given twice.
Create or update a file at: projects/akari/feedback/feedback-<slug>.md
Use this template:
---