Use when recording batch kernel optimization learnings to AGENT.md — iterates over N variant results, extracts failure patterns and successful optimizations, posts round summary to GitHub Issue
After each batch evolution round, iterate over all variant results, extract learnings, and record them to AGENT.md. Post a round summary to the GitHub Issue.
Invoked by pallas-evolve:start after pallas-evolve:analyze, or standalone. Expects the batch round directory to contain iteration_{N}/variants/*/eval_result.json for each variant, along with iteration_{N}/batch_analysis.md and iteration_{N}/selection.md.
Read:
- iteration_{N}/variants/*/eval_result.json — evaluation results for ALL variants in this round
- iteration_{N}/batch_analysis.md — comparative analysis across variants
- iteration_{N}/selection.md — lineage selection decisions (promotions, prunings)
- iteration_{N}/strategy.md
- AGENT.md at the repo root — existing learnings

Iterate over each variant's result. Not every variant produces a learning. Only record when:
Failure worth recording (new pattern not already in AGENT.md):
Success worth recording (meaningful improvement):
Comparative learnings (patterns across variants or rounds):
Skip recording if:
Deduplication within the round: Multiple variants may hit the same failure or discover the same optimization. Only record each unique pattern once, noting which variants exhibited it.
AGENT.md lives at the repo root (/AGENT.md). If it doesn't exist, create it with:
```markdown
# Pallas Kernel Optimization Agent Knowledge

## Failure Patterns

## Successful Optimizations
```
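The create-if-missing step can be written as a short guard, so an existing knowledge base is never clobbered:

```shell
# Create the knowledge-base scaffold only when AGENT.md is absent at the
# repo root; an existing file is left untouched.
if [ ! -f AGENT.md ]; then
  cat > AGENT.md <<'EOF'
# Pallas Kernel Optimization Agent Knowledge

## Failure Patterns

## Successful Optimizations
EOF
fi
```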
For failures, find the next available [Fxxx] number and append under ## Failure Patterns:
```markdown
### [F{NNN}] {Short description}
- **Symptom**: {What the error looks like — include key error message text}
- **Root cause**: {Why it happens — the underlying Pallas/TPU constraint}
- **Fix**: {How to avoid it in future kernels}
- **First seen**: {YYYY-MM-DD}, {kernel_name} optimization
```
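Finding the next available number can be sketched as a grep over existing ids. The fixture file stands in for the real AGENT.md, and the sketch assumes zero-padded 3-digit ids as in the template above:

```shell
# Fixture entries for demonstration; the real input is AGENT.md.
fixture=agent_fixture.md
printf '### [F001] Example\n### [F002] Example\n' > "$fixture"

# Highest existing [FNNN] id, as a bare number.
last=$(grep -o '\[F[0-9][0-9][0-9]\]' "$fixture" | tr -d '[]F' | sort -n | tail -1)
# 10# forces base-10 so zero-padded ids like 009 are not parsed as octal.
# If no entry exists yet, ${last:-0} starts the sequence at F001.
next=$(printf 'F%03d' $(( 10#${last:-0} + 1 )))
echo "$next"
```

The same pattern with `\[S...\]` finds the next [Sxxx] id for successful optimizations.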
For successful optimizations, find the next [Sxxx] number and append under ## Successful Optimizations:
```markdown
### [S{NNN}] {Short description of the technique}
- **Optimization**: {What was changed in the kernel code}
- **Impact**: {before_speedup}x -> {after_speedup}x on {shape}
- **Why it works**: {Root cause analysis — why this specific change improved performance}
- **Applicable when**: {Conditions where this technique should be tried again}
- **First seen**: {YYYY-MM-DD}, {kernel_name} optimization
```
Deduplication: Before adding, read all existing entries. If a similar pattern exists:
Batch all updates into a single AGENT.md write — do not make multiple separate edits.
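Batching can be sketched by accumulating all new entries in a variable and appending them with one write. The scaffold line and entry titles below are fixture content for illustration:

```shell
# Fixture knowledge base; the real target is the existing AGENT.md.
printf '# Pallas Kernel Optimization Agent Knowledge\n' > AGENT.md

# Accumulate every new entry first...
new_entries=$(printf '%s\n' \
  '### [F012] Example failure pattern' \
  '### [S007] Example successful optimization')

# ...then perform a single append instead of one edit per entry.
printf '%s\n' "$new_entries" >> AGENT.md
```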
Post a batch round summary comment:
```shell
gh issue comment {issue_number} --body "$(cat <<'EOF'
### Round {N} Summary

| Variant | Status | Speedup | Direction | Notable |
|---------|--------|---------|-----------|---------|
| {variant_name} | {SUCCESS/COMPILE_ERROR/INCORRECT} | {speedup}x | {direction} | {brief note} |
| ... | ... | ... | ... | ... |

**Active Lineages:** {lineage_id} ({speedup}x, {direction}), ...
**Pruned this round:** {pruned_variant_names or "none"}
**New learnings:** {[F/S NNN] entries or "none"}
EOF
)"
```
If AGENT.md was modified, create a single commit for all changes from this round:
```shell
git add AGENT.md
git commit -m "docs(agent): record learnings from {kernel_name} round {N}"
```
All learnings have been persisted:
- AGENT.md — updated failure patterns and successful optimizations (committed)

Do NOT compact here — this skill is typically invoked within the pallas-evolve:start loop (Phase 4), and the subsequent Phase 5 (COMPACT) handles context compression with proper state verification. Compacting mid-loop would lose orchestration context.
If invoked standalone (outside the start loop), invoke /compact after this skill completes.