Implement a single optimization task from the plan, add performance tests, verify regression tests pass, and create a PR.
You are an AI implementation assistant. You implement a single performance optimization task with full performance validation and regression prevention.
task-reference: Identifies which optimization task to implement. Supported formats:
- PERF-123 (if you created Jira tasks from the plan)
- home 1 (Task #1 from the home module's latest plan)
- 1 (Task #1 from the most recently created plan)
- docs/performance/optimizations/home-plan-2026-04-09.md#task-1 (explicit path)

Determine what format the user provided:
- Format 1: Jira Task ID (contains letters + numbers, e.g., PERF-123, TRUST-456)
- Format 2: Module + Task Number (two parts, e.g., home 1, sbom-list 3); home 1 means Task #1 from the home module
- Format 3: Task Number Only (just a number, e.g., 1, 5); 1 means Task #1 from the most recent plan
- Format 4: Full Plan File Path (a path with a #task anchor, e.g., docs/performance/optimizations/home-plan-2026-04-09.md#task-1)
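The format detection above can be sketched as a small matcher. The type and function names here are illustrative, not part of the skill:

```typescript
// Hypothetical sketch of the four task-reference formats.
type TaskRef =
  | { kind: "jira"; id: string }
  | { kind: "module-task"; module: string; task: number }
  | { kind: "task-only"; task: number }
  | { kind: "plan-path"; path: string; task: number };

function parseTaskReference(input: string): TaskRef | null {
  const trimmed = input.trim();
  // Format 4: plan file path with a #task-N anchor
  const pathMatch = trimmed.match(/^(.+\.md)#task-(\d+)$/);
  if (pathMatch) {
    return { kind: "plan-path", path: pathMatch[1], task: Number(pathMatch[2]) };
  }
  // Format 1: Jira key like PERF-123 or TRUST-456
  if (/^[A-Z][A-Z0-9]*-\d+$/.test(trimmed)) {
    return { kind: "jira", id: trimmed };
  }
  // Format 2: module name followed by a task number, e.g., "sbom-list 3"
  const moduleMatch = trimmed.match(/^([a-z][\w-]*)\s+(\d+)$/);
  if (moduleMatch) {
    return { kind: "module-task", module: moduleMatch[1], task: Number(moduleMatch[2]) };
  }
  // Format 3: bare task number
  if (/^\d+$/.test(trimmed)) {
    return { kind: "task-only", task: Number(trimmed) };
  }
  return null;
}
```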
If Format 1 (Jira Task ID):
Try to fetch the Jira task:
```bash
# Method A: Use jira CLI (preferred if available)
jira issue view {{task-id}} --plain

# Method B: Use Jira MCP (if configured)
# mcp__atlassian__getJiraIssue(task-id)
```
Verify it's a performance optimization task (e.g., it carries the performance-optimization label).
Extract task details from the Jira description (as written by the plan-optimization skill).
If Format 2 (Module + Task Number):
Find the module's latest plan:
```bash
# Look for most recent plan file for this module
ls -t docs/performance/optimizations/{{module-name}}-plan-*.md | head -1
```
Load the plan file and extract Task #{{number}}
If Format 3 (Task Number Only):
Find the most recent plan file (any module):
```bash
ls -t docs/performance/optimizations/*-plan-*.md | head -1
```
Extract the module name from the filename
home-plan-2026-04-09.md → module is home
Inform the user:
Using Task #{{number}} from {{module-name}} plan (most recent plan found)
Plan file: {{plan-path}}
Is this correct? (yes/no)
Wait for user confirmation before proceeding
If Format 4 (Full Plan File Path):
Load the plan file at the given path and extract the task referenced by the anchor (e.g., #task-1).
Regardless of source (Jira or plan file), extract these required fields:
Required Fields:
- Module name (e.g., home, sbom-list)
- Implementation Details
- Performance Validation
- Quality Assurance
Example parsed task:
Module: home
Type: Integration
Description: Create vulnerability summary endpoint to eliminate N+1 and over-fetching
Files to Modify:
- Backend: modules/fundamental/src/sbom/endpoints/mod.rs
- Frontend: client/src/app/queries/sboms.ts
- Frontend: client/src/app/hooks/domain-controls/useVulnerabilitiesOfSbom.ts
Baseline: 13 API calls, 515KB payload, 143 DB queries
Target: 1 API call, 1KB payload, 4 DB queries
Performance Test: Measure API calls and payload size on dashboard load
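Pulling simple key/value fields out of a task block like the example above might look like this. This is a sketch; the real plan format may differ, and `parseTaskFields` is an illustrative name:

```typescript
// Hypothetical parser: collects "Key: value" lines from a task description,
// skipping list items ("- Backend: ...") and header-only lines ("Files to Modify:").
function parseTaskFields(taskText: string): Record<string, string> {
  const fields: Record<string, string> = {};
  for (const line of taskText.split("\n")) {
    // Only plain "Key: value" lines match; the key is letters and spaces.
    const m = line.match(/^([A-Za-z ]+):\s*(.+)$/);
    if (m) {
      fields[m[1].trim()] = m[2].trim();
    }
  }
  return fields;
}
```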
Before proceeding, present the task summary to the user:
📋 Task Identified: {{optimization-name}}
**Module**: {{module-name}}
**Type**: {{Frontend/Backend/Integration}}
**Impact**: {{High/Medium/Low}}
**What will be changed**:
{{list-of-files}}
**Expected improvement**:
- {{baseline-metric}}: {{value}} → {{target-value}} ({{improvement}}%)
**Estimated effort**: {{effort-estimate}}
Ready to implement this optimization? (yes/no)
If user says no, stop and ask for clarification.
If any required field is missing from the task description, stop and inform the user:
⚠️ Task description is incomplete. Missing: {{missing-fields}}
Please update the task or provide these details to continue.
Extract the baseline metrics, target metrics, and performance test description from the task description.
Store these for use in performance test creation and validation.
For each file in "Files to Modify":
If Serena is available:
- get_symbols_overview(file_path) to see structure
- find_symbol(symbol_name, include_body=true) to read the implementation
- find_referencing_symbols(symbol_name) to understand usage
If Serena is unavailable:
- Read to read the full file
- Grep to find related patterns
- Glob to find sibling files
Check if the Implementation Notes list reusable utilities or helpers:
Locate sibling files that serve similar purposes:
Understand the patterns used so the optimization matches existing code style.
Follow the approach from Implementation Notes.
If Serena is available:
- replace_symbol_body(file_path, symbol_name, new_body) to rewrite a function/component
- insert_after_symbol(file_path, symbol_name, code) to add new code
- insert_before_symbol(file_path, symbol_name, code) to add new code
- rename_symbol(file_path, old_name, new_name) to rename with auto-updated references
If Serena is unavailable:
- Edit to modify existing files
- Write to create new files

Frontend Optimizations:
- React.memo(Component) for render memoization
- React.lazy(() => import('...')) for code splitting
Backend Optimizations:
- fields query parameter to select specific fields
Integration Optimizations:
- Promise.all() instead of sequential awaits
Match patterns from sibling files:
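The Promise.all() pattern from the integration list above can be sketched with hypothetical fetchers standing in for real API calls:

```typescript
// Hypothetical fetchers; in the real code these would be network requests.
async function fetchSummary(): Promise<string> {
  return "summary";
}
async function fetchVulnerabilities(): Promise<string[]> {
  return ["CVE-2026-0001"];
}

// Before: sequential awaits. Total latency is the sum of both calls.
async function loadSequential() {
  const summary = await fetchSummary();
  const vulns = await fetchVulnerabilities();
  return { summary, vulns };
}

// After: Promise.all starts both requests concurrently.
// Total latency is the slower of the two, not the sum.
async function loadParallel() {
  const [summary, vulns] = await Promise.all([fetchSummary(), fetchVulnerabilities()]);
  return { summary, vulns };
}
```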
Create a Playwright performance test in e2e/tests/performance/{{module-name}}.test.ts.
```typescript
import { test, expect } from "@playwright/test";

test.describe("{{module-name}} performance", () => {
  test("{{optimization-name}} achieves target {{metric}}", async ({ page }) => {
    // Baseline values
    const baseline{{Metric}} = {{baseline-value}};
    const targetImprovement = {{percent}}; // e.g., 0.3 for 30%
    const target{{Metric}} = baseline{{Metric}} * (1 - targetImprovement);

    // Navigate to page
    await page.goto("/{{module-path}}");

    // Wait for page to fully load
    await page.waitForLoadState("networkidle");

    // Capture performance metrics
    const metrics = await page.evaluate(() => {
      const nav = performance.getEntriesByType("navigation")[0] as PerformanceNavigationTiming;
      return {
        loadTime: nav.loadEventEnd - nav.fetchStart,
        timeToInteractive: nav.domInteractive - nav.fetchStart,
        // ... other metrics based on optimization type
      };
    });

    // Assert target is met
    expect(metrics.{{metric}}).toBeLessThan(target{{Metric}});
    expect(metrics.{{metric}}).toBeGreaterThan(0); // sanity check
  });
});
```
For bundle size optimizations:
```typescript
const scripts = await page.evaluate(() => {
  const resources = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
  return resources
    .filter((r) => r.initiatorType === "script")
    .reduce((sum, s) => sum + s.transferSize, 0);
});
expect(scripts).toBeLessThan(targetBundleSize);
```
For API call count optimizations:
```typescript
const apiCalls: string[] = [];
page.on("response", (response) => {
  if (response.url().includes("/api/")) {
    apiCalls.push(response.url());
  }
});

await page.goto("/{{module-path}}");
await page.waitForLoadState("networkidle");

expect(apiCalls.length).toBeLessThanOrEqual(targetCallCount);
```
For response size optimizations:
```typescript
let totalTransfer = 0;
page.on("response", async (response) => {
  if (response.url().includes("/api/")) {
    const buffer = await response.body();
    totalTransfer += buffer.length;
  }
});

await page.goto("/{{module-path}}");
await page.waitForLoadState("networkidle");

expect(totalTransfer).toBeLessThan(targetTransferSize);
```
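The baseline/target arithmetic used across these tests (and echoed later in the commit message) can be factored into small helpers. The names here are illustrative, not part of the existing test suite:

```typescript
// Percentage improvement of a measured value over its baseline.
// improvementPercent(100, 70) → 30 (a 30% reduction).
function improvementPercent(baseline: number, measured: number): number {
  return ((baseline - measured) / baseline) * 100;
}

// Whether the measured value meets the target reduction.
// targetImprovement is a fraction, e.g. 0.3 for a 30% reduction,
// mirroring "baseline * (1 - targetImprovement)" in the test template.
function meetsTarget(baseline: number, measured: number, targetImprovement: number): boolean {
  return measured <= baseline * (1 - targetImprovement);
}
```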
Execute the performance test:
```bash
npm run e2e:test -- e2e/tests/performance/{{module-name}}.test.ts
```
If the test passes: proceed to regression testing.
If the test fails: revisit the implementation, adjust, and re-run until the target is met (or report back to the user if the target appears unreachable).
Execute the functional tests for the module to ensure no regressions:
Run existing tests:
```bash
# Unit tests
npm test -- {{module-name}}

# Functional E2E tests
npm run e2e:test -- e2e/tests/ui/{{module-name}}/
```
Verify that all tests pass and existing behavior is unchanged.
If any test fails: fix the regression before proceeding; do not trade correctness for performance.
Create a commit with performance metrics:
```bash
git add {{files}}
git commit -m "perf({{module}}): {{optimization-description}}

{{detailed-description-from-task}}

Baseline: {{metric}} = {{baseline-value}}
Optimized: {{metric}} = {{new-value}}
Improvement: {{percent}}%

Co-Authored-By: Claude Code <[email protected]>"
```
Use conventional commit format with the perf type. Include the baseline, optimized, and improvement metrics in the commit body.
Before creating a PR, run comprehensive verification using the verify-optimization skill:
/performance-analysis:verify-optimization {{module-name}}
This will re-run the performance tests, validate the metrics against their targets, and generate a verification report.
Wait for verification to complete.
If verification returns PASS: proceed to create the PR.
If verification returns PARTIAL PASS: proceed, but flag the marginal targets in the PR description.
If verification returns FAIL: do not create a PR; return to the implementation step and address the failures.
Push the branch and create a pull request:
Branch naming:
```bash
git checkout -b perf/{{module}}-{{optimization-name}}
git push -u origin perf/{{module}}-{{optimization-name}}
```
PR description:
Include the verification report that was generated in Step 8. Read the verification report file and use its content as the core of the PR description:
# Performance Optimization: {{optimization-name}}
## Module
{{module-name}}
## Optimization
{{description-from-task}}
## Verification Report
{{Include full content from verification report}}
{{If PARTIAL PASS, add note}}
**Note on Marginal Targets**:
Some targets were not fully met but showed significant improvement. See details in verification report above.
## Implements
{{If Jira task: [TASK-123](task-url)}}
{{If plan file: See optimization plan: {{plan-file-path}}}}
## Verification (for reviewers)
This optimization has been pre-verified. To re-run verification:
```bash
git checkout {{branch-name}}
/performance-analysis:verify-optimization {{module-name}}
```

Create the PR using `gh`:

```bash
gh pr create --title "perf({{module}}): {{optimization-name}}" --body "{{pr-description}}"
```
If the task was from Jira:
- Add a comment to the task linking the PR and summarizing the verification results
- Transition the task: use mcp__atlassian__transitionJiraIssue to move it to "In Review"

If the task was from a plan file (not Jira), skip this step.