Fire when the user wants to review a Jira ticket against the frontend codebase, check whether a Next.js/React feature is already implemented, identify implementation gaps, assess ticket readiness, or validate frontend QA/test coverage for a ticket. Invoke with a Jira ticket ID, optionally followed by a Tambora suite name and `--qa` for QA mode. Example invocations:

- `review-ticket-frontend-nextjs MPP-221`
- `review-ticket-frontend-nextjs MPP-221 "MPP-150 Memories list view"`
- `review-ticket-frontend-nextjs MPP-221 --qa`
Parse $ARGUMENTS as follows:

- Ticket ID (required) = the first argument (e.g. `MPP-221`)
- `--qa` flag (optional) = if present anywhere in the arguments, enable QA mode (see Step 8)
- Suite name (optional) = the remaining arguments, taken as the Tambora suite name (e.g. `MPP-150 Memories list view`)

You are performing a ticket readiness analysis for the Jira ticket ID extracted above.
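The parsing above can be sketched in POSIX shell. This is a minimal illustration, not the command runtime's actual mechanism; the sample `ARGUMENTS` string and variable names are invented for the demo:

```shell
#!/bin/sh
# Illustrative parse of $ARGUMENTS: first token is the ticket ID,
# a literal --qa anywhere enables QA mode, and the remaining
# tokens (minus --qa) form the optional Tambora suite name.
ARGUMENTS='MPP-221 MPP-150 Memories list view --qa'

set -- $ARGUMENTS            # split on whitespace (unquoted on purpose)
TICKET_ID=$1; shift

QA_MODE=false
SUITE_NAME=''
for token in "$@"; do
  if [ "$token" = "--qa" ]; then
    QA_MODE=true
  else
    # Append token, inserting a space only if SUITE_NAME is non-empty.
    SUITE_NAME="${SUITE_NAME:+$SUITE_NAME }$token"
  fi
done

echo "ticket=$TICKET_ID qa=$QA_MODE suite=$SUITE_NAME"
```

Note that the suite name is everything that is not the ticket ID or the flag, so quoting the suite name at invocation time is optional under this scheme.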
The target codebase is the MyParkPlanner frontend: Next.js 16, React 19, TypeScript 5, Tailwind CSS 4, Auth.js 5, Zod 4, Vitest.
Mode detection: check for `--qa` in the arguments first.

- Normal mode (no `--qa`): run Steps 1–7.
- QA mode (`--qa` present): run Steps 1, 2a, and 8. Skip Steps 2b–7 (deep audit, gap analysis, ticket quality check, and other normal-mode-only review steps) — those are for the frontend developer only.
Run `jira issue view <ticket-id> --plain` to get the full ticket details. Extract the summary, description, acceptance criteria, and current status.
If the ticket has subtasks, also fetch each subtask with `jira issue view <subtask-id> --plain`.
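The fetch loop has a simple shape: parent first, then each subtask. The sketch below builds the command list rather than executing it, so it runs without the `jira` CLI; the subtask IDs are hypothetical stand-ins for whatever the parent view returns:

```shell
#!/bin/sh
TICKET_ID='MPP-221'
SUBTASKS='MPP-222 MPP-223'   # hypothetical IDs parsed from the parent view

# Build the command list: parent ticket first, then each subtask.
CMDS="jira issue view $TICKET_ID --plain"
for sub in $SUBTASKS; do
  CMDS="$CMDS
jira issue view $sub --plain"
done

printf '%s\n' "$CMDS"
```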
Do a focused, quick scan to determine whether the feature is actually built and testable. Check:

- Does the relevant route/page exist under `src/app/`?
- Does `src/lib/api/` have a function that calls the relevant backend endpoint?

Use Grep and Glob directly — this should be fast. The goal is a simple yes/no per acceptance criterion: "is the frontend ready for this to be tested?"
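Both checks can be sketched as a self-contained shell demo. The fixture tree and the `memories` feature name are invented purely so the commands have something to hit; in a real scan the paths point at the actual repo:

```shell
#!/bin/sh
# Build a throwaway fixture tree so the checks below have a target.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/src/app/memories" "$DEMO/src/lib/api"
printf 'export const getMemories = () => fetch("/api/memories");\n' \
  > "$DEMO/src/lib/api/memories.ts"

FEATURE='memories'

# Check 1: does a route segment for the feature exist?
ROUTE_HITS=$(find "$DEMO/src/app" -type d -name "*${FEATURE}*")

# Check 2: does the API client reference the backend endpoint?
API_HITS=$(grep -rln "/api/${FEATURE}" "$DEMO/src/lib/api")

echo "route: ${ROUTE_HITS:-none}"
echo "api client: ${API_HITS:-none}"
```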
Present a short Frontend Readiness table in the output:
| Acceptance Criterion | Frontend Ready? | Notes |
|---|---|---|
| ... | Yes / No / Partial | ... |
If any criterion is No or Partial, flag it clearly so QA knows not to test it yet.
Based on the ticket requirements, search the codebase thoroughly for any existing implementations. Use the Agent tool with `subagent_type=Explore` to search broadly. Check:

- `src/app/` — route segments, `page.tsx`, `layout.tsx`, `loading.tsx`, `error.tsx`
- `src/app/api/` — any Next.js API routes related to the feature
- `src/components/` and co-located `_components/` folders — existing UI elements, props, variants
- `__stories__/` folders — whether components are documented
- `src/lib/api/` — fetcher functions, types, mocks
- `src/lib/hooks/` — state management, data fetching hooks
- `src/lib/utils/`, `src/lib/constants/`
- `src/types/`, `*.types.ts` files
- `middleware.ts`, `src/lib/auth/`, `src/components/auth/`
- `__tests__/` folders, `*.test.tsx` files — Vitest unit and integration test coverage

For each requirement or acceptance criterion in the ticket, determine whether it is fully implemented, partially implemented, or missing, and note the supporting file paths and line numbers.
Identify gaps between what the ticket requires and what exists:
Review the ticket description itself for quality issues:
Normal mode — full report:
## Ticket: [TICKET-ID] — [Title]
### Summary
Brief description of what the ticket asks for.
### Status
Current ticket status and subtask statuses.
### Frontend Implementation Audit
#### What Already Exists
- List each piece of existing code with file paths and line numbers
- Group by category (pages, components, API client, hooks, types, tests, etc.)
#### What Is Missing
- List any requirements that have no frontend implementation yet
### Acceptance Criteria Coverage
| Criterion | Status | Evidence |
|-----------|--------|----------|
| ... | ... | ... |
### Ticket Quality Issues
- List any ambiguities, missing details, or open decisions
- Suggest specific improvements to the ticket description
### Recommendations
- Actionable next steps for the team
### Tambora Test Case Coverage (if suite name provided)
| Code | Title | Coverage Status | Notes |
|------|-------|-----------------|-------|
| ... | ... | ... | ... |
QA mode (`--qa`) — trimmed report focused on testability:
## Ticket: [TICKET-ID] — [Title]
### Summary
Brief description of what the ticket asks for.
### Status
Current ticket status.
### Frontend Readiness
| Acceptance Criterion | Frontend Ready? | Notes |
|----------------------|----------------|-------|
| ... | Yes / No / Partial | ... |
> If any row is No or Partial, warn QA not to test those scenarios yet.
### Tambora Test Cases — [Suite Name]
| Code | Title | Severity | Testable? |
|------|-------|----------|-----------|
| ... | ... | ... | Yes / No / Partial |
The Testable? column cross-references each test case against the Frontend Readiness table — if the frontend isn't ready for a given scenario, mark it as not testable yet.
Then proceed directly to Step 8 to record results.
Only run this step if a Tambora suite name was provided and `--qa` is NOT present.
1. Call `mcp__tambora__check_connectivity`. If it returns `reachable: false`, skip this step and note that Tambora is unavailable.
2. Call `mcp__tambora__list_test_cases` with the suite name extracted from the arguments (and module if identifiable from context).

After presenting the report, ask the user:
"Would you like me to post any of these findings as a comment on a Jira ticket? If so, which ticket(s)?"
If the user says yes:
```sh
cat <<'EOF' | jira issue comment add <TICKET-ID> --template -
<comment body>
EOF
```

The user may want to post different parts of the report to different tickets (e.g., quality issues to the parent ticket, gap analysis to a subtask, Tambora coverage to the BE ticket). Support this by asking which sections to include for each ticket.
Only run this step if `--qa` was present in the arguments AND a Tambora suite name was provided.
This step walks QA through recording execution results for each test case, one at a time.
Call `mcp__tambora__check_connectivity`. If unreachable, abort and inform the user.
Ask the user:
"Do you have an existing test run code to reuse (e.g. TR-MPP-14)? If not, I'll create a new one."
- If the user provides one, reuse that `test_run_code` for the rest of this step.
- If not, call `mcp__tambora__create_test_run_from_suite` with the module and suite name extracted from arguments. Use the returned `test_run_code` going forward. Confirm to the user: "Created test run [code]. Let's record results."

For each test case from the suite (use the list already fetched in Step 6), ask the user one at a time:
"[TC-MPP-XXXX] — [Title] Status? (passed / failed / skipped / broken) — or press Enter to skip"
If the status is `failed` or `broken`, also ask: "Any error message to record? (optional)"

After going through all test cases, show a summary of the collected results and ask:
"Ready to submit these results to Tambora? (yes / no)"
If confirmed → call `mcp__tambora__add_test_run_results` with all collected results in one batch.
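The collect-then-submit shape of Step 8's loop can be sketched as follows. The test-case codes and statuses are illustrative stand-ins for user input, and the final `echo` stands in for the single batch call to `mcp__tambora__add_test_run_results`:

```shell
#!/bin/sh
# Illustrative: accumulate one result per test case, then emit the
# whole batch once at the end (stand-in for the Tambora call).
RESULTS=''
for entry in 'TC-MPP-0001:passed' 'TC-MPP-0002:failed' 'TC-MPP-0003:skipped'; do
  code=${entry%%:*}     # text before the first colon
  status=${entry#*:}    # text after the first colon
  # Append as a new line, adding a newline only if RESULTS is non-empty.
  RESULTS="${RESULTS:+$RESULTS
}$code=$status"
done

echo "$RESULTS"
```

Collecting everything first and submitting once keeps the interaction resumable: nothing is written to Tambora until the user confirms the summary.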
Ask the user:
"Mark this test run as completed? (yes / no)"
If yes → call `mcp__tambora__complete_test_run` with the `test_run_code`.
Confirm the final state to the user (run code, how many accepted/rejected, completion status).