Generate a test plan from a strategy (RHAISTRAT), with optional ADR for extra technical depth
Generate a complete test plan for a RHOAI feature based on a refined strategy, and optionally an ADR document for additional technical depth.
/test-plan.create <RHAISTRAT_KEY> [ADR_FILE_PATH]
Examples:
```
/test-plan.create RHAISTRAT-400
/test-plan.create RHAISTRAT-400 /path/to/adr.pdf
```

Parse $ARGUMENTS to extract:
- `RHAISTRAT_KEY`: the strategy issue key (e.g., RHAISTRAT-400)
- `ADR_FILE_PATH` (optional): path to an ADR document

If no arguments are provided and a strategy file was just generated by /strat.create or /strat.refine in this session, use it automatically and proceed directly to Step 1.

If no arguments are provided and no strategy file is available from the current session, ask the user for:

- The RHAISTRAT key (e.g., RHAISTRAT-400)
- The feature directory name (e.g., `mcp_catalog`). If not provided, derive it from the feature name.

Run `uv run python scripts/frontmatter.py schema test-plan` via Bash. If it fails with a PyYAML import error, ask the user to install dependencies:
```
uv pip install -r requirements.txt
```

Do NOT proceed until this succeeds.
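As an illustration, this gate can be sketched in Python. Only the command line and the PyYAML failure mode come from this document; the helper names and classification logic below are hypothetical:

```python
import subprocess

def schema_check_status(returncode: int, stderr: str) -> str:
    """Classify the outcome of `frontmatter.py schema test-plan` (illustrative)."""
    if returncode == 0:
        return "ok"            # safe to proceed
    if "PyYAML" in stderr:
        return "install-deps"  # ask the user to run: uv pip install -r requirements.txt
    return "error"             # any other failure: do not proceed

def run_schema_check() -> str:
    # Assumption: the script lives at scripts/frontmatter.py relative to the repo root.
    result = subprocess.run(
        ["uv", "run", "python", "scripts/frontmatter.py", "schema", "test-plan"],
        capture_output=True, text=True,
    )
    return schema_check_status(result.returncode, result.stderr)
```

The point of the separate classifier is that only a PyYAML import error gets the "install dependencies" prompt; every other failure blocks progress outright.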
Verify that the Atlassian MCP integration is available by attempting to use the mcp__atlassian__getJiraIssue tool.
If the MCP tool is not available, ask the user to provide a local strategy file from `artifacts/strat-tasks/` as an alternative.

If the MCP tool is available, proceed to Step 1.
Step 1: Fetch the strategy issue with `mcp__atlassian__getJiraIssue`. The strategy contains both the technical approach (HOW) and the business need (WHAT/WHY). If auto-detected, read the local file from `artifacts/strat-tasks/` instead.

Step 2: Invoke these three forked analyzer skills in parallel using the Skill tool. Each runs in its own isolated context with the strategy and ADR content.
Pass the full strategy content (and ADR content if available) inline in the skill arguments so each sub-agent has the source material.
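The fan-out/fan-in shape of this step can be sketched as follows. The analyzer functions are stand-ins for the forked skills, and their return values are invented placeholders; only the three skill names and the parallel-then-merge flow come from this document:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the three analyzer skills. Each returns its
# section findings plus a "gaps" list, mirroring the ## Gaps output.
def analyze_endpoints(strategy: str) -> dict:
    return {"sections": [1, 4], "gaps": ["endpoint paths unspecified"]}

def analyze_risks(strategy: str) -> dict:
    return {"sections": [2, 7, 8], "gaps": []}

def analyze_infra(strategy: str) -> dict:
    return {"sections": [3, 9], "gaps": ["cluster size unknown"]}

def fan_out(strategy: str) -> list[dict]:
    """Run all three analyzers concurrently and collect results in order."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(f, strategy)
                   for f in (analyze_endpoints, analyze_risks, analyze_infra)]
        return [f.result() for f in futures]

findings = fan_out("...strategy text...")
# Fan-in: pool every analyzer's gaps for Step 3.5.
all_gaps = [g for result in findings for g in result["gaps"]]
```

Collecting `futures` in submission order keeps the results aligned with the analyzer list, so findings for Sections 1/4, 2/7/8, and 3/9 can be mapped back deterministically.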
- `test-plan.analyze.endpoints`: Extracts feature scope (in-scope, out-of-scope, test objectives) and identifies API endpoints/methods under test. Produces findings for Sections 1 and 4.
- `test-plan.analyze.risks`: Determines test levels, test types, priority definitions, risks with mitigations, and non-functional requirement assessments. Produces findings for Sections 2, 7, and 8.
- `test-plan.analyze.infra`: Identifies test environment configuration, test data, test users, infrastructure, and tooling requirements. Produces findings for Sections 3 and 9.

Once all three sub-agents return:
- Collect the `## Gaps` sections for Step 3.5
- Run `mkdir -p <feature_name>/test_cases`
- Read `${CLAUDE_SKILL_DIR}/test-plan-template.md` using the Read tool
- Generate `<feature_name>/TestPlan.md` by filling in the template with the gathered information. Follow the template structure exactly — do not add, remove, or reorder sections. Do NOT write frontmatter manually — Step 3.1 handles it.
- Create `<feature_name>/README.md` with:
After generating TestPlan.md, set its frontmatter using the frontmatter.py script via Bash. This validates the metadata against the schema before writing.
```
uv run python scripts/frontmatter.py set <feature_name>/TestPlan.md \
  feature="<feature_name>" \
  strat_key=<RHAISTRAT_KEY> \
  version=1.0.0 \
  status=Draft \
  author="<team_name>" \
  additional_docs="<comma-separated list of doc links, or []>"
```
- `additional_docs`: include the ADR link and any other document links provided by the user. Use `[]` if none.
- `last_updated` is auto-set to today's date by the script.
- `reviewers` defaults to `[]`.

If the script exits with an error, fix the field values and retry — do not write frontmatter by hand.
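The validate-before-write behavior can be sketched as below. The required field names mirror the command above, but the allowed status values and the validation rules are assumptions, not the real schema:

```python
# Required fields mirror the `frontmatter.py set` command above.
REQUIRED = {"feature", "strat_key", "version", "status", "author", "additional_docs"}
# Assumed status vocabulary, for illustration only.
ALLOWED_STATUS = {"Draft", "In Review", "Approved"}

def validate_frontmatter(fields: dict) -> list[str]:
    """Return a list of problems; an empty list means the metadata may be written."""
    errors = [f"missing field: {k}" for k in sorted(REQUIRED - fields.keys())]
    if "status" in fields and fields["status"] not in ALLOWED_STATUS:
        errors.append(f"invalid status: {fields['status']!r}")
    return errors
```

Under this sketch, nothing is written to disk until `validate_frontmatter` returns an empty list, which is the property the script enforces.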
After generating the test plan, collect all gaps reported by the three sub-agents from Step 2.
- Extract gaps from each sub-agent's `## Gaps` output section
- Write `<feature_name>/TestPlanGaps.md` with all gaps organized by source sub-agent:
```
# Gaps — <Feature Name>

## Scope & Endpoints
{gaps from test-plan.analyze.endpoints, or "No gaps identified."}

## Test Strategy & Risks
{gaps from test-plan.analyze.risks, or "No gaps identified."}

## Environment & Infrastructure
{gaps from test-plan.analyze.infra, or "No gaps identified."}
```
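A sketch of how that file could be assembled from the three gap lists. The function name and dict keys are illustrative; the headings and fallback text come from the template above:

```python
def render_gaps(feature: str, gaps: dict[str, list[str]]) -> str:
    """Build the TestPlanGaps.md body; empty lists fall back to the template text."""
    sections = [("Scope & Endpoints", "endpoints"),
                ("Test Strategy & Risks", "risks"),
                ("Environment & Infrastructure", "infra")]
    lines = [f"# Gaps — {feature}", ""]
    for title, key in sections:
        lines.append(f"## {title}")
        items = gaps.get(key, [])
        if items:
            lines.extend(f"- {g}" for g in items)
        else:
            lines.append("No gaps identified.")
        lines.append("")
    return "\n".join(lines).rstrip() + "\n"

# gap_count (used by the frontmatter step below) is the total across sections.
gap_count = sum(len(v) for v in {"endpoints": ["missing paths"],
                                 "risks": [], "infra": []}.values())
```

Rendering and counting from the same per-analyzer dict keeps the file body and the `gap_count` frontmatter field consistent by construction.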
Set frontmatter on TestPlanGaps.md using the frontmatter.py script:
```
uv run python scripts/frontmatter.py set <feature_name>/TestPlanGaps.md \
  feature="<feature_name>" \
  strat_key=<RHAISTRAT_KEY> \
  status=Open \
  gap_count=<number_of_gaps>
```
- `gap_count`: total number of individual gaps across all three sections
- If no gaps exist, set `status=Resolved` and `gap_count=0`
- `last_updated` is auto-set by the script

If gaps exist, present the user with a structured action menu via AskUserQuestion. List the gaps first, then offer numbered options. Example:
The following gaps were identified in the test plan:
- Endpoint paths for the catalog API are not specified — an API spec or ADR would resolve this
- RBAC roles are unclear — a feature refinement doc would help
- KServe CSI configuration details are missing — a design doc would resolve this
What would you like to do?
- Provide documents — paste file paths (ADR, API spec, design doc) to resolve gaps
- Proceed to review — continue with the test plan as-is (gaps will be noted in TestPlanGaps.md)
- Proceed to review + generate test cases — continue and automatically run /test-plan.create-cases after review
If the user chooses option 1: Read the provided documents, re-run only the relevant sub-agents from Step 2 with the new material, update the test plan, update TestPlanGaps.md (removing resolved gaps, adding any new ones), update the gaps frontmatter (gap_count, status), then present the menu again with remaining gaps (if any).
If the user chooses option 2 or no gaps exist: Proceed to Step 4.
If the user chooses option 3: Proceed to Step 4, and after the review is complete, automatically invoke /test-plan.create-cases with the feature directory.
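The three branches above amount to a small dispatch table, sketched here with illustrative step names (nothing below is a real command; it only mirrors the control flow):

```python
def next_actions(choice: int, has_gaps: bool) -> list[str]:
    """Map a menu choice to the ordered steps that follow (names are illustrative)."""
    if choice == 1 and has_gaps:
        # Re-run only the relevant analyzers, update plan + gaps, then re-prompt.
        return ["read-docs", "rerun-analyzers", "update-plan-and-gaps", "show-menu"]
    if choice == 3:
        return ["review", "create-cases"]
    return ["review"]  # option 2, or no gaps at all
```

Note that option 1 loops back to the menu, so the flow only reaches the review step through option 2, option 3, or the no-gaps path.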
After the gaps flow is complete, invoke the internal test-plan.review skill with the feature directory.
The reviewer runs the quality rubric (5 criteria, 0-2 each, 10-point scale).
The reviewer handles auto-revision internally (up to 2 cycles) and writes <feature_name>/TestPlanReview.md with scores and feedback.
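The scoring arithmetic and revision cap can be sketched as follows. The 5-criterion, 0-2-per-criterion rubric and the 2-cycle limit come from the text above; the criterion names and the passing threshold of 8 are assumptions for illustration:

```python
def rubric_total(scores: dict[str, int]) -> int:
    """Sum a 5-criterion rubric where each criterion scores 0-2 (max 10)."""
    assert len(scores) == 5 and all(0 <= s <= 2 for s in scores.values())
    return sum(scores.values())

def should_revise(total: int, cycles_done: int, passing: int = 8) -> bool:
    # The 2-cycle cap is from the text above; passing=8 is an assumed threshold.
    return total < passing and cycles_done < 2
```

In other words, a plan scoring below the threshold triggers at most two internal revision cycles before the review is written out regardless.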
Handle the review output:
- The review score and status are recorded in `<feature_name>/TestPlanReview.md` frontmatter.
- Any remaining gaps stay documented in `TestPlanGaps.md` (run `/test-plan.resolve-feedback <PR_URL>` after publishing to address reviewer comments).
- Non-functional requirement assessments come from `test-plan.analyze.risks` — each category must be addressed or marked Not Applicable.
- To generate test cases next, run `/test-plan.create-cases`; for coverage analysis, run `/coverage-assessment`.