Interactive QA session where user reports bugs or issues conversationally, and the agent files GitHub or Jira issues. Explores the codebase in the background for context and domain language. Use when user wants to report bugs, do QA, file issues conversationally, or mentions "QA session".
Run an interactive QA session. The user describes problems they're encountering. You clarify, explore the codebase for context, and file issues (GitHub or Jira) that are durable, user-focused, and use the project's domain language.
Let the user describe the problem in their own words. Ask at most 2-3 short clarifying questions, focused on what you need to reproduce the problem and understand the expected behavior.
Do NOT over-interview. If the description is clear enough to file, move on.
While talking to the user, kick off an Agent (subagent_type=Explore) in the background to understand the relevant area. The goal is NOT to find a fix; it's to gather the surrounding context and the project's domain language.
This context helps you write a better issue — but the issue itself should NOT reference specific files, line numbers, or internal implementation details.
On the first issue of the session, ask the user: "Should I file issues in GitHub or Jira?"
If GitHub:
- Run `gh auth status` to verify authentication. If not authenticated, ask the user to run `gh auth login`.
- Run `gh repo view --json nameWithOwner -q .nameWithOwner` to identify the current repository. Show the user the repo name and ask: "I'll file issues in <owner/repo>. Is that correct?" Wait for confirmation before proceeding.

If Jira:
- Call `getVisibleJiraProjects` to verify connectivity and list available projects. If the connection fails, tell the user their Jira/Atlassian integration is not authenticated and ask them to set it up.

Remember the user's choice for the remainder of the session; do not ask again for subsequent issues.
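The GitHub branch of the setup can be sketched as a short shell sequence. This is a hedged illustration, not the literal tool calls: `gh` is stubbed below so the snippet runs standalone, and `owner/repo` is a placeholder. In a real session you would run the live `gh` commands directly.

```shell
# Sketch of the GitHub session-setup checks. The gh() stub only makes this
# snippet self-contained for illustration; remove it to run the real commands.
gh() {
  case "$1" in
    auth) echo "Logged in to github.com" ;;   # stand-in for: gh auth status
    repo) echo "owner/repo" ;;                # stand-in for: gh repo view ...
  esac
}

# Verify authentication; fall back to asking the user to log in.
gh auth status || echo "Not authenticated - please run: gh auth login"

# Identify the current repository and confirm it with the user.
repo=$(gh repo view --json nameWithOwner -q .nameWithOwner)
echo "I'll file issues in ${repo}. Is that correct?"
```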
Before filing, decide whether this is a single issue or needs to be broken down into multiple issues.
Break down when the report covers several independent problems, each of which could be fixed and verified on its own.
Keep as a single issue when the symptoms all trace back to one coherent behavior, even if the user described several of them.
Do NOT ask the user to review before filing — just file and share URLs/keys.
Issues must be durable — they should still make sense after major refactors. Write from the user's perspective.
If GitHub: Create issues with `gh issue create`. Add a bug label (check available labels with `gh label list` first; create with `gh label create` if needed).
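The label check above can be sketched as follows. This is a hedged, self-contained illustration: `gh` is stubbed so the snippet runs on its own, and the label names, issue title, and URL are made up.

```shell
# Ensure a "bug" label exists before filing. The gh() stub is for
# illustration only; remove it to run the real commands.
gh() {
  case "$1 $2" in
    "label list")   printf 'enhancement\ndocumentation\n' ;;  # no "bug" label yet
    "label create") echo "created label: $3" ;;
    "issue create") echo "https://github.com/owner/repo/issues/42" ;;
  esac
}

# Create the "bug" label only if it does not already exist.
if ! gh label list | grep -qx "bug"; then
  gh label create bug
fi

# File the issue and capture the URL that gh prints, to share with the user.
url=$(gh issue create --title "Example bug report" --body "..." --label bug)
echo "$url"
```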
If Jira: Create issues using the Atlassian MCP tool createJiraIssue with the selected project. Use issue type "Bug". Use additional_fields to populate:
- Priority: "High" for broken core functionality, "Medium" for degraded behavior, "Low" for cosmetic issues. Use `{"priority": {"name": "<level>"}}`.
- Labels (e.g. `["bug", "qa-session"]`).

Use this template:
## What happened
[Describe the actual behavior the user experienced, in plain language]
## What I expected
[Describe the expected behavior]
## Steps to reproduce
1. [Concrete, numbered steps a developer can follow]
2. [Use domain terms from the codebase, not internal module names]
3. [Include relevant inputs, flags, or configuration]
## Additional context
[Any extra observations from the user or from codebase exploration that help frame the issue — e.g. "this only happens when using the Docker layer, not the filesystem layer" — use domain language but don't cite files]
Create issues in dependency order (blockers first) so you can reference real issue numbers/keys.
If GitHub: Use `#<issue-number>` to reference other issues.
If Jira: Use the Jira issue key (e.g., `PROJ-123`) to reference other issues. Use `createIssueLink` to create blocking relationships between issues.
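For GitHub, `gh issue create` prints the new issue's URL, and the trailing path segment is the number later issues can reference. A minimal sketch, using a hard-coded example URL (`owner/repo` and `123` are placeholders):

```shell
# The URL printed by `gh issue create` ends in the issue number, e.g.:
issue_url="https://github.com/owner/repo/issues/123"  # in practice: issue_url=$(gh issue create ...)

# Strip everything up to the last "/" to get the bare issue number.
issue_num="${issue_url##*/}"
echo "Blocked by #${issue_num}"
```

Filing blockers first and capturing their numbers this way lets each sub-issue's "Blocked by" section cite a real reference.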
Use this template for each sub-issue:
## Parent issue
<issue-reference> (if you created a tracking issue) or "Reported during QA session"
## What's wrong
[Describe this specific behavior problem — just this slice, not the whole report]
## What I expected
[Expected behavior for this specific slice]
## Steps to reproduce
1. [Steps specific to THIS issue]
## Blocked by
- <issue-reference> (if this issue can't be fixed until another is resolved)
Or "None — can start immediately" if no blockers.
## Additional context
[Any extra observations relevant to this slice]
When creating a breakdown, file blockers first so later sub-issues can reference real issue numbers/keys, and make each slice's relationships explicit in its "Blocked by" section.
After filing, print all issue URLs/keys (with blocking relationships summarized) and ask: "Next issue, or are we done?"
Keep going until the user says they're done. Each issue is independent — don't batch them.