Run targeted test commands and summarize results.
Use this skill for the mechanics of selecting, running, and reporting targeted validation commands.
Before running anything:
Search shared memory for the project's test command:
$TAKT_CMD memory search "test command" --namespace global --limit 3
If a prior agent has recorded the command, use it directly.
If not in memory, detect from the project root (at repo/):
repo/pyproject.toml present → likely uv run pytest or uv run python -m unittest
repo/package.json present → likely npm test or npx jest
repo/Cargo.toml present → likely cargo test
repo/go.mod present → likely go test ./...
repo/pom.xml or repo/build.gradle present → likely mvn test or ./gradlew test
Check repo/README.md or repo/.takt/config.yaml (common.test_command) for the authoritative command.
Once you have determined the command, write it to shared memory so future agents don't need to detect it:
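The detection order above can be sketched as a small shell helper. This is an illustrative sketch, not part of the skill's tooling: the function name, the precedence among marker files, and the default root of repo/ are all assumptions.

```shell
#!/bin/sh
# Sketch: guess a likely test command from marker files under the
# project root (default repo/). Precedence order is an assumption.
detect_test_command() {
  root="${1:-repo}"
  if [ -f "$root/pyproject.toml" ]; then
    echo "uv run pytest"
  elif [ -f "$root/package.json" ]; then
    echo "npm test"
  elif [ -f "$root/Cargo.toml" ]; then
    echo "cargo test"
  elif [ -f "$root/go.mod" ]; then
    echo "go test ./..."
  elif [ -f "$root/pom.xml" ]; then
    echo "mvn test"
  elif [ -f "$root/build.gradle" ]; then
    echo "./gradlew test"
  else
    # No marker file: consult README.md / .takt/config.yaml by hand.
    echo ""
  fi
}

detect_test_command "$@"
```

A real project may match several markers at once (e.g. a monorepo with both package.json and go.mod), which is why the README or .takt/config.yaml remains the authoritative source.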
$TAKT_CMD memory add "Test command for this project: <command>" \
--namespace global --source tester
For targeted runs, most frameworks accept a file path or module argument:
<test-command> <path/to/test_file>. Use this form when the framework supports it.
Fall back to running the full suite only when the framework does not support targeted runs.
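The targeted form looks like this across common frameworks. The file paths and package names below are hypothetical, and the DRY_RUN wrapper exists only so the sketch prints the commands instead of running real suites:

```shell
#!/bin/sh
# Echo each command instead of executing it when DRY_RUN is set.
run() { if [ -n "$DRY_RUN" ]; then echo "$*"; else "$@"; fi; }

DRY_RUN=1
run uv run pytest tests/test_parser.py   # one pytest file
run npx jest src/parser.test.ts          # one jest file
run cargo test parser                    # cargo tests whose names match "parser"
run go test ./internal/parser/           # one Go package
```

Note that cargo test filters by test name rather than file path, and go test targets a package directory; "file path or module argument" covers both shapes.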
Identify the production files and behaviors touched by the bead, and choose the narrowest targeted invocation that directly exercises that behavior. Escalate to multiple specific targets only when one is insufficient to cover the changed surface area.
Summaries should be concrete and reproducible: state the exact command that was run and the result observed.
Return a blocked or follow-up-ready result when the tests cannot be run or their outcome cannot be trusted.
The goal is not maximum test volume. The goal is the minimum trustworthy automated evidence for the assigned bead.