Validates the implementation through systematic testing after the implement stage completes. Determines the test type (unit/integration/e2e) from the nature of the task, runs pytest, and reports results. Stage 4 of the dev-workflow pipeline. Use when the user says "运行测试" ("run tests"), "run tests", "test", or after implementation.
You are the Quality Assurance Engineer for the Modular RAG MCP Server. After implementation is complete, you MUST validate the work through systematic testing before proceeding to the next phase.
Prerequisite: This skill runs AFTER `implement` has completed. Spec files are located at `.github/skills/spec-sync/specs/`.
CRITICAL: The test type should be determined by the nature of the current task. Read the task's "测试方法" (test method) field in specs/06-schedule.md to decide.
| Task Characteristics | Recommended Test Type | Rationale |
|---|---|---|
| Single module, no external dependencies | Unit Tests | Fast, isolated, repeatable |
| Factory/Interface definition only | Unit Tests (with mocks/fakes) | Verify routing logic without real backend |
| Module needs real DB/filesystem | Integration Tests | Need to verify interaction with real dependencies |
| Pipeline/workflow orchestration | Integration Tests | Need to verify multi-module coordination |
| CLI scripts or end-user entry points | E2E Tests | Verify complete user workflow |
| Cross-module data flow (Ingestion→Retrieval) | Integration/E2E | Verify data flows correctly between modules |
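The decision table above can be sketched as a simple lookup. The task-nature labels below are illustrative, not part of the spec:

```python
# Hypothetical mapping mirroring the decision table; defaults to unit tests.
TEST_TYPE_BY_TASK_NATURE = {
    "single_module": "unit",
    "factory_interface": "unit",         # with mocks/fakes
    "real_dependencies": "integration",  # real DB/filesystem
    "orchestration": "integration",      # pipeline/workflow coordination
    "cli_entry_point": "e2e",
    "cross_module_flow": "integration",  # may escalate to e2e
}

def select_test_type(task_nature: str) -> str:
    """Return the recommended pytest directory name for a task nature."""
    return TEST_TYPE_BY_TASK_NATURE.get(task_nature, "unit")
```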
Goal: Determine what needs to be tested and which type of tests to run based on the current task phase.
Map modified source files to their test files:
- `src/libs/xxx/yyy.py` → `tests/unit/test_yyy.py`
- `src/core/xxx/yyy.py` → `tests/unit/test_yyy.py`
- `src/ingestion/xxx.py` → `tests/unit/test_xxx.py` or `tests/integration/test_xxx.py`

CRITICAL: The test type should be determined by the nature of the current task, not a fixed rule.
Decision Logic:
1. Read specs/06-schedule.md to find the "测试方法" field for the current task
2. `pytest -q tests/unit/test_xxx.py` → run unit tests
3. `pytest -q tests/integration/test_xxx.py` → run integration tests
4. `pytest -q tests/e2e/test_xxx.py` → run E2E tests

Output:
────────────────────────────────────
TEST SCOPE IDENTIFIED
────────────────────────────────────
Task: [C14] Pipeline orchestration (wiring the MVP together)
Modified Files:
- src/ingestion/pipeline.py
Test Type Decision:
- Task Nature: Pipeline orchestration (multi-module coordination)
- Spec Test Method: pytest -q tests/integration/test_ingestion_pipeline.py
- Selected: **Integration Tests**
Rationale: This task wires multiple modules together,
requiring real interactions between loader, splitter,
transform, and storage components.
────────────────────────────────────
Goal: Run the appropriate tests and capture results.
Actions:
```bash
# Check for existing test files
ls tests/unit/test_<module_name>.py
ls tests/integration/test_<module_name>.py

# Run specific unit tests
pytest -v tests/unit/test_<module_name>.py

# Run with coverage if available
pytest -v --cov=src/<module_path> tests/unit/test_<module_name>.py
```
If the spec requires tests but none exist:
────────────────────────────────────────
⚠️ MISSING TESTS DETECTED
────────────────────────────────────────
Module: <module_name>
Expected Test File: tests/unit/test_<module_name>.py
Status: NOT FOUND
Action Required:
Return to Stage 3 (implement) to create tests
following the test patterns in existing test files.
────────────────────────────────────────
Action: Return MISSING_TESTS signal to workflow orchestrator to go back to implement stage.
Goal: Interpret test results and determine next action.
If all tests pass:
────────────────────────────────────────
TESTS PASSED
────────────────────────────────────────
Module: <module_name>
Tests Run: X
Tests Passed: X
Coverage: XX% (if available)
Ready to proceed to next phase.
────────────────────────────────────────
Action: Return PASS signal to workflow orchestrator.
If any tests fail:
────────────────────────────────────────
TESTS FAILED
────────────────────────────────────────
Module: <module_name>
Tests Run: X
Tests Failed: Y
Failed Tests:
1. test_xxx - AssertionError: expected A, got B
2. test_yyy - ImportError: module not found
Root Cause Analysis:
- [Analyze the failure and identify the issue]
Suggested Fix:
- [Provide specific fix suggestions]
────────────────────────────────────────
Action: Return FAIL signal with detailed feedback to implement for iteration.
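One way to derive these signals is from pytest's documented exit codes (0 = all passed, 1 = some failed, 5 = no tests collected). The `run_and_signal` wrapper below is a sketch, not part of the spec:

```python
import subprocess

# pytest exit codes: 0 = all tests passed, 1 = some tests failed,
# 2 = interrupted, 3 = internal error, 4 = usage error, 5 = no tests collected
SIGNALS = {0: "PASS", 1: "FAIL", 5: "MISSING_TESTS"}

def signal_from_exit_code(code: int) -> str:
    """Translate a pytest exit code into a workflow signal."""
    return SIGNALS.get(code, "ERROR")

def run_and_signal(test_path: str) -> str:
    """Run pytest on one path and return PASS / FAIL / MISSING_TESTS / ERROR."""
    result = subprocess.run(["pytest", "-q", test_path],
                            capture_output=True, text=True)
    return signal_from_exit_code(result.returncode)
```

Note that exit code 5 (no tests collected) is a runtime counterpart to the file-existence check in the missing-tests stage.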
Goal: Enable iterative improvement until tests pass.
1. Generate Fix Report: create a structured report with the failed tests, root cause analysis, and suggested fixes (as in the TESTS FAILED template above).
2. Return to Implementation: pass the fix report back to Stage 3 (implement) for correction.
3. Re-test: after the implementation is updated, run the tests again.
Test naming convention: `test_<function>_<scenario>_<expected_result>` — for example, `test_embed_empty_input_returns_empty_list`.

Pytest markers:

```python
@pytest.mark.unit         # Fast, isolated tests
@pytest.mark.integration  # Tests with real dependencies
@pytest.mark.e2e          # End-to-end tests
@pytest.mark.slow         # Long-running tests
```
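Custom markers should be registered so pytest does not emit `PytestUnknownMarkWarning` (and does not error under `--strict-markers`). A minimal `pytest.ini` sketch:

```ini
[pytest]
markers =
    unit: fast, isolated tests
    integration: tests with real dependencies
    e2e: end-to-end tests
    slow: long-running tests
```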
Before marking tests as complete, verify:
- The selected test type matches the task's "测试方法" in specs/06-schedule.md
- Expected test files exist for every modified module
- All selected tests pass and the correct signal (PASS/FAIL/MISSING_TESTS) was returned