Shared methodology for running test suites (Cypress, Playwright, Robot Framework, k6) with reliable output capture via log files and HTML reports. Use this skill when executing any test framework.
This skill defines the reliable workflow for running and analyzing tests across all test frameworks. The core problem it solves: terminal output is often truncated and foreground processes get interrupted, so we always run tests as background processes, capture output to log files, and inspect HTML reports in a browser.
- Always redirect output with `> logfile 2>&1` and set `isBackground: true` in the terminal tool.
- Never use `| tee` — it requires a foreground terminal, which can be interrupted or time out.
- Create a `test-logs/` directory at the workspace root before any test run.

Before running any test suite, auto-discover the project layout by searching for config files:
- `cypress.config.js` or `cypress.config.ts` — read `baseUrl` from it
- `playwright.config.ts` or `playwright.config.js` — read `baseURL` from it
- `*.robot` files and `requirements.txt` — find the URL variable in `.resource` files
- `*.js` files with `import http from 'k6/http'` — find the base URL in the scripts
- `pom.xml` — standard Maven test structure

Also discover test users and credentials from:
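The discovery step above can be sketched as a single `find` invocation. This is a hedged sketch: the function name, depth limit, and `node_modules` exclusion are illustrative assumptions, not requirements.

```shell
# Sketch: list candidate framework config files under a workspace root.
# The -maxdepth limit and node_modules exclusion are assumptions for speed.
discover_configs() {
  root="${1:-.}"
  find "$root" -maxdepth 4 \
    \( -name 'cypress.config.*' -o -name 'playwright.config.*' \
       -o -name '*.robot' -o -name 'pom.xml' \) \
    -not -path '*/node_modules/*' 2>/dev/null
}
```

Read the discovered files afterward to extract `baseUrl`/`baseURL` values rather than guessing them.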
Every test run follows this 3-phase pattern:
# Foreground terminal: create log dir
mkdir -p test-logs
# Background terminal (isBackground: true): run tests
cd <workspace> && <test command> > test-logs/<framework>-run.log 2>&1
# Check if the run has finished by looking for completion markers
while ! grep -q '<MARKER>' test-logs/<framework>-run.log 2>/dev/null; do sleep 15; done && echo 'DONE'
Completion markers by framework:

- Cypress: `(Run Finished)`
- Playwright: `passed` or `failed` in the summary line
- Robot Framework: `^Output:`
- k6: `iteration_duration`
- JUnit/Maven: `BUILD SUCCESS` or `BUILD FAILURE`

# Use the file read tool to read test-logs/<framework>-run.log
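The generic poll loop above can hang forever if a run crashes before printing its marker. A timeout-guarded variant is safer; the function name and the default timeout/interval values here are illustrative assumptions.

```shell
# Sketch: poll for a completion marker, giving up after a timeout so a
# crashed run cannot block forever. Defaults (1800s/15s) are assumptions.
wait_for_marker() {
  log="$1"; marker="$2"
  timeout="${3:-1800}"; interval="${4:-15}"; elapsed=0
  while ! grep -q "$marker" "$log" 2>/dev/null; do
    if [ "$elapsed" -ge "$timeout" ]; then
      echo 'TIMED OUT'; return 1
    fi
    sleep "$interval"; elapsed=$((elapsed + interval))
  done
  echo 'DONE'
}
```

On `TIMED OUT`, read whatever the log contains and report the partial state instead of polling forever.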
Before running any test suite, verify in a foreground terminal:
# 1. Check the app is reachable (discover the URL from config files first)
curl -s -o /dev/null -w "%{http_code}" <baseUrl>
# 2. Create test-logs directory
mkdir -p test-logs
If the app is not reachable, inform the user and stop.
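Steps 1 and 2 can be combined into one preflight helper. A sketch under assumptions: the function name is hypothetical, and treating anything other than HTTP 200 as "not reachable" may be too strict for apps that redirect.

```shell
# Sketch: abort unless the app answers HTTP 200, then prepare the log dir.
# Treating any non-200 status as unreachable is an assumption.
preflight() {
  url="$1"
  code=$(curl -s -o /dev/null -w "%{http_code}" "$url")
  if [ "$code" != "200" ]; then
    echo "App not reachable at $url (HTTP $code); stopping." >&2
    return 1
  fi
  mkdir -p test-logs
}
```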
# BACKGROUND: Run all
cd <workspace>/<cypress-dir> && npx cypress run > ../test-logs/cypress-run.log 2>&1
# BACKGROUND: Run one spec
cd <workspace>/<cypress-dir> && npx cypress run --spec "cypress/e2e/<spec>.cy.js" > ../test-logs/cypress-run.log 2>&1
# POLL: Wait for completion
while ! grep -q '(Run Finished)' test-logs/cypress-run.log 2>/dev/null; do sleep 15; done && echo 'CYPRESS DONE'
# Artifacts: screenshots/, videos/, test-logs/cypress-run.log
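Once the poll confirms completion, the per-spec results table can be pulled straight from the log. A sketch assuming Cypress's standard `(Run Finished)` footer; the 25-line window is an assumption about table length.

```shell
# Sketch: print the lines following Cypress's "(Run Finished)" footer,
# which contain the per-spec results table. 25 lines is an assumption.
cypress_summary() {
  grep -A 25 '(Run Finished)' "$1"
}
```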
# BACKGROUND: Run all
cd <workspace>/<playwright-dir> && npx playwright test > ../test-logs/playwright-run.log 2>&1
# BACKGROUND: Run one spec
cd <workspace>/<playwright-dir> && npx playwright test tests/<spec>.spec.ts > ../test-logs/playwright-run.log 2>&1
# POLL: Wait for completion
while ! grep -qE '[0-9]+ passed|[0-9]+ failed' test-logs/playwright-run.log 2>/dev/null; do sleep 15; done && echo 'PLAYWRIGHT DONE'
# View HTML report (BEST for debugging)
cd <playwright-dir> && npx playwright show-report --port 9323 &
# Artifacts: playwright-report/, test-results/, test-logs/playwright-run.log
# BACKGROUND: Run all
cd <workspace>/<robot-dir> && source .venv/bin/activate && robot --outputdir results tests/ > ../test-logs/robot-run.log 2>&1
# BACKGROUND: Run one file
cd <workspace>/<robot-dir> && source .venv/bin/activate && robot --outputdir results tests/<file>.robot > ../test-logs/robot-run.log 2>&1
# POLL: Wait for completion
while ! grep -q '^Output:' test-logs/robot-run.log 2>/dev/null; do sleep 15; done && echo 'ROBOT DONE'
# View HTML report (BEST for debugging)
cd <robot-dir>/results && python3 -m http.server 9324 &
# Artifacts: results/report.html, results/log.html, test-logs/robot-run.log
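Robot Framework also prints a one-line tally near the end of its console output, which is worth extracting alongside the HTML report. A sketch: the `N tests, N passed, N failed` wording matches Robot Framework 4+; older versions phrase it differently.

```shell
# Sketch: pull Robot Framework's final tally line from the run log.
# Pattern assumes the RF 4+ wording "N tests, N passed, N failed".
robot_summary() {
  grep -E '[0-9]+ tests?, [0-9]+ passed, [0-9]+ failed' "$1"
}
```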
# BACKGROUND: Run a test
cd <workspace>/<k6-dir> && k6 run <script>.js --out json=../test-logs/k6-<name>-results.json > ../test-logs/k6-<name>-run.log 2>&1
# POLL: Wait for completion
while ! grep -q 'iteration_duration' test-logs/k6-<name>-run.log 2>/dev/null; do sleep 15; done && echo 'K6 DONE'
# Artifacts: test-logs/k6-<name>-run.log, test-logs/k6-<name>-results.json
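The JSON results file is NDJSON (one JSON object per line), so basic counts can be extracted without `jq`. A sketch assuming k6's `--out json` point format, where each sample line carries a `"metric"` key; the function name is hypothetical.

```shell
# Sketch: count http_req_duration samples in k6's NDJSON output.
# The '"metric":"http_req_duration"' key layout of --out json is assumed.
k6_request_samples() {
  grep -c '"metric":"http_req_duration"' "$1"
}
```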
# BACKGROUND: Run all unit tests
cd <workspace> && mvn test -B > test-logs/junit-run.log 2>&1
# BACKGROUND: Run specific test class
cd <workspace> && mvn test -Dtest=<TestClass> -B > test-logs/junit-run.log 2>&1
# POLL: Wait for completion
while ! grep -qE 'BUILD SUCCESS|BUILD FAILURE' test-logs/junit-run.log 2>/dev/null; do sleep 10; done && echo 'JUNIT DONE'
# Artifacts: test-logs/junit-run.log, target/surefire-reports/*.txt
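Surefire writes one `Tests run: …` summary line per test class; aggregating them gives a quick per-class overview. A sketch assuming the default `surefire-reports/*.txt` text report layout.

```shell
# Sketch: collect the per-class "Tests run:" lines from Surefire reports.
# The default surefire-reports/*.txt layout is an assumption.
surefire_summary() {
  grep -h 'Tests run:' "$1"/*.txt
}
```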
After every test run, follow this sequence:
Use the file read tool to open the appropriate log file (e.g., test-logs/cypress-run.log). Extract:
For Playwright and Robot Framework: serve the report directory and open it in a browser using browser tools. These HTML reports are far more informative than the raw log files.
For failures:

- Use the `view_image` tool to see failure screenshots.
- Use `npx playwright show-trace` to step through Playwright traces.

Always format results as a clear summary table with:
Launch each as a background process. Wait for each to complete before starting the next (they share the same app and may conflict):
Never start the next suite until the current one's poll confirms completion.
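The sequential rule above can be sketched as one helper that launches a suite in the background and blocks on its completion marker before returning. The function name, the example markers, and the `POLL_INTERVAL` override are illustrative assumptions.

```shell
# Sketch: launch a suite in the background, then block until its completion
# marker appears in the log. POLL_INTERVAL override is an assumption.
run_suite() {
  name="$1"; marker="$2"; shift 2
  "$@" > "test-logs/$name-run.log" 2>&1 &
  while ! grep -q "$marker" "test-logs/$name-run.log" 2>/dev/null; do
    sleep "${POLL_INTERVAL:-15}"
  done
  echo "$name DONE"
}
# Usage (one suite at a time, never in parallel):
# run_suite cypress '(Run Finished)' npx cypress run
```

Because `run_suite` does not return until the marker appears, calling it once per suite enforces the no-overlap rule automatically.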