Use when working with Testmo, a test management platform, for monitoring and analysis. Covers project organization, test case and suite management, automation run tracking, exploratory session review, milestone progress, and field configuration. Use when managing test cases in Testmo, reviewing automated and manual test results, or tracking QA progress.
Manage and analyze Testmo projects, test suites, automation runs, and results.
```bash
#!/bin/bash
# Testmo API helper: wraps curl with the required auth header.
# Requires TESTMO_URL (instance base URL) and TESTMO_API_TOKEN to be set.
testmo_api() {
    local method="${1:-GET}"
    local endpoint="$2"
    local data="${3:-}"
    if [ -n "$data" ]; then
        curl -s -X "$method" \
            -H "Authorization: Bearer ${TESTMO_API_TOKEN}" \
            -H "Content-Type: application/json" \
            "${TESTMO_URL}/api/v1/${endpoint}" \
            -d "$data"
    else
        curl -s -X "$method" \
            -H "Authorization: Bearer ${TESTMO_API_TOKEN}" \
            "${TESTMO_URL}/api/v1/${endpoint}"
    fi
}
```
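The helper assumes `TESTMO_URL` and `TESTMO_API_TOKEN` are exported. A minimal guard can fail fast before any curl call; the function name `require_env` is illustrative, not part of the Testmo API:

```shell
#!/bin/bash
# Fail fast if the Testmo credentials are not set.
# `require_env` is an illustrative helper name, not a Testmo convention.
require_env() {
  local var missing=0
  for var in TESTMO_URL TESTMO_API_TOKEN; do
    if [ -z "${!var:-}" ]; then
      echo "Missing required environment variable: $var" >&2
      missing=1
    fi
  done
  return "$missing"
}
```

Call `require_env || exit 1` at the top of each script so a missing token produces one clear error instead of an opaque 401 from every request.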
Always discover projects and resources before querying specifics.
```bash
#!/bin/bash
# Phase 1 discovery: list projects first (so IDs can be discovered even
# without an argument), then drill into one project's resources.
echo "=== Testmo Projects ==="
testmo_api GET "projects" | jq -r '
    .result[] | "\(.id)\t\(.name)\tstatus=\(.status)"
' | column -t | head -15
echo ""

echo "=== Test Suites ==="
PROJECT_ID="${1:?Project ID required}"
testmo_api GET "projects/${PROJECT_ID}/suites" | jq -r '
    .result[] | "\(.id)\t\(.name)\tcases=\(.case_count // 0)"
' | column -t | head -15
echo ""

echo "=== Recent Automation Runs ==="
testmo_api GET "projects/${PROJECT_ID}/automation/runs?limit=10" | jq -r '
    .result[] | "\(.id)\t\(.name)\tstatus=\(.status_text)\tpassed=\(.passed_count)\tfailed=\(.failed_count)"
' | column -t
echo ""

echo "=== Milestones ==="
testmo_api GET "projects/${PROJECT_ID}/milestones" | jq -r '
    .result[] | "\(.id)\t\(.name)\tstatus=\(.status)\tprogress=\(.progress // 0)%"
' | column -t | head -10
```
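List endpoints cap the rows returned per call, so large projects need paging. A sketch for walking all pages, assuming the API accepts a `page` query parameter and returns rows in `.result` (verify against your Testmo version); the fetcher is passed in as a command name so the loop stays independent of curl:

```shell
#!/bin/bash
# Fetch every page from a paginated list endpoint by incrementing the
# page number until an empty .result array comes back.
fetch_all_pages() {
  local fetcher="$1" page=1 body count
  while :; do
    body=$("$fetcher" "$page")
    count=$(printf '%s' "$body" | jq '.result | length')
    [ "$count" -eq 0 ] && break
    printf '%s' "$body" | jq -c '.result[]'
    page=$((page + 1))
  done
}
```

Pair it with a thin wrapper, e.g. `suites_page() { testmo_api GET "projects/${PROJECT_ID}/suites?page=$1"; }`, then `fetch_all_pages suites_page`.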
```bash
#!/bin/bash
# Phase 2 analysis: suite statistics, run details, and exploratory sessions.
PROJECT_ID="${1:?Project ID required}"
RUN_ID="${2:-}"

echo "=== Test Case Statistics ==="
testmo_api GET "projects/${PROJECT_ID}/suites" | jq '{
    total_suites: (.result | length),
    total_cases: [.result[].case_count // 0] | add
}'
echo ""

echo "=== Automation Run Details ==="
if [ -n "$RUN_ID" ]; then
    testmo_api GET "projects/${PROJECT_ID}/automation/runs/${RUN_ID}" | jq '{
        name: .result.name,
        status: .result.status_text,
        passed: .result.passed_count,
        failed: .result.failed_count,
        elapsed: .result.elapsed,
        source: .result.source
    }'
    echo ""
    echo "=== Failed Tests ==="
    testmo_api GET "projects/${PROJECT_ID}/automation/runs/${RUN_ID}/tests?status=failed&limit=20" | jq -r '
        .result[] | "\(.name)\t\(.status_text)\t\(.elapsed // 0)s"
    ' | column -t | head -15
fi
echo ""

echo "=== Exploratory Sessions ==="
testmo_api GET "projects/${PROJECT_ID}/sessions?limit=5" | jq -r '
    .result[] | "\(.id)\t\(.name)\tissues=\(.issue_count // 0)\tnotes=\(.note_count // 0)"
' | column -t
```
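A run's passed/failed counts can be reduced to a single pass-rate figure with a small jq filter. The sample payload below only mirrors the count fields used above, not full live API output; in practice pipe the run response from `testmo_api`:

```shell
#!/bin/bash
# Compute an integer pass-rate percentage from a run's counts.
pass_rate() {
  jq -r '.result
    | (.passed_count + .failed_count) as $total
    | if $total > 0 then (100 * .passed_count / $total | floor) else 0 end'
}

# Sample payload mirroring the fields used above (not live API output).
echo '{"result":{"passed_count":90,"failed_count":10}}' | pass_rate  # → 90
```

The `$total > 0` branch avoids dividing by zero on runs that have not executed any tests yet.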
Present results as a structured report:

```
Managing Testmo Report
══════════════════════
Resources discovered: [count]

Resource    Status      Key Metric    Issues
──────────────────────────────────────────────
[name]      [ok/warn]   [value]       [findings]

Summary: [total] resources | [ok] healthy | [warn] warnings | [crit] critical
Action Items: [list of prioritized findings]
```

Target ≤50 lines of output. Use tables for multi-resource comparisons.
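The fixed part of the report skeleton can be emitted from a small helper; the function name and layout here are illustrative:

```shell
#!/bin/bash
# Print the fixed report header with a resource count filled in.
print_report_header() {
  printf 'Managing Testmo Report\n'
  printf '══════════════════════\n'
  printf 'Resources discovered: %s\n' "$1"
}
```

Each analysis script can then append its `column -t` tables and the summary line beneath this header.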
| Shortcut | Counter | Why |
|---|---|---|
| "I'll skip discovery and check known resources" | Always run Phase 1 discovery first | Resource names change, new resources appear — assumed names cause errors |
| "The user only asked for a quick check" | Follow the full discovery → analysis flow | Quick checks miss critical issues; structured analysis catches silent failures |
| "Default configuration is probably fine" | Audit configuration explicitly | Defaults often leave logging, security, and optimization features disabled |
| "Metrics aren't needed for this" | Always check relevant metrics when available | API/CLI responses show current state; metrics reveal trends and intermittent issues |
| "I don't have access to that" | Try the command and report the actual error | Assumed permission failures prevent useful investigation; actual errors are informative |