Generate the final technical assessment report as Excel (.xlsx) and Word (.docx) deliverables. The files are produced server-side on the VM, registered in the database, attached to the assessment, and exposed via the Reports panel on the assessment detail page.
First, call get_assessment_context(assessment_id) with the assessment ID from $ARGUMENTS to confirm the assessment exists and to capture the target app and pipeline stage. Sanity-check the pipeline stage: report generation should run after recommendations. Then call:
generate_assessment_report(assessment_id=<id>, formats=["xlsx", "docx"])
Optional: pass generated_by to record what triggered it (e.g., "report skill via Claude Desktop").
The tool builds both files server-side, persists rows, and returns a list with download URLs. Everything runs on the VM.
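As a minimal sketch of handling that return value (the JSON field names below are assumptions, not the tool's documented schema):

```shell
# Hypothetical shape of the tool's response -- field names are illustrative.
RESPONSE='[{"format": "xlsx", "download_url": "/api/reports/101/download"}, {"format": "docx", "download_url": "/api/reports/102/download"}]'

# Pull out the download URLs with plain grep/cut (prefer jq when available).
echo "$RESPONSE" | grep -o '"download_url": "[^"]*"' | cut -d'"' -f4
```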
The tool creates two AssessmentReport records: an Excel workbook (assessment_{number}_{timestamp}.xlsx) and a Word document (assessment_{number}_{timestamp}.docx). Tell the user:
✅ Reports generated. View them on the assessment page: https://136-112-232-229.nip.io/assessments/{assessment_id}
Or download directly:
(Substitute the actual IDs from the tool response.)
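A minimal sketch of that substitution, assuming the ID comes back in the tool response (the value below is illustrative):

```shell
# Take the real ID from the tool response; 42 is a placeholder.
ASSESSMENT_ID=42
ASSESSMENT_URL="https://136-112-232-229.nip.io/assessments/${ASSESSMENT_ID}"
echo "Reports generated. View them on the assessment page: ${ASSESSMENT_URL}"
```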
Calling generate_assessment_report again creates new report files with a fresh timestamp. The old reports are kept (visible in the Reports panel as history) until manually cleaned up. This is intentional — you can always go back to a prior version.
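To illustrate why each run is distinct (the exact timestamp format used server-side is an assumption; the point is that each regeneration gets a unique, sortable filename):

```shell
# Illustrative only: a UTC timestamp makes every regenerated file unique.
NUMBER=17
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
echo "assessment_${NUMBER}_${TIMESTAMP}.xlsx"
```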
Reports are derived from the artifact/feature observations and recommendations already in the DB. If those need refinement, do that in the appropriate stage (observations / recommendations) and re-generate the report — don't try to "fix" the report by editing the file directly.
When the user is happy with the report, advance the pipeline to mark the assessment complete:
curl -s -X POST https://136-112-232-229.nip.io/api/assessments/${ASSESSMENT_ID}/advance-pipeline \
-H "Content-Type: application/json" \
-d '{"target_stage": "complete", "force": true}'
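A defensive variant of the same call, sketched as a function that surfaces the HTTP status instead of failing silently (the endpoint path and payload are taken from the command above; the wrapper itself is an assumption, not part of the API):

```shell
# Wrapper around the advance-pipeline POST; echoes the HTTP status code.
advance_pipeline() {
  curl -s -o /dev/null -w '%{http_code}' --max-time 10 -X POST \
    "$1/api/assessments/$2/advance-pipeline" \
    -H "Content-Type: application/json" \
    -d '{"target_stage": "complete", "force": true}'
}
```

Usage: `STATUS=$(advance_pipeline "https://136-112-232-229.nip.io" "$ASSESSMENT_ID")`, then treat anything other than a 200 as a failed advance and report it to the user.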