Generate a full Agent Academy feedback report — extracting feedback from Excel files and GitHub issues, analyzing sentiment, generating charts, and producing a single styled PDF with a cover page, management summary, and detailed analysis. Use this skill when the user asks to generate an Agent Academy report, create a feedback analysis, build a course completion report, or wants to analyze Agent Academy survey data. Also triggers when the user mentions Agent Academy feedback, course grades, sentiment analysis of Agent Academy data, or exporting Agent Academy results to PDF.
Generate a comprehensive feedback analysis report for Agent Academy. The report combines data from Excel survey exports and GitHub issues, analyzes sentiment, generates charts, and produces a single styled PDF.
The workspace folder should be organized as follows:
<workspace>/report/
├── data/ ← Excel files (.xlsx) with survey responses
├── badges/ ← Badge PNG images (downloaded from GitHub)
├── charts/ ← Generated chart PNG images (output)
├── markdown/ ← Generated markdown files (output)
└── pdf/ ← Generated PDF files (output)
If the user provides a workspace path, use it. Otherwise, ask for it.
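The output folders can be created up front if they are missing. A minimal sketch (the `ensure_report_layout` helper is illustrative, not part of the bundled scripts; `data/` and `badges/` hold inputs that are populated separately):

```python
from pathlib import Path

def ensure_report_layout(workspace: str) -> Path:
    """Create the report folder structure under <workspace>/report.

    data/ and badges/ hold inputs; charts/, markdown/ and pdf/ are
    outputs. All are created empty if missing; existing folders and
    their contents are left untouched.
    """
    report = Path(workspace) / "report"
    for sub in ("data", "badges", "charts", "markdown", "pdf"):
        (report / sub).mkdir(parents=True, exist_ok=True)
    return report
```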
The report generation has 6 phases. Run the bundled Python scripts in order; each script is self-contained and reads from and writes to the folder structure above.
Run scripts/extract_feedback.py:
python3 <skill-path>/scripts/extract_feedback.py "<workspace-path>/report"
This script:
This script:
- Reads the .xlsx files from <workspace>/report/data/
- Derives respondent names from email addresses: [email protected] → "John Doe" (split on dots/underscores, title-cased)
- Defaults the name to "Anonymous" if neither a name nor email is found
- Writes the extracted feedback to <workspace>/report/data/_extracted.json

After running the extract script, fetch feedback from GitHub issues. Do not use mcp_github_mcp_search_issues — the search API caps results at 1,000, which is insufficient for Recruit (1,390+ issues). Instead, use mcp_github_mcp_list_issues, which supports full GraphQL cursor pagination.
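The paginate-until-empty pattern behind that tool can be sketched generically. This is a minimal illustration, not the MCP tool's real interface: `fetch_page` is a hypothetical callable standing in for one list-issues request per page.

```python
def fetch_all_issues(fetch_page):
    """Collect every issue by requesting successive pages until an
    empty batch comes back, avoiding the search API's 1,000-result cap.

    fetch_page(page) -> list of issue dicts, e.g. a wrapper around a
    list-issues call with 100 results per page (hypothetical adapter).
    """
    issues = []
    page = 1
    while True:
        batch = fetch_page(page)
        if not batch:
            break
        issues.extend(batch)
        page += 1
    return issues
```

At 100 issues per page, covering Recruit's 1,390+ issues takes roughly 14 sequential page requests.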
Recruit issues: