Build, update, modernize, and QA technical training courses (slides, labs, code, devcontainer setup). Use whenever the user mentions training, course, labs, slides, workshop, hands-on exercises, lab exercises, course materials, training deck, student guide, or updating/creating educational technical content. Also trigger on references to labs.md, course repo structure (images/, extra/, skeleton code), or generating requirements.txt, devcontainer configs, setup scripts. Covers: creating new labs from scratch, updating existing labs/slides for newer library versions, QA review of training materials, generating supporting files, and full course repo creation. For TechUpSkills / Brent Laster AI/ML training courses.
You are helping Brent Laster (TechUpSkills / Tech Skills Transformations) create, update, modernize, and QA technical training courses. Brent teaches enterprise AI/ML engineering workshops with hands-on labs run in VS Code devcontainers or GitHub Codespaces.
Determine the task type. Read the user's request and figure out which workflow applies:
Read the relevant reference files in this skill's references/ directory:
- references/lab-format.md - Lab writing conventions and format rules
- references/repo-structure.md - Standard course repository layout
- references/qa-checklist.md - QA review checklist for course materials
- references/slide-conventions.md - Slide creation and formatting guidelines

If working with an existing course, read its labs.md, README.md, and browse the repo structure first to understand the current state before making changes.
If creating or updating slides, also read the pptx skill (/sessions/keen-determined-keller/mnt/.skills/skills/pptx/SKILL.md)
since you'll need its techniques for PowerPoint manipulation.
Every course follows this standard layout. When creating a new course, generate all of these components. When updating, preserve this structure.
[course-name]/
├── .devcontainer/
│ └── devcontainer.json # Dev container config (VS Code + Docker)
├── .github/
│ └── copilot-instructions.md # AI assistant instructions for students
├── images/ # Screenshots referenced in labs.md
├── extra/ # Completed code versions for diff-merge labs
├── scripts/
│ ├── pysetup.sh # Python environment setup
│ ├── startup_ollama.sh # Service startup (if needed)
│ └── startOllama.sh # Service re-attach
├── [topic-dirs]/ # Code organized by topic (e.g., llm/, rag/, neo4j/)
│ └── *.py # Lab code files
├── tools/ # Utility scripts (indexers, search tools)
├── labs.md # THE main lab document students follow
├── README.md # Setup instructions + prerequisites
├── README-Codespace.md # Alternative codespace setup (if applicable)
├── requirements.txt # Python dependencies
├── LICENSE # License file
└── .gitignore # Standard Python + IDE ignores
See references/repo-structure.md for full details on each component.
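If it helps to automate repo creation, the fixed parts of the tree above can be scaffolded with a short script. This is a minimal sketch: topic directories and file contents are course-specific, so only the standard directories and empty top-level files are created.

```python
from pathlib import Path

# Standard course-repo skeleton (per the layout above); topic dirs vary per course.
DIRS = [".devcontainer", ".github", "images", "extra", "scripts", "tools"]
FILES = ["labs.md", "README.md", "requirements.txt", "LICENSE", ".gitignore"]

def scaffold(course_root: str) -> Path:
    """Create the standard course layout under course_root and return its Path."""
    root = Path(course_root)
    for d in DIRS:
        (root / d).mkdir(parents=True, exist_ok=True)
    for f in FILES:
        (root / f).touch(exist_ok=True)
    return root
```

Populate devcontainer.json, scripts, and labs.md afterward; the scaffold only guarantees the structure is consistent across courses.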
Labs are the heart of the training. They live in a single labs.md file that students follow step-by-step.
Students work through skeleton code files, with completed versions in extra/ that they can diff-merge or reference.

Read references/lab-format.md for the complete format specification. Key points:

- Use <br><br> between steps for spacing
- End each lab with <p align="center">**[END OF LAB]**</p>
- Reference screenshots with relative paths (e.g., ./images/ae-new-1.png)

After writing or updating labs.md, perform this self-verification to catch structural issues. This methodology comes from live automated testing where missed steps caused cascading failures.
Step 1: Extract every command from labs.md:
grep -n '^\(```\|code \|python \|cd \|pip \|npm \|cat \|cp \|kill \|curl \)' labs.md
Also search specifically for code -d merge commands — these are the most commonly missed:
grep -n 'code -d' labs.md
Step 2: Map every command to its lab and step. Build a structured execution plan:
Lab 2:
  Step 1 (line 190): cd agents                                        [CD]
  Step 2 (line 205): code supervisor_budget_agent.py                  [VIEW]
  Step 3 (line 211): code -d ../extra/supervisor_budget_agent.txt ... [MERGE]
  Step 4 (line 229): python supervisor_budget_agent.py                [RUN]
Step 3: Classify each step by type:

- [MERGE] — code -d commands that copy complete code over skeleton files
- [RUN] — python, node, bash, etc. commands that execute programs
- [VIEW] — code <file> commands (just viewing/verifying a file exists)
- [CD] — directory changes
- [SETUP] — pip install, npm install, environment setup
- [INPUT] — steps where students type into an interactive prompt
- [INFO] — discussion/observation steps, no action needed
- [OPTIONAL] — steps marked optional

Step 4: Cross-reference dependencies:

- Verify every [RUN] step has its prerequisite [MERGE] step listed before it. This is the single most common lab failure: a student (or tester) runs a skeleton file without first merging in the complete code. Never place a [RUN] step between a [VIEW] step and its [MERGE] step.
- Verify each cd leaves students in the right directory for subsequent commands
- Flag any command that references a directory the student hasn't cd'd into yet

Step 5: Check timing and pacing:
Slides accompany the labs and provide the conceptual framework. Use the pptx skill for the actual
PowerPoint file manipulation.
See references/slide-conventions.md for detailed formatting guidelines.
DO NOT use python-pptx to create new slides. The prs.slide_layouts[N] API often selects
the wrong slide master, causing new slides to have completely different backgrounds, colors, and
fonts from existing content slides. Decks with multiple slide masters (common when imported from
Google Slides) are particularly prone to this.
Instead, use the add_slide.py duplicate-and-edit workflow:

- Run python scripts/add_slide.py <deck_unpacked>/ <source_slide> <new_slide>
- The script duplicates an existing slide's XML and relationships (preserving references like rId3)
- It registers the new slide in presentation.xml at the correct position

python-pptx IS safe for edits to existing slides (text and content changes). python-pptx is NOT safe for creating new slides; use add_slide.py instead.

Background image awareness: Many decks use full-slide background PNGs referenced as rId3.
When editing slide XML, always preserve the background <p:pic> element and place content shapes
on top of it.
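After hand-editing slide XML, it is easy to drop the background picture by accident. Here is a hedged sanity-check sketch using only the standard library; it verifies nothing beyond the presence of a <p:pic> element in the slide XML:

```python
import xml.etree.ElementTree as ET

# PresentationML namespace used by <p:pic> elements in slide XML.
P_NS = "http://schemas.openxmlformats.org/presentationml/2006/main"

def has_background_pic(slide_xml_path: str) -> bool:
    """Return True if the slide XML still contains at least one <p:pic> element."""
    tree = ET.parse(slide_xml_path)
    return tree.getroot().find(f".//{{{P_NS}}}pic") is not None
```

Run it against each edited slideN.xml before repacking the deck; a False result on a slide that previously had a background image means the <p:pic> element was lost.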
When updating a course for newer library/API versions:

- Review requirements.txt and identify which packages need version bumps.
- Check for breaking changes in each updated package.
- Refresh dates and version strings: © 2025 → © 2026, version dates, "Workshop - Month Year".

When modifying existing materials, maintain a record of what changed so the instructor can review and revert if needed:

- labs.md: add a ## Revision [X.Y] - [MM/DD/YY] header line with the new version and date
- Code files: add a comment such as # Updated [DATE]: Migrated from LangChain 0.1.x to 0.3.x API
- Slides: annotate with [Update - YYYY-MM-DD] Changed: <short description of what was fixed>
- Slide backups: [BACKUP - Update YYYY-MM-DD] Original version of slide [N]: [slide title]

When reviewing existing materials, use the checklist in references/qa-checklist.md. Key areas:
Standard pattern for AI/ML training courses:
{
"image": "mcr.microsoft.com/devcontainers/base:bookworm",
"remoteEnv": {
"OLLAMA_MODEL": "llama3.2:3b"
},
"hostRequirements": {
"cpus": 4,
"memory": "16gb",
"storage": "32gb"
},
"features": {
"ghcr.io/devcontainers/features/docker-from-docker:1": {},
"ghcr.io/devcontainers/features/github-cli:1": {},
"ghcr.io/devcontainers/features/python:1": {}
},
"customizations": {
"vscode": {
"settings": {
"python.terminal.activateEnvInCurrentTerminal": true,
"python.defaultInterpreterPath": ".venv/bin/python",
"github.copilot.enable": { "*": false },
"github.copilot.enableAutoComplete": false,
"editor.inlineSuggest.enabled": false,
"workbench.startupEditor": "readme",
"workbench.editorAssociations": { "*.md": "vscode.markdown.preview.editor" },
"terminal.integrated.defaultProfile.linux": "bash",
"terminal.integrated.profiles.linux": {
"bash": { "path": "bash", "args": ["-l"] }
}
},
"extensions": ["mathematic.vscode-pdf", "vstirbu.vscode-mermaid-preview"]
}
},
"postCreateCommand": "bash -i scripts/pysetup.sh py_env && bash -i scripts/startup_ollama.sh",
"postAttachCommand": "bash scripts/startOllama.sh"
}
Adjust remoteEnv, features, postCreateCommand, and extensions based on course needs.
Copilot is intentionally disabled in training environments so students learn hands-on.
Standard Python environment setup script:
#!/usr/bin/env bash
PYTHON_ENV=$1
python3 -m venv ./$PYTHON_ENV \
&& export PATH=./$PYTHON_ENV/bin:$PATH \
&& grep -qxF "source $(pwd)/$PYTHON_ENV/bin/activate" ~/.bashrc \
|| echo "source $(pwd)/$PYTHON_ENV/bin/activate" >> ~/.bashrc
source ./$PYTHON_ENV/bin/activate
if [ -f "./requirements.txt" ]; then
  pip3 install -r "./requirements.txt"
elif [ -f "./requirements/requirements.txt" ]; then
  pip3 install -r "./requirements/requirements.txt"
fi
Group dependencies by purpose with comments explaining each group. Pin minimum versions (>=) rather than exact versions for flexibility.
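For example, a grouped requirements.txt might look like the fragment below. The package names and version floors are illustrative only, not a prescribed stack:

```
# --- LLM orchestration ---
langchain>=0.3.0        # chains, prompts, agent plumbing

# --- Local model runtime ---
ollama>=0.3.0           # client for the local Ollama service

# --- RAG / vector search ---
chromadb>=0.5.0         # vector store used in the RAG labs
```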
When creating skeleton + completed code pairs:

- Put the skeleton in its topic directory (e.g., rag/lab10.py)
- Put the completed version in extra/ (e.g., extra/lab10_eval_complete.txt)
- Use a .txt extension for completed versions so they don't execute accidentally
- Have students use code -d to open a diff view between the two files

Skeleton File Rules (from live testing lessons):
These rules prevent the most common lab failures found during automated QA testing:
Skeleton files MUST be syntactically valid Python — they should parse without syntax errors
even if they can't run successfully. No bare indentation errors, unclosed brackets, or missing
colons. Students who accidentally try to run the skeleton before merging should get a clean
runtime error (like a NotImplementedError or missing function), not a confusing SyntaxError.
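Syntactic validity can be verified mechanically before publishing. A minimal sketch using the standard library's ast module:

```python
import ast
from pathlib import Path

def skeleton_parses(path: str) -> bool:
    """Return True if the skeleton file is syntactically valid Python."""
    try:
        ast.parse(Path(path).read_text(), filename=path)
        return True
    except SyntaxError:
        return False
```

Running this over every skeleton in the repo catches unclosed brackets and missing colons before a student ever hits them.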
Use clear # TODO or placeholder comments where code will be merged in. This makes the
diff view obvious and helps students understand what needs to change.
Never place a "run this file" step between a "view" step and its merge step. The correct order is always: view skeleton → diff/merge → run. If students are asked to run a file, the merge MUST have happened first. This is the single most common mistake in lab authoring.
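This ordering rule can be checked mechanically from the execution plan built during self-verification. A hedged sketch, assuming the plan is a list of (step_type, filename) pairs and that every runnable file has a skeleton (files with no skeleton would need to be excluded first):

```python
def check_merge_before_run(steps):
    """steps: list of (step_type, filename) tuples in lab order.
    Returns a list of error strings for any RUN whose MERGE has not
    appeared earlier in the plan."""
    merged = set()
    errors = []
    for i, (kind, fname) in enumerate(steps, start=1):
        if kind == "MERGE":
            merged.add(fname)
        elif kind == "RUN" and fname not in merged:
            errors.append(f"step {i}: RUN {fname} before its MERGE")
    return errors
```

An empty result means every run step is preceded by its merge; anything else is exactly the cascading-failure pattern seen in live testing.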
The diff between skeleton and complete should be clean — only the intentional gaps should differ. No unrelated whitespace changes, import reordering, or comment differences. A clean diff makes the merge obvious for students.
Explicitly tell students the file won't run yet if a skeleton is shown before its merge step. A simple note like "Note: this file is incomplete — we'll merge in the working code in the next step" prevents confusion.
Test the diff by running diff skeleton.py ../extra/complete.txt to verify it shows only
the intended changes. Unexpected differences confuse students during the merge process.
Some labs require multiple concurrent processes (e.g., a server in one terminal and a client in another, or an auth server + application server + client). These labs need careful structuring to avoid confusion. Guidance from live testing:

Terminal management:

- Tell students explicitly when to open a new terminal (the + button in the VS Code terminal panel, or the Ctrl+Shift+` shortcut)
- Remind students to cd to the correct directory — new terminals open at the repo root, not where they left off

Server lifecycle:

- Tell students what successful startup looks like (e.g., "Running on http://127.0.0.1:5000")
- Tell students when and how to stop each server (Ctrl+C)

Background execution alternative:
For simpler cases where students don't need to see server output, consider using background
execution with &:
python auth_server.py &
python secure_server.py &
sleep 2 # wait for servers to start
python client.py
Cleanup between labs:
kill %1 %2 # kills background jobs by job number
kill $(lsof -t -i:5000) # kills process on specific port
Interactive programs:

- If a program waits for user input (an input() prompt), tell students to use Ctrl+C to exit, not "quit" or "exit" — many programs will try to process those as valid input

Include a .github/copilot-instructions.md with the "Explain-this-app" template that helps students understand code files through a structured explanation format (what it does, high-level flow, key building blocks, data flow, safe experiments, debug checklist).
The README should include:
When creating a new course or performing a major update, generate an anticipated-qa.md document
to help the instructor prepare for likely student questions. This is especially valuable for
AI/ML courses where the landscape changes rapidly.
# Anticipated Q&A: [Course Title]
**Generated**: [date]
**Course version**: [version]
## Section: [Section Name] (Labs X-Y)
**Q: [Question]?**
A: [Answer]
## General / Cross-Cutting Questions
**Q: [Question]?**
A: [Answer]
Aim for 25-40 questions total. Weight toward conceptually dense sections and newer/evolving topics. Include timing estimates per lab and suggested break points as an appendix.
All materials should include:
<p align="center">
<b>For educational use only by the attendees of our workshops.</b>
</p>
<p align="center">
<b>(c) [YEAR] Tech Skills Transformations and Brent C. Laster. All rights reserved.</b>
</p>