Use when creating Android UI automation workflows, recording tap sequences, or building replayable device scripts. Triggers on tasks involving Android device interaction, accessibility selectors, or workflow-script.json files. Do NOT use for pure canvas games, OpenGL rendering, or custom drawing surfaces without standard UI elements.
Core insight: Authoring is observable. Runtime is blind.
During authoring, you can see the device, probe selectors, and retry. At runtime, the child skill can neither see the screen nor improvise. Every decision you make must compensate for this asymmetry.
| Situation | Mode | Start |
|---|---|---|
| No workspace | New | Phase 0 |
| Priority | Objective | Meaning |
|---|---|---|
| 1 | Stable generation | Mother skill reliably produces a verifiable child skill |
| 2 | Execution success | Child skill runs correctly on the target device |
| 3 | Generality | System handles a wide range of task types |
When rules conflict, higher priority wins.
Loading discipline: load references only when entering the phase that needs them.
| Phase | Type | What | Gate | Reference |
|---|---|---|---|---|
| 0 | Gate | Collect inputs | All inputs confirmed | admission-guide.md |
| 1 | Gate | Init workspace | init succeeds | — |
| 2 | Work | Record steps | All walks accepted + last step verified | recording-guide.md |
| 3 | Work | Compile + review | quality-gate ≥ 90 | compilation-contract.md + compilation-checklist.md |
| 4 | Gate | Package + deliver | Child skill package complete | child-skill-guide.md |
Additional references (load on demand):
Three recording rules (walk-is-evidence, decide-before-walk, recover-by-walk) and selector discipline govern all phases. See recording-guide.md § Three Rules.
STOP and ask the user when:
Set both before running any command:

- $SKILL_DIR = this skill's install location (contains scripts/, references/, agents/). When loaded by an AI agent, resolve it from the skill's own file path. Example: if SKILL.md is at ~/.claude/skills/workflow-skill-creator/SKILL.md, then $SKILL_DIR=~/.claude/skills/workflow-skill-creator.
- $DIR = the workspace root created by init (e.g., ./my-skill). All commands treat --dir $DIR as the workspace root and access $DIR/authoring/ internally; never pass the authoring/ subdirectory as --dir.

All CLI commands use node $SKILL_DIR/scripts/skill_cli.js or node $SKILL_DIR/scripts/device_cli.js. Full command reference: tools.md.
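As a sketch, resolving both variables in a shell session might look like this; the SKILL.md path shown is the example install location from above, not a guaranteed path:

```shell
# Sketch only: derive $SKILL_DIR from the skill's own SKILL.md path,
# then pick a workspace root for --dir. Paths are illustrative.
SKILL_MD="$HOME/.claude/skills/workflow-skill-creator/SKILL.md"
SKILL_DIR="$(dirname "$SKILL_MD")"   # install location with scripts/, references/, agents/
DIR="./my-skill"                     # workspace root that init will create
echo "SKILL_DIR=$SKILL_DIR"
echo "DIR=$DIR"                      # pass $DIR, never $DIR/authoring, as --dir
```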
User provides task + device IP. Agent resolves everything else. Full guide: admission-guide.md.
Do NOT proceed to Phase 1 until all inputs confirmed.
node $SKILL_DIR/scripts/skill_cli.js init --dir $DIR --name <skill-name> --task "<task>" --base-url $BASE_URL --app <pkg> --app-name "<display-name>"
All parameters come from Phase 0 resolution. init resets the device to the home screen, grants all permissions to the target app, and captures an initial checkpoint. The first recorded step validates against this checkpoint.
If the task starts from inside the app, the first recorded step should be launch with the target package — do NOT navigate manually before recording.
Full rules: recording-guide.md.
init → strategy → status (gives first walk command) → [ decide → walk → confirm ] × N
Strategy first: Before the first walk, analyze the task — classify each target (position vs identity vs dynamic), plan verify approach (static vs structural), determine shortest path. See recording-guide.md § Strategy.
Walk is the only device interaction during recording. All operations go through walk shortcuts: --launch, --selector, --scroll, --key. Mistakes are cheap: walk --delete-step N. Each walk returns a diff with recommended_verify and a ready-to-run next command.
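One decide → walk → confirm cycle can be sketched as follows; the package name and selector are made-up examples, and the commands are echoed rather than executed here because walk needs a live device:

```shell
# Illustrative only: composing walk invocations for one recording cycle.
# com.example.app and text=Settings are placeholder values.
DIR="./my-skill"
WALK="node \$SKILL_DIR/scripts/skill_cli.js walk --dir $DIR"
echo "$WALK --launch com.example.app"     # step 1: launch the target app
echo "$WALK --selector 'text=Settings'"   # step 2: tap an element by identity
echo "$WALK --delete-step 2"              # mistakes are cheap: delete and re-walk
```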
Do NOT proceed to Phase 3 until all walks accepted and last step verified.
Dispatching sub-agents: use your platform's subagent/tool capability (e.g., Claude Code Agent tool, Codex task). Pass:

- Agent definition: agents/quality-reviewer.md
- Prompt: "Review child skill artifacts in workspace: $DIR"
- Expected output: $DIR/authoring/quality-review.json

If your platform does not support subagents, follow the .md file yourself (self-review fallback).
compile-view → compile-plan → compile-write → quality-review → quality-gate
compile-write is fully automated — generates the complete child skill package and validates it. compile-plan is a dry-run preview. The AI does not write workflow JSON manually. Details: compilation-contract.md.
quality-review is a mandatory pipeline stage. Its output is $DIR/authoring/quality-review.json; in the self-review fallback, add "self_reviewed": true at the JSON root level.

quality-gate expects $DIR/authoring/quality-review.json with overall_score at the JSON root level:
{"verdict":"PASS","overall_score":95,"layers":{...},"fix_suggestions":[...]}
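The threshold check can be sketched as follows. This is an assumption about what quality-gate does (read overall_score at the root, pass at 90 or above), not its actual implementation, and the sed extraction is for illustration only:

```shell
# Sketch: what passing quality-gate amounts to (assumed threshold: 90).
cat > quality-review.json <<'EOF'
{"verdict":"PASS","overall_score":95,"layers":{},"fix_suggestions":[]}
EOF
SCORE=$(sed -n 's/.*"overall_score":\([0-9]*\).*/\1/p' quality-review.json)
if [ "$SCORE" -ge 90 ]; then
  echo "gate: PASS"          # proceed to Phase 4
else
  echo "gate: repair loop"   # enter the Evidence Repair Loop
fi
```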
Score < 90 → Evidence Repair Loop (no device needed):

1. Read fix_suggestions from quality-review.json.
2. Inspect the evidence dumps (authoring/dumps/step{N}_after.dump.txt).
3. Apply targeted repairs: repair --step N --selector to fix a selector, repair --step N --verify to fix a verify, or repair --delete-step N to drop a step.
4. Re-run compile-write → quality-review → quality-gate.

Repair command: repair --dir $DIR --step N --selector '...' / --verify '...' / --delete-step N. It modifies session.json without the device. See tools.md.
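Illustrative repair invocations, using only the flags listed above; the step numbers and the selector/verify values are hypothetical, and the commands are echoed rather than executed since repair needs a workspace:

```shell
# Sketch only: example repair calls against an assumed workspace ./my-skill.
REPAIR="node \$SKILL_DIR/scripts/skill_cli.js repair --dir ./my-skill"
echo "$REPAIR --step 3 --selector 'text=Submit'"   # replace a brittle selector
echo "$REPAIR --step 3 --verify 'text=Done'"       # fix the step's verify probe
echo "$REPAIR --delete-step 4"                     # drop a step entirely
```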
Generate child skill package per child-skill-guide.md.
The package includes SKILL.md, assets/workflow-script.json, assets/business-spec.json, scripts/run.js, and agents/openai.yaml. The user is responsible for running and verifying the child skill on their device.
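As a sketch, the package layout implied above (directory placement is an assumption; child-skill-guide.md is authoritative):

```
my-skill/
├── SKILL.md
├── assets/
│   ├── workflow-script.json
│   └── business-spec.json
├── scripts/
│   └── run.js
└── agents/
    └── openai.yaml
```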