Compose MIDI arrangements and convert between a JSON spec and MIDI files. Two capabilities: 'compose' uses an LLM (via the /prompt-lab prompt) to create a full arrangement from annotated lyrics, reference MIDI fragments, and heart tags; 'from-spec' and 'to-spec' are mechanical JSON↔MIDI conversions via pretty_midi. The arrangement is the creative heart of the music pipeline — it decides what every instrument plays at every beat.
STOP. READ THIS ENTIRE SKILL.MD BEFORE CALLING ANY ENDPOINT.
Two capabilities in one skill:
compose) LLM creates a full multi-track arrangement from annotated lyrics, reference MIDI fragments, arrangement notes, and heart tags:
```bash
./run.sh compose \
  --lyrics annotated-lyrics.json \
  --references consume_midi_results.json \
  --arrangement arrangement_notes.md \
  --heart "anger,sadness,fear" \
  --out piano-roll-spec.json
```
The composition prompt is managed by /prompt-lab; it needs to understand the annotated lyrics, the reference MIDI fragments, and the heart tags that drive the arrangement.
Output: piano-roll-spec.json with notes for vocal, bass, drums, keys, guitar, synth.
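The exact schema of piano-roll-spec.json is not documented here; as a rough sketch, a minimal spec might look like the dict below. The field names (`tempo`, `tracks`, `program`, `is_drum`, `pitch`, `start`, `end`) are illustrative assumptions, not the skill's confirmed schema.

```python
import json

# Hypothetical shape of piano-roll-spec.json -- field names are
# illustrative assumptions, not the skill's confirmed schema.
spec = {
    "tempo": 120,
    "tracks": [
        {
            "name": "bass",
            "program": 33,  # General MIDI program number
            "is_drum": False,
            "notes": [
                # pitch = MIDI note number; times in seconds
                {"pitch": 40, "velocity": 96, "start": 0.0, "end": 0.5},
                {"pitch": 43, "velocity": 90, "start": 0.5, "end": 1.0},
            ],
        },
        {
            "name": "drums",
            "program": 0,
            "is_drum": True,
            "notes": [
                {"pitch": 36, "velocity": 110, "start": 0.0, "end": 0.1},
            ],
        },
    ],
}

# The spec must round-trip cleanly through JSON, since it is the
# on-disk interchange format between compose and from-spec.
serialized = json.dumps(spec, indent=2)
```

A real spec would carry one track per instrument listed above (vocal, bass, drums, keys, guitar, synth).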
from-spec, to-spec) Mechanical JSON↔MIDI conversion via pretty_midi. No LLM involved.
```bash
# JSON → MIDI (compile the arrangement to playable file)
./run.sh from-spec --spec piano-roll-spec.json --out arrangement.mid

# MIDI → JSON (import existing MIDI for analysis or editing)
./run.sh to-spec --midi input.mid --out spec.json
```
The MIDI file has separate tracks per instrument. This is the song's intermediate representation — between creative decisions and audio rendering.
midi_utils.py was originally in /create-music. It moved here because arrangement composition is a distinct creative step, not a sub-feature of audio generation.