Tools, patterns, and utilities for generating professional music with realistic instrument sounds. Write custom compositions using music21 or learn from existing MIDI files.
IMPORTANT: This file is located at /mnt/skills/private/music-generation/SKILL.md
If you need to reference this skill again during your session, read that exact path directly. Do not explore directories or use find commands - just read the file path above.
This skill provides tools and patterns for music composition, not pre-baked solutions. You should use your intelligence and the music21 library to compose dynamically based on user requests.
Core Principle: Write custom code that composes music algorithmically rather than calling functions with hardcoded melodies.
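To make the core principle concrete, here is a minimal sketch of note-selection logic written algorithmically rather than as a hardcoded melody. The helper name `generate_phrase` and the random-walk scheme are illustrative choices, not part of the skill's scripts; in practice you would feed the resulting pitch names into music21 `note.Note` objects as shown later in this document.

```python
import random

# Illustrative sketch: derive a melody algorithmically from a scale
# rather than hardcoding every pitch.
C_MAJOR = ['C4', 'D4', 'E4', 'F4', 'G4', 'A4', 'B4', 'C5']

def generate_phrase(scale, length=8, seed=42):
    """Random walk over scale degrees, constrained to small steps."""
    rng = random.Random(seed)
    idx = 0
    phrase = []
    for _ in range(length):
        phrase.append(scale[idx])
        # Step up or down by 1-2 scale degrees, clamped to the scale
        idx = max(0, min(len(scale) - 1, idx + rng.choice([-2, -1, 1, 2])))
    return phrase

melody = generate_phrase(C_MAJOR)
print(melody)
```

Seeding the generator keeps the output reproducible, which makes it easy to iterate on the walk parameters while comparing renders.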
Run the automated installer for complete setup:
```bash
bash /mnt/skills/private/music-generation/install.sh
```
This installs all system dependencies, Python packages, and verifies the installation.
Note: The install script may display "error: externally-managed-environment" messages at the end. These are expected and can be safely ignored - the dependencies are already installed. If you see these messages, the installation was successful.
Alternatively, install dependencies manually:
System Dependencies:
```bash
apt-get update
apt-get install -y fluidsynth fluid-soundfont-gm fluid-soundfont-gs ffmpeg
```
Python Dependencies:
```bash
pip install -r /mnt/skills/private/music-generation/requirements.txt
```
The requirements.txt includes: music21, midi2audio, pydub, mido, numpy, scipy.
Traditional Pipeline (Orchestral/Acoustic):
- `/usr/share/sounds/sf2/FluidR3_GM.sf2` (141MB, General MIDI soundfont for orchestral/classical)
- `/usr/share/sounds/sf2/default.sf2` (symlink to best available)

Electronic Pipeline:
```python
from music21 import stream, note, chord, instrument, tempo, dynamics
from midi2audio import FluidSynth
from pydub import AudioSegment

# 1. Create score and parts
score = stream.Score()
violin_part = stream.Part()
violin_part.insert(0, instrument.Violin())
violin_part.insert(0, tempo.MetronomeMark(number=120))

# 2. Generate notes algorithmically
for measure in range(16):
    violin_part.append(note.Note('E5', quarterLength=1.0))
    violin_part.append(note.Note('G5', quarterLength=1.0))
    violin_part.append(note.Note('A5', quarterLength=2.0))

# 3. Export to MIDI
score.append(violin_part)
midi_path = '/mnt/user-data/outputs/composition.mid'
score.write('midi', fp=midi_path)

# 4. Render with FluidSynth
fs = FluidSynth('/usr/share/sounds/sf2/FluidR3_GM.sf2')
wav_path = '/mnt/user-data/outputs/composition.wav'
fs.midi_to_audio(midi_path, wav_path)

# 5. Convert to MP3
audio = AudioSegment.from_wav(wav_path)
mp3_path = '/mnt/user-data/outputs/composition.mp3'
audio.export(mp3_path, format='mp3', bitrate='192k')
```
- Output directory: `/mnt/user-data/outputs/`
- Instruments: `instrument.Violin()`, `instrument.Violoncello()`, `instrument.Piano()`, `instrument.Trumpet()`, etc.

CRITICAL: This skill supports TWO rendering pipelines. You MUST choose based on the musical genre:
Use when creating:
How to render:
```python
# After composing with music21 and exporting MIDI...
from midi2audio import FluidSynth
from pydub import AudioSegment

fs = FluidSynth('/usr/share/sounds/sf2/FluidR3_GM.sf2')
fs.midi_to_audio(midi_path, wav_path)
audio = AudioSegment.from_wav(wav_path)
audio.export(mp3_path, format='mp3', bitrate='192k')
```
Use when creating:
How to render:
```python
# After composing with music21, using mido for instruments, and exporting MIDI...
import subprocess

# Use the electronic rendering script
result = subprocess.run([
    'python',
    '/mnt/skills/private/music-generation/scripts/render_electronic.py',
    midi_path,
    mp3_path
], capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    print(f"Error: {result.stderr}")
```
Why this matters:
Drum Synthesis:
The electronic renderer uses real-time drum synthesis (no external samples needed). All drum sounds (kicks, snares, hi-hats, claps) are synthesized on-the-fly with genre-specific parameters.
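The actual synthesis code lives in `drum_synthesizer.py` (listed later in this document); as a flavor of what on-the-fly drum synthesis means, here is a hedged, self-contained sketch of one common kick-drum recipe: a sine oscillator whose pitch sweeps down quickly, shaped by an exponential amplitude envelope. All parameter names and values here are illustrative, not the renderer's actual ones.

```python
import numpy as np

def synthesize_kick(sample_rate=44100, duration=0.5,
                    start_freq=150.0, end_freq=50.0, decay=18.0):
    """Sketch of a synthesized kick: a sine whose pitch sweeps down
    exponentially, under an exponential amplitude envelope."""
    t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
    # Exponential pitch sweep from start_freq down toward end_freq
    freq = end_freq + (start_freq - end_freq) * np.exp(-t * 30.0)
    # Integrate instantaneous frequency to get oscillator phase
    phase = 2 * np.pi * np.cumsum(freq) / sample_rate
    envelope = np.exp(-t * decay)
    return np.sin(phase) * envelope

kick = synthesize_kick()
```

Genre presets would then vary parameters like the sweep range and decay rate to move between, say, a punchy house kick and a boomy 808-style one.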
Example: House Track
```python
# 1. Compose with music21 (same as always)
score = stream.Score()
drums = stream.Part()
bass = stream.Part()
pads = stream.Part()
# ... compose your music

# 2. Export MIDI
midi_path = '/mnt/user-data/outputs/deep_house.mid'
score.write('midi', fp=midi_path)

# 3. Fix instruments with mido (INSERT program_change messages)
from mido import MidiFile, Message

mid = MidiFile(midi_path)
for i, track in enumerate(mid.tracks):
    if i == 1:  # Drums
        for msg in track:
            if hasattr(msg, 'channel'):
                msg.channel = 9
    elif i == 2:  # Bass - INSERT program_change
        insert_pos = 0
        for j, msg in enumerate(track):
            if msg.type == 'track_name':
                insert_pos = j + 1
                break
        track.insert(insert_pos, Message('program_change', program=38, time=0))
mid.save(midi_path)

# 4. Render with the ELECTRONIC pipeline using the deep_house preset!
import subprocess
subprocess.run([
    'python',
    '/mnt/skills/private/music-generation/scripts/render_electronic.py',
    midi_path,
    '/mnt/user-data/outputs/deep_house.mp3',
    '--genre', 'deep_house'
])
```
The electronic renderer includes pre-tuned synthesis presets with supersaw lead synthesis for thick, professional EDM sounds:
Supersaw Synthesis (Swedish House Mafia / Progressive House Sound):
The electronic renderer now includes unison voice synthesis for fat, buzzy leads:
How It Works:
This replicates the classic supersaw sound from Swedish House Mafia, Avicii, and modern EDM productions.
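The renderer's own supersaw implementation is in `melodic_synthesizer.py`; as a hedged illustration of the unison technique itself, the sketch below sums several naive sawtooth oscillators, symmetrically detuned in cents around the target pitch. Voice count, detune amount, and the non-band-limited sawtooth are all simplifying assumptions for clarity.

```python
import numpy as np

def supersaw(freq=220.0, voices=7, detune_cents=12.0,
             duration=1.0, sample_rate=44100):
    """Sketch of unison 'supersaw' synthesis: several sawtooth
    oscillators, symmetrically detuned in cents, mixed together."""
    t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
    out = np.zeros_like(t)
    # Spread voices evenly from -detune_cents to +detune_cents
    offsets = np.linspace(-detune_cents, detune_cents, voices)
    for cents in offsets:
        f = freq * 2 ** (cents / 1200.0)  # cents -> frequency ratio
        # Naive (non-band-limited) sawtooth in [-1, 1)
        out += 2.0 * ((t * f) % 1.0) - 1.0
    return out / voices  # normalize so the mix stays in [-1, 1]

lead = supersaw()
```

The slight frequency mismatches between voices cause slowly shifting phase relationships, which is what produces the characteristic thick, buzzy "fat" quality.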
Each preset tunes:
The electronic renderer uses frequency-aware volume balancing to prevent any instrument from overpowering the mix:
How it works:
Why this matters:
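The renderer's actual balancing algorithm is not reproduced here; as a heavily hedged sketch of the general idea, one simple approach normalizes each stem's RMS level toward a shared target so that no single part dominates the mix. The function name `balance_stems` and the target value are hypothetical.

```python
import numpy as np

def balance_stems(stems, target_rms=0.1):
    """Illustrative sketch only -- scales each stem so its RMS level
    matches a shared target, preventing one part from dominating."""
    balanced = {}
    for name, audio in stems.items():
        rms = np.sqrt(np.mean(audio ** 2))
        gain = target_rms / rms if rms > 0 else 1.0
        balanced[name] = audio * gain
    return balanced

# Hypothetical stems: a loud bass and a quiet pad
t = np.linspace(0, 1, 44100, endpoint=False)
stems = {'bass': 0.9 * np.sin(2 * np.pi * 55 * t),
         'pad': 0.05 * np.sin(2 * np.pi * 440 * t)}
mix = balance_stems(stems)
```

A frequency-aware version would additionally weight the gain by where each instrument's energy sits in the spectrum, since low and high frequencies are perceived at different loudness for the same RMS.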
To see all available presets:
```bash
python /mnt/skills/private/music-generation/scripts/render_electronic.py --list-genres
```
For advanced control, you can create custom preset JSON files:
```json
{
  "drums": {
    "kick": {"pitch": 52.0, "decay": 0.6, "punch": 0.9},
    "snare": {"tone_mix": 0.25, "snap": 0.8}
  },
  "bass": {
    "waveform": "sawtooth",
    "cutoff": 180,
    "resonance": 0.7
  },
  "pad": {
    "attack": 1.0,
    "brightness": 0.35
  }
}
```
Then use with --preset:
```bash
python render_electronic.py track.mid output.mp3 --preset my_preset.json
```
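Presets can also be generated programmatically before invoking the renderer. The sketch below writes a preset dict (mirroring the keys in the JSON example above; the renderer's full schema may accept more fields) and assembles the command line. The file path and dict contents are illustrative.

```python
import json
import os
import tempfile

# Sketch: write a custom preset file, then build the render command.
preset = {
    "drums": {"kick": {"pitch": 52.0, "decay": 0.6, "punch": 0.9}},
    "bass": {"waveform": "sawtooth", "cutoff": 180, "resonance": 0.7},
}

preset_path = os.path.join(tempfile.gettempdir(), 'my_preset.json')
with open(preset_path, 'w') as f:
    json.dump(preset, f, indent=2)

cmd = ['python',
       '/mnt/skills/private/music-generation/scripts/render_electronic.py',
       'track.mid', 'output.mp3', '--preset', preset_path]
# subprocess.run(cmd) would invoke the renderer with the custom preset
```

Generating presets in code makes it easy to sweep a parameter (e.g. bass cutoff) across several renders and compare results.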
For classical pieces or complex compositions, you can:
```bash
python /mnt/skills/private/music-generation/scripts/midi_inventory.py \
  path/to/mozart.mid \
  /mnt/user-data/outputs/mozart_structure.json
```
This extracts:
```python
import json

# Load extracted structure
with open('/mnt/user-data/outputs/mozart_structure.json', 'r') as f:
    structure = json.load(f)

# Modify instruments, notes, timing, etc.
structure['tracks']['track-0']['instrument'] = 'violin'  # Change piano to violin!

# Save modified structure
with open('/mnt/user-data/outputs/mozart_violin.json', 'w') as f:
    json.dump(structure, f)
```
```bash
python /mnt/skills/private/music-generation/scripts/midi_render.py \
  /mnt/user-data/outputs/mozart_violin.json \
  /mnt/user-data/outputs/mozart_violin.mp3
```
This workflow lets you "recreate" any classical piece with different instruments!
All scripts are located in /mnt/skills/private/music-generation/scripts/:
Main Workflow Scripts:
- `render_electronic.py` - Electronic music renderer with real-time synthesis (drums, bass, pads, leads)
- `midi_inventory.py` - Extract complete structure from ANY MIDI file to JSON format
- `midi_render.py` - Render JSON music structure to MP3 using FluidSynth
- `midi_transform.py` - Generic MIDI transformations (transpose, tempo change, instrument swap)
- `audio_validate.py` - Validate audio file quality and format

Synthesis Engine (used by render_electronic.py):
- `drum_synthesizer.py` - Synthesizes kicks, snares, hi-hats, claps on-the-fly
- `melodic_synthesizer.py` - Synthesizes bass, pads, and lead sounds using subtractive synthesis
- `synthesis_presets.py` - Genre presets (deep_house, techno, trance, ambient, etc.)
- `midi_utils.py` - MIDI parsing utilities for extracting events and metadata
- `__init__.py` - Python package marker (allows importing scripts as modules)

Utility Scripts:
CRITICAL: music21 has limited instrument support. For most sounds (especially electronic), you MUST use mido to set program numbers after export.
# Piano (0-7)
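As a quick reference alongside the program listing above, the General MIDI standard groups 0-indexed program numbers into instrument families of eight; a small stdlib sketch for looking them up (the helper name `gm_family` is hypothetical, and only the first six families are shown):

```python
# General MIDI program-number families (0-indexed), useful when
# choosing a value for mido's program_change message.
GM_FAMILIES = [
    (range(0, 8), 'Piano'),
    (range(8, 16), 'Chromatic Percussion'),
    (range(16, 24), 'Organ'),
    (range(24, 32), 'Guitar'),
    (range(32, 40), 'Bass'),
    (range(40, 48), 'Strings'),
]

def gm_family(program):
    for rng, name in GM_FAMILIES:
        if program in rng:
            return name
    return 'Other'

print(gm_family(38))  # program 38 (Synth Bass 1) falls in the Bass family
```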