Generate music using ElevenLabs Music API. Use when creating instrumental tracks, songs with lyrics, background music, jingles, or any AI-generated music composition. Supports prompt-based generation, composition plans for granular control, and detailed output with metadata.
Generate music from text prompts - supports instrumental tracks, songs with lyrics, and fine-grained control via composition plans.
Setup: See Installation Guide. For JavaScript, use `@elevenlabs/*` packages only.
```python
from elevenlabs import ElevenLabs

client = ElevenLabs()

audio = client.music.compose(
    prompt="A chill lo-fi hip hop beat with jazzy piano chords",
    music_length_ms=30000,
)

with open("output.mp3", "wb") as f:
    for chunk in audio:
        f.write(chunk)
```
```javascript
import { ElevenLabsClient } from "@elevenlabs/elevenlabs-js";
import { createWriteStream } from "fs";

const client = new ElevenLabsClient();

const audio = await client.music.compose({
  prompt: "A chill lo-fi hip hop beat with jazzy piano chords",
  musicLengthMs: 30000,
});

audio.pipe(createWriteStream("output.mp3"));
```
```bash
curl -X POST "https://api.elevenlabs.io/v1/music" \
  -H "xi-api-key: $ELEVENLABS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "A chill lo-fi beat", "music_length_ms": 30000}' \
  --output output.mp3
```
| Method | Description |
|---|---|
| `music.compose` | Generate audio from a prompt or composition plan |
| `music.composition_plan.create` | Generate a structured plan for fine-grained control |
| `music.compose_detailed` | Generate audio + composition plan + metadata |
| `music.video_to_music` | Generate background music from one or more uploaded video files |
| `music.upload` | Upload an audio file for later inpainting workflows and optionally extract its composition plan |
music.video_to_music | Generate background music from one or more uploaded video files |
music.upload | Upload an audio file for later inpainting workflows and optionally extract its composition plan |
See API Reference for full parameter details.
music.upload is available to enterprise clients with access to the inpainting feature.
Generate background music that follows one or more uploaded video clips. The API combines videos
in order, accepts an optional natural-language description, and lets you steer style with up to 10
tags such as upbeat or cinematic.
```python
from elevenlabs import ElevenLabs

client = ElevenLabs()

audio = client.music.video_to_music(
    videos=["trailer.mp4"],
    description="Build suspense, then resolve with a warm cinematic finish.",
    tags=["cinematic", "suspenseful", "uplifting"],
)

with open("video-score.mp3", "wb") as f:
    for chunk in audio:
        f.write(chunk)
```
```bash
curl -X POST "https://api.elevenlabs.io/v1/music/video-to-music" \
  -H "xi-api-key: $ELEVENLABS_API_KEY" \
  -F "[email protected]" \
  -F "description=Build suspense, then resolve with a warm cinematic finish." \
  -F "tags=cinematic" \
  -F "tags=suspenseful" \
  -F "tags=uplifting" \
  --output video-score.mp3
```
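Since the API caps `tags` at 10 (as noted above), it can help to normalize the list client-side before sending a request. A minimal sketch of such a pre-flight check; the limit constant mirrors the text above and the server remains the source of truth:

```python
MAX_TAGS = 10  # per the tag limit described above (assumption: hard cap)

def validate_tags(tags: list[str]) -> list[str]:
    """Trim whitespace, drop empties and duplicates, enforce the tag limit."""
    cleaned: list[str] = []
    for tag in tags:
        t = tag.strip().lower()
        if t and t not in cleaned:
            cleaned.append(t)
    if len(cleaned) > MAX_TAGS:
        raise ValueError(f"at most {MAX_TAGS} tags allowed, got {len(cleaned)}")
    return cleaned
```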
Constraints from the current API schema:

- `description` for high-level musical direction and `tags` for concise style cues

For granular control, generate a composition plan first, modify it, then compose:
```python
from elevenlabs import ElevenLabs

client = ElevenLabs()

plan = client.music.composition_plan.create(
    prompt="An epic orchestral piece building to a climax",
    music_length_ms=60000,
)

# Inspect/modify styles and sections
print(plan.positive_global_styles)  # e.g. ["orchestral", "epic", "cinematic"]

audio = client.music.compose(
    composition_plan=plan,
    music_length_ms=60000,
)
```
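The "modify" step above can be done on a plain-dict view of the plan (e.g. after serializing it with the SDK's model-dump method). A hedged sketch; the field names below are assumptions for illustration, not a documented schema:

```python
def add_negative_style(plan: dict, style: str) -> dict:
    """Return a copy of the plan with an extra negative global style.

    Assumes the plan dict carries a "negative_global_styles" list;
    verify the key names against a real plan before relying on this.
    """
    updated = dict(plan)
    negatives = list(updated.get("negative_global_styles", []))
    if style not in negatives:
        negatives.append(style)
    updated["negative_global_styles"] = negatives
    return updated
```

Working on a copy keeps the original plan intact, so you can compose several variations from one generated plan.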
- `bad_prompt` errors include a `prompt_suggestion` with alternative phrasing
- `bad_composition_plan` errors include a `composition_plan_suggestion`
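Those suggestions make retries straightforward: surface them and feed the suggestion back into the next request. A minimal sketch of extracting them from an error payload; the payload layout (a `status` field with nested suggestion keys) is an assumption, so inspect a real error response to confirm.

```python
def extract_suggestion(error_body: dict):
    """Pull a retry suggestion out of an error payload, if present."""
    status = error_body.get("status")
    if status == "bad_prompt":
        return error_body.get("prompt_suggestion")
    if status == "bad_composition_plan":
        # The suggested plan is structured; it can be passed back to compose.
        return error_body.get("composition_plan_suggestion")
    return None
```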