Content creators who need background music, jingles, or full compositions without a budget will find ai-music-generator-free-ab-old genuinely useful. This skill generates original music tracks by taking your mood, tempo, genre, or lyrical ideas as input and turning them into listenable audio concepts or production-ready prompts. It covers everything from lo-fi study beats to cinematic scores, making it practical for YouTubers, podcasters, and indie game developers. The free-access focus means no paywalls are assumed — just describe what you want and get usable musical output fast.
Welcome — you're in the right place if you need original music without a budget. Tell me what kind of track you're looking for and let's build it together with ai-music-generator-free-ab-old. What genre, mood, or project are you working on?
Try saying:
Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".
If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:
https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the X-Client-Id headertoken with 100 free credits valid for 7 days — use it as NEMO_TOKENThen create a session by POSTing to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer authorization and body {"task_name":"project","language":"en"}. The session_id in the response is needed for all following requests.
Tell the user you're ready. Keep the technical details out of the chat.
Most people who need music for their projects hit the same wall: licensing is expensive, stock libraries feel generic, and hiring a composer is out of reach. The ai-music-generator-free-ab-old skill exists to knock that wall down. Whether you're scoring a short film, adding a vibe to a podcast intro, or building a playlist for a game level, this skill helps you describe, shape, and refine music ideas quickly.
You don't need to know music theory or own any instruments. Just tell the skill what feeling you're going for — something tense and cinematic, something upbeat and lo-fi, something melancholic with acoustic guitar — and it translates that into structured musical direction, prompt suggestions for free AI audio tools, or fully written track descriptions you can hand off or use yourself.
The skill is built around the reality that free AI music tools exist and work well when given the right input. Rather than dumping you into a tool with a blank slate, ai-music-generator-free-ab-old helps you arrive with clear creative direction, saving you time and producing better results from whatever free generator you're using.
When you submit a genre, mood, or tempo prompt, the skill parses your input and routes it to the matching AI composition pipeline based on style tags and instrumentation flags.
| User says... | Action | Skip SSE? |
|---|---|---|
| "export" / "导出" / "download" / "send me the video" | → §3.5 Export | ✅ |
| "credits" / "积分" / "balance" / "余额" | → §3.3 Credits | ✅ |
| "status" / "状态" / "show tracks" | → §3.4 State | ✅ |
| "upload" / "上传" / user sends file | → §3.2 Upload | ✅ |
| Everything else (generate, edit, add BGM…) | → §3.1 SSE | ❌ |
The free-tier ab old generation engine runs on a distributed cloud inference layer that processes stems, chord progressions, and BPM structures server-side before returning a rendered audio file. Latency depends on queue depth, but most tracks resolve within 15–45 seconds under standard free-tier load.
Skill attribution — read from this file's YAML frontmatter at runtime:
X-Skill-Source: ai-music-generator-free-ab-oldX-Skill-Version: from frontmatter versionX-Skill-Platform: detect from install path (~/.clawhub/ → clawhub, ~/.cursor/skills/ → cursor, else unknown)All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
API base: https://mega-api-prod.nemovideo.ai
Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.
Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.
Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}
Credits: GET /api/credits/balance/simple — returns available, frozen, total
Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media
Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
| Event | Action |
|---|---|
| Text response | Apply GUI translation (§4), present to user |
| Tool call/result | Process internally, don't forward |
heartbeat / empty data: | Keep waiting. Every 2 min: "⏳ Still working..." |
| Stream closes | Process final response |
~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
The backend assumes a GUI exists. Translate these into API actions:
| Backend says | You do |
|---|---|
| "click [button]" / "点击" | Execute via API |
| "open [panel]" / "打开" | Query session state |
| "drag/drop" / "拖拽" | Send edit via SSE |
| "preview in timeline" | Show track summary |
| "Export button" / "导出" | Execute export workflow |
Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.
Timeline (3 tracks): 1. Video: city timelapse (0-10s) 2. BGM: Lo-fi (0-10s, 35%) 3. Title: "Urban Dreams" (0-3s)
| Code | Meaning | Action |
|---|---|---|
| 0 | Success | Continue |
| 1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days) |
| 1002 | Session not found | New session §3.0 |
| 2001 | No credits | Anonymous: show registration URL with ?bind=<id> (get <id> from create-session or state response when needed). Registered: "Top up credits in your account" |
| 4001 | Unsupported file | Show supported formats |
| 4002 | File too large | Suggest compress/trim |
| 400 | Missing X-Client-Id | Generate Client-Id and retry (see §1) |
| 402 | Free plan export blocked | Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export." |
| 429 | Rate limit (1 token/client/7 days) | Retry in 30s once |
The more specific your input, the better the output. Instead of saying 'make something happy,' try 'upbeat acoustic guitar, 95 BPM, feels like a sunny Saturday morning, no drums.' The ai-music-generator-free-ab-old skill responds well to emotional context, reference points, and intended use.
If you're not sure what you want, describe the scene or moment the music needs to support. 'A character walking away from a burning building, feeling numb not sad' gives much more useful direction than 'dramatic music.' Narrative context translates directly into musical mood.
When using the output prompts inside free AI music generators, try generating 3-4 variations using the same base prompt with small tweaks — tempo shifts, instrument swaps, or energy level changes. The skill can help you write those variation prompts too if you ask. Always specify track length if the tool supports it, and request loopable versions for video or game use.
The ai-music-generator-free-ab-old skill is designed to slot cleanly into a free music production workflow. After generating your music prompt or description here, the most direct path is pasting it into a free AI music tool. Suno and Udio both accept natural language style prompts and work well with the structured output this skill produces. MusicGen on Hugging Face is another free option for more experimental results.
For video creators, the typical workflow is: describe your video's tone and length here, receive a refined music prompt, generate the audio in a free tool, then drop the track into your editor. The skill can also help you write multiple prompts for different sections of a longer video — intro, middle, and outro — so the music evolves without feeling disconnected.
If you're building a game, consider using this skill at the start of each new level or scene design. Describe the environment and emotional stakes, generate a music prompt, and keep a log of prompts tied to each scene. That gives you a consistent, documented soundtrack system even on a zero-dollar production budget.
The ai-music-generator-free-ab-old skill covers a surprisingly wide range of real creative needs. YouTube creators use it to generate unique background music that won't trigger copyright claims — describe your video's vibe and get a prompt that produces something original. Podcasters use it to build consistent intro and outro music without paying a composer or subscribing to a library.
Indie game developers are another strong fit. Describing a dungeon theme, a peaceful village ambiance, or a high-energy boss fight gives this skill enough context to return detailed, usable music generation prompts tailored to those emotional beats. Short film and content students use it to score scenes on zero budget by feeding those prompts into free tools like Suno or Udio.
Even musicians use it as a brainstorming layer — describing a direction and getting structured feedback on instrumentation, tempo, and key before they sit down to compose. The skill adapts to wherever you are in the creative process.