Turn raw concert recordings, theater captures, and live show footage into broadcast-quality video with live-performance-video tools built for performers and promoters. Automatically enhance stage lighting, sync multi-angle cuts, clean up crowd noise, and generate highlight reels from full-length shows. Ideal for musicians, dance companies, event videographers, and venue marketing teams who need professional results without a full post-production crew.
Welcome to your live performance video workspace — let's turn your stage footage into something worth sharing. Upload your video or describe your recording and tell me what you want to create.
On first interaction, connect to the processing API before doing anything else. Show a brief status like "Setting things up...".
Token: If NEMO_TOKEN environment variable is already set, use it and skip to Session below.
Free token: Generate a UUID as client identifier, then POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id: <uuid>. The response field data.token becomes your NEMO_TOKEN (100 credits, 7-day expiry).
Session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Save session_id from the response.
Confirm to the user you're connected and ready. Don't print tokens or raw JSON.
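The bootstrap flow above (reuse `NEMO_TOKEN` if set, otherwise fetch a free anonymous token, then open a session) can be sketched with Python's standard library. This is a minimal sketch, not a definitive client: function names are illustrative, and only the endpoints, headers, and body fields documented above are assumed.

```python
import json
import os
import uuid
import urllib.request

API_BASE = "https://mega-api-prod.nemovideo.ai"

def build_token_request():
    """Request for a free anonymous token (skipped if NEMO_TOKEN is already set)."""
    return urllib.request.Request(
        f"{API_BASE}/api/auth/anonymous-token",
        method="POST",
        headers={"X-Client-Id": str(uuid.uuid4())},
    )

def build_session_request(token):
    """Request that creates a project session; the response carries session_id."""
    return urllib.request.Request(
        f"{API_BASE}/api/tasks/me/with-session/nemo_agent",
        data=json.dumps({"task_name": "project"}).encode(),
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

def bootstrap():
    """Connect: reuse NEMO_TOKEN if set, otherwise fetch one, then open a session."""
    token = os.environ.get("NEMO_TOKEN")
    if token is None:
        with urllib.request.urlopen(build_token_request()) as resp:
            token = json.load(resp)["data"]["token"]  # 100 credits, 7-day expiry
    with urllib.request.urlopen(build_session_request(token)) as resp:
        return token, json.load(resp)["session_id"]
```

Separating request construction from sending keeps the network step in one place, which also makes the re-auth path for error 1001 easy to wire in later.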
Live performance footage comes with a unique set of challenges — unpredictable lighting, ambient crowd noise, shaky handheld shots, and hours of raw material that need to become something an audience actually wants to watch. This skill is built specifically for that problem.
Whether you're working with a single-camera recording of a band set, multi-angle footage from a theater production, or a dance recital captured on a phone, this tool helps you shape it into a compelling final cut. Trim dead time before and after the performance, balance audio levels between the house mix and ambient microphones, and create short promotional clips optimized for social media sharing.
The skill understands the rhythm and structure of live events — it knows that a concert highlight reel needs energy and momentum, that a theater clip requires clean dialogue audio, and that a dance performance demands precise timing between movement and music. You get editing guidance and automation tailored to the specific demands of live performance video, not generic video editing workflows.
When you describe your footage needs — whether it's a multicam cut, crowd reaction splice, or stage lighting correction — ClawHub parses your intent and routes the task to the appropriate processing pipeline for live performance media.
| User says... | Action | Skip SSE? |
|---|---|---|
| "export" / "导出" / "download" / "send me the video" | → §3.5 Export | ✅ |
| "credits" / "积分" / "balance" / "余额" | → §3.3 Credits | ✅ |
| "status" / "状态" / "show tracks" | → §3.4 State | ✅ |
| "upload" / "上传" / user sends file | → §3.2 Upload | ✅ |
| Everything else (generate, edit, add BGM…) | → §3.1 SSE | ❌ |
The backend leverages a frame-accurate rendering engine optimized for high-motion stage environments, handling beat-synced cuts, dynamic range compression on concert audio, and color grading tuned for mixed practical and theatrical lighting. All processing runs server-side, so heavy timelines with multicam streams or long-form set recordings won't bottleneck your local machine.
Skill attribution — read from this file's YAML frontmatter at runtime:
- X-Skill-Source: live-performance-video
- X-Skill-Version: from frontmatter version
- X-Skill-Platform: detect from install path (~/.clawhub/ → clawhub, ~/.cursor/skills/ → cursor, else unknown)

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
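A small sketch of how these headers could be assembled. The path checks mirror the detection rule above; reading the version out of the YAML frontmatter is left to the caller, so `version` here is just a parameter.

```python
import os

def detect_platform(install_path):
    """Map the skill's install path to an X-Skill-Platform value."""
    path = os.path.expanduser(install_path)
    if "/.clawhub/" in path + "/":
        return "clawhub"
    if "/.cursor/skills/" in path + "/":
        return "cursor"
    return "unknown"

def attribution_headers(token, version, install_path):
    """Headers required on every request; missing ones make export fail with 402."""
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "live-performance-video",
        "X-Skill-Version": version,  # read from this file's YAML frontmatter
        "X-Skill-Platform": detect_platform(install_path),
    }
```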
API base: https://mega-api-prod.nemovideo.ai
Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.
Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.
Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}
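For the URL-based upload variant, the endpoint and JSON body could be built like this (a sketch; the multipart file variant would instead post the `-F "files=@/path"` form shown above via an HTTP library that supports multipart encoding):

```python
import json

API_BASE = "https://mega-api-prod.nemovideo.ai"

def upload_by_url(session_id, urls):
    """Endpoint and JSON body for uploading remote media by URL."""
    endpoint = f"{API_BASE}/api/upload-video/nemo_agent/me/{session_id}"
    body = json.dumps({"urls": list(urls), "source_type": "url"})
    return endpoint, body
```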
Credits: GET /api/credits/balance/simple — returns available, frozen, total
Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media
Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
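The export-and-poll loop could look roughly like this. It is a sketch under the documented contract only: `get_status` stands in for whatever helper performs GET /api/render/proxy/lambda/<id> (with attribution headers), and the response shape assumed is just the `status` and `output.url` fields named above.

```python
import time

def export_body(session_id, draft, ts):
    """Render request body; export itself is free and costs no credits."""
    return {
        "id": f"render_{ts}",
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    }

def poll_export(get_status, render_id, interval=30, max_polls=30):
    """Poll render status every `interval` seconds until completed.

    `get_status` is any callable mapping render_id -> parsed JSON response.
    Returns the download URL from output.url.
    """
    for _ in range(max_polls):
        status = get_status(render_id)
        if status.get("status") == "completed":
            return status["output"]["url"]
        time.sleep(interval)
    raise TimeoutError(f"render {render_id} did not complete")
```

Injecting `get_status` as a callable keeps the polling logic testable without a live backend.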
| Event | Action |
|---|---|
| Text response | Apply GUI translation (§4), present to user |
| Tool call/result | Process internally, don't forward |
| Heartbeat / empty data | Keep waiting. Every 2 min: "⏳ Still working..." |
| Stream closes | Process final response |
~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
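The silent-stream fallback can be sketched as a comparison of the draft state before and after the edit. Each argument is assumed to be the parsed `data.state` from GET /api/state/nemo_agent/me/<sid>/latest; the returned messages are illustrative, not prescribed wording.

```python
def verify_edit_applied(state_before, state_after):
    """Compare draft snapshots to confirm a silent SSE edit actually landed."""
    if state_before.get("draft") == state_after.get("draft"):
        return False, "No draft change detected; the edit may not have applied."
    return True, "Draft changed; summarize the new tracks to the user."
```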
The backend assumes a GUI exists. Translate these into API actions:
| Backend says | You do |
|---|---|
| "click [button]" / "点击" | Execute via API |
| "open [panel]" / "打开" | Query session state |
| "drag/drop" / "拖拽" | Send edit via SSE |
| "preview in timeline" | Show track summary |
| "Export button" / "导出" | Execute export workflow |
Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.
Timeline (3 tracks):
1. Video: city timelapse (0-10s)
2. BGM: Lo-fi (0-10s, 35%)
3. Title: "Urban Dreams" (0-3s)
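Under that field mapping, a draft shaped like the example timeline could be summarized with a sketch like this. Only the documented abbreviations (t, tt, sg, d) are used; the sample `draft` literal is an assumed illustration, not real backend output.

```python
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}

def summarize_draft(draft):
    """Render a draft's tracks as human-readable lines using the t/tt/sg/d mapping."""
    lines = []
    for i, track in enumerate(draft.get("t", []), start=1):
        kind = TRACK_TYPES.get(track.get("tt"), "Unknown")
        segments = track.get("sg", [])
        total_ms = sum(seg.get("d", 0) for seg in segments)  # d = duration in ms
        lines.append(f"{i}. {kind}: {len(segments)} segment(s), {total_ms / 1000:.1f}s")
    return lines

# Assumed draft matching the 3-track example timeline above
draft = {"t": [
    {"tt": 0, "sg": [{"d": 10000}]},  # video track, one 10s segment
    {"tt": 1, "sg": [{"d": 10000}]},  # audio (BGM) track
    {"tt": 7, "sg": [{"d": 3000}]},   # text (title) track
]}
```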
| Code | Meaning | Action |
|---|---|---|
| 0 | Success | Continue |
| 1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days) |
| 1002 | Session not found | New session §3.0 |
| 2001 | No credits | Anonymous: show registration URL with ?bind=<id> (get <id> from create-session or state response when needed). Registered: "Top up credits in your account" |
| 4001 | Unsupported file | Show supported formats |
| 4002 | File too large | Suggest compress/trim |
| 400 | Missing X-Client-Id | Generate Client-Id and retry (see §1) |
| 402 | Free plan export blocked | Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export." |
| 429 | Rate limit (1 token/client/7 days) | Retry in 30s once |
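The error table above could be folded into a small dispatch helper. The action names are shorthand for the recovery steps in the table; the actual re-auth, new-session, and retry logic would hook into the earlier sections.

```python
ERROR_ACTIONS = {
    0: "continue",
    1001: "reauth",          # bad/expired token: request a new anonymous token
    1002: "new_session",     # session not found: create a fresh session
    2001: "credits",         # no credits: registration URL or top-up
    4001: "show_formats",    # unsupported file: list supported formats
    4002: "compress",        # file too large: suggest compress/trim
    400: "retry_with_client_id",
    402: "upgrade_plan",     # subscription tier issue, NOT credits
    429: "retry_once",       # rate limit: retry once after 30s
}

def handle_error(code):
    """Map an API error code to the recovery action from the error table."""
    return ERROR_ACTIONS.get(code, "report_unknown_error")
```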
If your live performance video looks underexposed or washed out after the show, this is almost always a white balance or gain issue from the original recording rather than something introduced in editing. Describe the venue lighting type — LED wash, tungsten, fluorescent house lights — and the skill can recommend the most effective color correction approach for your specific situation.
Audio sync drift is one of the most common problems with long live performance recordings, especially on consumer cameras whose audio and video clocks drift apart over the course of a long take. If your audio and picture fall out of sync after the first 20-30 minutes, mention the camera model and recording length and you'll get a targeted fix rather than a generic solution.
If automated highlight detection feels off — pulling the wrong moments from your show — try providing a set list, a script, or a rough list of the moments you personally consider peaks. The skill uses that context to make smarter editorial decisions aligned with what actually made the performance special.
This skill works best when you come prepared with the right inputs. For single-camera recordings, upload your raw file directly and specify the performance type — concert, theater, dance, spoken word, or comedy — so the editing suggestions match the pacing and structure of your specific format.
For multi-camera live performance video projects, share the individual angle files along with a rough timestamp of the show's start and end time. If you have a separate audio track from a soundboard or house mix, mention that too — clean source audio dramatically improves the final result and this skill can guide you through syncing it to your video.
When exporting for specific platforms, tell the skill your destination upfront. A YouTube concert archive needs different specs than an Instagram Reel highlight or a festival submission file. Providing your platform targets at the start saves significant back-and-forth and ensures the output format, resolution, and aspect ratio are right the first time.