Turn raw video clips into crisp, looping GIFs ready for sharing, embedding, or showcasing with the ffmpeg-gif-maker skill. Whether you're clipping a highlight reel, creating a reaction GIF, or animating a product demo, this skill handles frame rate tuning, palette optimization, resolution scaling, and loop control — all through simple conversational commands. Built for content creators, developers, and designers who want GIF output without wrestling with command-line flags.
Welcome to the FFmpeg GIF Maker! I can convert any video clip into a clean, optimized GIF with full control over timing, size, and quality. Tell me what clip you want to turn into a GIF and let's get started!
Try saying: "Turn the clip from 0:07 to 0:10 into a 480px GIF", "Make this loop seamlessly", or "Export my GIF".
Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".
If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:
POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with an X-Client-Id header; the response contains a token with 100 free credits, valid for 7 days. Use it as NEMO_TOKEN. Then create a session by POSTing to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer authorization and body {"task_name":"project","language":"en"}. The session_id in the response is needed for all following requests.
Tell the user you're ready. Keep the technical details out of the chat.
Creating a high-quality GIF from video footage has always meant choosing between bloated file sizes and washed-out colors. The ffmpeg-gif-maker skill changes that by giving you precise control over every aspect of the conversion process — without needing to memorize a single FFmpeg flag.
You can clip any segment of a video, dial in the frame rate for smooth or stylized motion, scale the output to fit a specific platform, and let the skill auto-generate an optimized color palette so your GIF looks vibrant instead of muddy. Whether you're pulling a two-second reaction clip from a 10-minute recording or looping a product animation for a landing page, the process is the same: describe what you want, and get a ready-to-use GIF back.
This skill is especially useful for social media managers, developers documenting UI interactions, game streamers capturing highlight moments, and anyone who's ever opened a GIF in a browser and thought 'I could make something better than this.' Now you can — fast.
When you submit a video clip with your target parameters — frame rate, crop region, palette optimization — the skill parses your intent and routes the job to the appropriate FFmpeg processing pipeline.
| User says... | Action | Skip SSE? |
|---|---|---|
| "export" / "导出" / "download" / "send me the video" | → §3.5 Export | ✅ |
| "credits" / "积分" / "balance" / "余额" | → §3.3 Credits | ✅ |
| "status" / "状态" / "show tracks" | → §3.4 State | ✅ |
| "upload" / "上传" / user sends file | → §3.2 Upload | ✅ |
| Everything else (generate, edit, add BGM…) | → §3.1 SSE | ❌ |
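The routing table above can be expressed as a small dispatcher. This is a rough sketch: the trigger phrases mirror the table, but real intent matching would be fuzzier than substring checks:

```python
# Sketch of the routing table: match trigger phrases to a handler
# section, defaulting to the SSE path. Returns (section, skip_sse).
ROUTES = [
    ({"export", "导出", "download", "send me the video"}, "§3.5 Export"),
    ({"credits", "积分", "balance", "余额"}, "§3.3 Credits"),
    ({"status", "状态", "show tracks"}, "§3.4 State"),
    ({"upload", "上传"}, "§3.2 Upload"),
]

def route(message: str) -> tuple[str, bool]:
    """Map a user message to a target section; skip_sse is True for direct API paths."""
    text = message.lower()
    for triggers, section in ROUTES:
        if any(t in text for t in triggers):
            return section, True   # direct API path, SSE skipped
    return "§3.1 SSE", False       # generate, edit, add BGM, etc.
```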
All transcoding runs on a cloud-hosted FFmpeg backend that handles palette generation, dithering modes, and frame extraction server-side, so no local FFmpeg install is required. The API accepts video URLs or uploads and returns an optimized GIF with configurable fps, scale, and loop settings applied.
Skill attribution — read from this file's YAML frontmatter at runtime:
- X-Skill-Source: ffmpeg-gif-maker
- X-Skill-Version: from frontmatter `version`
- X-Skill-Platform: detect from install path (~/.clawhub/ → clawhub, ~/.cursor/skills/ → cursor, else unknown)

All requests must include Authorization: Bearer <NEMO_TOKEN> plus X-Skill-Source, X-Skill-Version, and X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
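A minimal sketch of those attribution rules, assuming the version string is read from this file's frontmatter at runtime:

```python
# Sketch of the attribution headers described above. The platform probe
# keys off the install path exactly as documented; `version` would be
# read from the skill's YAML frontmatter.
def detect_platform(install_path: str) -> str:
    """Map the skill's install path to a platform name, else 'unknown'."""
    if "/.clawhub/" in install_path:
        return "clawhub"
    if "/.cursor/skills/" in install_path:
        return "cursor"
    return "unknown"

def attribution_headers(token: str, version: str, install_path: str) -> dict:
    """All API requests need these four headers; export fails with 402 without them."""
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "ffmpeg-gif-maker",
        "X-Skill-Version": version,
        "X-Skill-Platform": detect_platform(install_path),
    }
```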
API base: https://mega-api-prod.nemovideo.ai
Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.
Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.
Upload: POST /api/upload-video/nemo_agent/me/<sid> — as a file: multipart form (-F "files=@/path"); or by URL: JSON body {"urls":["<url>"],"source_type":"url"}
Credits: GET /api/credits/balance/simple — returns available, frozen, total
Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media
Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
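The submit-then-poll export flow can be sketched with the transport injected, so the loop logic stands on its own. The payload shape follows the spec above; the `post`/`get` callables are assumptions standing in for a real HTTP client:

```python
# Sketch of the export flow: submit the render job, then poll every 30s
# until status == "completed". HTTP is injected as post/get callables.
import time

def export_draft(post, get, session_id: str, draft: dict,
                 poll_interval: float = 30.0, max_polls: int = 60) -> str:
    """Submit a render job and return the download URL once it completes."""
    render_id = f"render_{int(time.time())}"
    post("/api/render/proxy/lambda", {
        "id": render_id,
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    })
    for _ in range(max_polls):
        job = get(f"/api/render/proxy/lambda/{render_id}")
        if job.get("status") == "completed":
            return job["output"]["url"]  # download URL, per the spec above
        time.sleep(poll_interval)
    raise TimeoutError("render did not complete")
```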
Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
| Event | Action |
|---|---|
| Text response | Apply GUI translation (§4), present to user |
| Tool call/result | Process internally, don't forward |
| Heartbeat / empty data | Keep waiting. Every 2 min: "⏳ Still working..." |
| Stream closes | Process final response |
~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
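That empty-stream fallback can be sketched as a small consumer. The event shape (`{"type", "data"}` dicts) and the state-summary wording are assumptions for illustration:

```python
# Sketch of the empty-stream fallback: collect text events from the SSE
# stream; if none arrive before it closes, poll session state instead.
def consume_stream(events, fetch_state):
    """Return user-facing text, falling back to a state poll if the stream is silent."""
    texts = []
    for ev in events:
        if ev.get("type") == "text" and ev.get("data"):
            texts.append(ev["data"])
        # tool calls/results and heartbeats are handled internally, not forwarded
    if texts:
        return "\n".join(texts)
    state = fetch_state()  # GET the latest session state, per §3.4
    return f"Edit applied. Current draft has {len(state.get('tracks', []))} track(s)."
```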
The backend assumes a GUI exists. Translate these into API actions:
| Backend says | You do |
|---|---|
| "click [button]" / "点击" | Execute via API |
| "open [panel]" / "打开" | Query session state |
| "drag/drop" / "拖拽" | Send edit via SSE |
| "preview in timeline" | Show track summary |
| "Export button" / "导出" | Execute export workflow |
Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.
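Those abbreviated keys can be expanded into a readable track summary. A minimal sketch, using only the field mapping listed above (unlisted track types fall back to "unknown"):

```python
# Sketch decoding the draft's short keys (t=tracks, tt=track type,
# sg=segments, d=duration in ms) into one summary line per track.
TRACK_TYPES = {0: "video", 1: "audio", 7: "text"}

def summarize_draft(draft: dict) -> list[str]:
    """Produce one human-readable line per track in the draft."""
    lines = []
    for track in draft.get("t", []):
        kind = TRACK_TYPES.get(track.get("tt"), "unknown")
        segments = track.get("sg", [])
        total_ms = sum(seg.get("d", 0) for seg in segments)
        lines.append(f"{kind}: {len(segments)} segment(s), {total_ms / 1000:.1f}s")
    return lines
```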
Timeline (3 tracks):
1. Video: city timelapse (0–10s)
2. BGM: Lo-fi (0–10s, 35%)
3. Title: "Urban Dreams" (0–3s)
| Code | Meaning | Action |
|---|---|---|
| 0 | Success | Continue |
| 1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days) |
| 1002 | Session not found | New session §3.0 |
| 2001 | No credits | Anonymous: show registration URL with ?bind=<id> (get <id> from create-session or state response when needed). Registered: "Top up credits in your account" |
| 4001 | Unsupported file | Show supported formats |
| 4002 | File too large | Suggest compress/trim |
| 400 | Missing X-Client-Id | Generate Client-Id and retry (see §1) |
| 402 | Free plan export blocked | Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export." |
| 429 | Rate limit (1 token/client/7 days) | Retry in 30s once |
Getting the best results from ffmpeg-gif-maker comes down to a few key levers. First, always specify a time range — even if you want the whole clip. Shorter GIFs compress better and loop more naturally, so trimming to the essential 2–6 seconds usually produces the sharpest output.
For color quality, ask for palette optimization explicitly. FFmpeg's two-pass palette generation makes a dramatic difference on footage with gradients or skin tones — the default palette often looks flat by comparison.
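Although the skill runs FFmpeg server-side, the two-pass palette recipe referred to above is worth seeing once. A reference sketch, assembled as argv lists with placeholder filenames (not the skill's actual invocation):

```python
# The two-pass palette workflow: pass 1 analyzes the clip and writes an
# optimized 256-color palette; pass 2 encodes the GIF using it.
# Input/output names are placeholders.
def palette_pass(src: str, fps: int = 12, width: int = 480) -> list[str]:
    """Pass 1: generate palette.png with FFmpeg's palettegen filter."""
    vf = f"fps={fps},scale={width}:-1:flags=lanczos,palettegen"
    return ["ffmpeg", "-i", src, "-vf", vf, "palette.png"]

def gif_pass(src: str, fps: int = 12, width: int = 480) -> list[str]:
    """Pass 2: encode the GIF with the paletteuse filter."""
    fc = (f"fps={fps},scale={width}:-1:flags=lanczos[x];"
          "[x][1:v]paletteuse")
    return ["ffmpeg", "-i", src, "-i", "palette.png",
            "-filter_complex", fc, "out.gif"]
```

The same fps and scale filters must appear in both passes so the palette is built from exactly the frames that end up in the GIF.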
Frame rate is your biggest file size dial. Dropping from 24fps to 12fps roughly halves the file size with minimal visible degradation for most content. For UI demos or text animations, 10fps is often enough. For fast action or smooth loops, stick to 15–20fps.
Finally, if your GIF will be embedded in a webpage or Slack, target a max width of 480px. Wider GIFs don't look better on most screens and cost significantly more in file size.
Social Media Clips: The most common use case is pulling a reaction moment or highlight from a longer recording. Provide the timestamp range, request a 480px width, and ask for a 12fps output — this hits the sweet spot for Twitter, Discord, and Reddit embeds without exceeding upload limits.
Product & UI Demos: Developers frequently use ffmpeg-gif-maker to document UI interactions for README files or bug reports. For these, a lossless-looking output matters more than file size. Request high palette quality, 20fps, and a width matching your documentation layout (usually 600–800px).
Looping Animations: For seamless loops — like a loading spinner or animated banner — specify that the clip should loop cleanly. You can also ask for a bounce loop (forward then reverse) to make any clip look intentionally animated without extra editing.
Batch Conversions: If you have multiple clips that need consistent GIF formatting (same dimensions, same frame rate, same palette settings), describe the template once and apply it across all files in a single session.
Getting your first GIF out of ffmpeg-gif-maker takes less than a minute. Here's the fastest path:
Step 1 — Provide your source. Share a video file or link, and tell the skill which segment you want (e.g., '3 seconds starting at 0:07').
Step 2 — Set your output preferences. Mention the target width (e.g., 480px or 640px), desired frame rate (10–24fps), and whether you want palette optimization. If you skip these, sensible defaults are applied automatically.
Step 3 — Request your GIF. The skill will generate the FFmpeg command, process your clip, and return the GIF along with its file size and dimensions so you can decide if you want to adjust.
Step 4 — Iterate if needed. Not happy with the size or quality? Just say 'reduce the frame rate to 10fps' or 'scale it down to 360px' and the skill will re-run with your adjustments. No need to start over from scratch.