Use when translating captions/subtitles to another language. Supports bilingual output and context-aware translation. Defaults to Claude native translation; the Gemini API is optional.
Default: Claude native translation (no API key needed)
Use Gemini API only when user explicitly requests it.
Claude native output uses the `_Claude_{lang}` suffix. Use the CLI when the user requests Gemini:
```
omnicaptions translate input.srt -l zh --bilingual
```

Output: input_Gemini_zh.srt
Related skills: /omnicaptions:transcribe, /omnicaptions:convert. Install with:

```
pip install omni-captions-skills --extra-index-url https://lattifai.github.io/pypi/simple/
```
Priority: GEMINI_API_KEY env → .env file → ~/.config/omnicaptions/config.json
If not set, ask the user: "Please enter your Gemini API key (get one from https://aistudio.google.com/apikey)".
Then run with `-k <key>`; the key is saved to the config file automatically.
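The resolution order above can be sketched as a small helper. This is an illustrative sketch, not the tool's actual implementation; in particular, the `.env` parsing and the `gemini_api_key` field name in `config.json` are assumptions.

```python
import json
import os
from pathlib import Path

def resolve_gemini_key():
    """Resolve the Gemini API key in the documented priority order:
    environment variable -> .env file -> user config file.
    Returns None if nothing is found (caller should prompt the user)."""
    # 1. Environment variable takes precedence.
    key = os.environ.get("GEMINI_API_KEY")
    if key:
        return key

    # 2. A .env file in the working directory (naive KEY=value parsing).
    env_file = Path(".env")
    if env_file.exists():
        for line in env_file.read_text().splitlines():
            if line.startswith("GEMINI_API_KEY="):
                return line.split("=", 1)[1].strip()

    # 3. The omnicaptions config file (field name is an assumption).
    config_path = Path.home() / ".config" / "omnicaptions" / "config.json"
    if config_path.exists():
        return json.loads(config_path.read_text()).get("gemini_api_key")

    return None
```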
LLM-based translation is superior to traditional machine translation because it understands context across multiple lines:
| Approach | Context handling | Result |
|---|---|---|
| Line-by-line | No context | Robotic, disconnected translations |
| Batch + Context | Sees surrounding lines | Natural, coherent dialogue |
┌─────────────────────────────────────────┐
│ Batch size: 30 lines │
│ Context: 5 lines before/after │
├─────────────────────────────────────────┤
│ [5 previous lines] → context │
│ [30 current lines] → translate │
│ [5 next lines] → preview │
└─────────────────────────────────────────┘
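The windowing scheme in the diagram can be sketched as follows. This is a minimal illustration of the batch + context split (30 translated lines with 5 lines of context on each side), not the tool's actual code; the function name and dict layout are made up for the example.

```python
def make_batches(lines, batch_size=30, context=5):
    """Split subtitle lines into translation batches, attaching
    surrounding lines as untranslated context, as in the diagram."""
    batches = []
    for start in range(0, len(lines), batch_size):
        end = min(start + batch_size, len(lines))
        batches.append({
            "before": lines[max(0, start - context):start],  # context only
            "current": lines[start:end],                     # translate these
            "after": lines[end:end + context],               # preview only
        })
    return batches

# Example: 70 lines -> batches of 30, 30, and 10 lines.
batches = make_batches([f"line {i}" for i in range(70)])
```

Each batch is sent to the model with its `before`/`after` lines marked as context, so the translation of `current` stays coherent across batch boundaries.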
Benefits:
```
# Original + Translation (for language learning)
omnicaptions translate input.srt -l zh --bilingual
```
Output example:
1