Install, configure, and start FireRed-OpenStoryline from source on a local machine. Use when a user asks to set up OpenStoryline, troubleshoot installation, download required resources, fill config.toml API keys, or launch the MCP and web services, as well as Chinese requests like “安装 OpenStoryline”, “配置 OpenStoryline”, “启动 OpenStoryline”, “把 OpenStoryline 跑起来”, “修复 OpenStoryline 安装问题”, or “排查 OpenStoryline 启动失败”.
Use this skill when the task is to install or repair a local source checkout of FireRed-OpenStoryline.
Keep the workflow deterministic:
- venv install unless the user explicitly asks for Docker or conda
- .storyline models and resource/ assets
- config.toml model settings

Check these first:
- git
- Python >= 3.11
- ffmpeg
- wget
- unzip

Optional:
- docker
- conda

If ffmpeg, wget, or unzip are missing, install them through the OS package manager before continuing.
Examples:

macOS with Homebrew:

```bash
brew install ffmpeg wget unzip
```

Debian/Ubuntu:

```bash
sudo apt-get update
sudo apt-get install -y ffmpeg wget unzip
```
If no supported package manager or permission is available, stop and report the missing system dependency clearly.
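The dependency check above can be scripted. A minimal sketch (the check_tools helper is illustrative, not something the repo ships):

```bash
# check_tools: report every missing tool at once; succeed only if all are present.
check_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  [ -z "$missing" ] && return 0
  echo "missing system dependencies:$missing" >&2
  return 1
}

# Run before continuing with the install:
check_tools git ffmpeg wget unzip || echo "install the tools above first"
```

Reporting all missing tools in one pass avoids the install/retry loop of failing on the first missing binary.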
First prefer any interpreter that already exists and passes version checks:
- python3 >= 3.11
- a pyenv-managed Python >= 3.11
- a conda-managed Python >= 3.11, but only if basic stdlib modules work

Validate candidate interpreters before using them:

```bash
/path/to/python -c "import ssl, sqlite3, venv; print('stdlib_ok')"
```
If no supported interpreter already exists, prefer the conda fallback:
```bash
conda create -y -n openstoryline-py311 python=3.11
conda run -n openstoryline-py311 python --version
conda run -n openstoryline-py311 python -m venv .venv
```
After a supported interpreter is found, always create a repo-local .venv and continue using .venv/bin/python for install, config validation, and service startup.
Do not duplicate the rest of the workflow for pyenv or conda unless the user explicitly asks to stay inside a conda environment.
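The interpreter probing above can be sketched as one helper that returns the first candidate passing both the version and stdlib checks. The candidate names here are illustrative; extend the list for pyenv or conda paths on the machine at hand:

```bash
# pick_python: print the path of the first interpreter that is >= 3.11
# and can import ssl, sqlite3, and venv; fail if none qualifies.
pick_python() {
  for cand in python3.12 python3.11 python3 python; do
    command -v "$cand" >/dev/null 2>&1 || continue
    "$cand" -c 'import sys; sys.exit(0 if sys.version_info >= (3, 11) else 1)' 2>/dev/null || continue
    "$cand" -c 'import ssl, sqlite3, venv' 2>/dev/null || continue
    command -v "$cand"
    return 0
  done
  return 1
}

PYBIN="$(pick_python)" || echo "no supported interpreter found; fall back to conda"
```

On success, use "$PYBIN" -m venv .venv and continue with .venv/bin/python as described above.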
If you don't have a local repository yet, clone the repository first.
```bash
git clone https://github.com/FireRedTeam/FireRed-OpenStoryline.git
cd FireRed-OpenStoryline
```
From the repo root:
```bash
/path/to/python -m venv .venv
.venv/bin/python -m pip install --upgrade pip
.venv/bin/python -m pip install -r requirements.txt
bash download.sh
```
Notes:
- download.sh pulls both model weights and a large resource archive. It can take a long time and may resume after network drops.

Before starting the app, update config.toml.
You can use scripts/update_config.py.
At minimum, fill:
```bash
.venv/bin/python scripts/update_config.py --config ./config.toml --set llm.model=REPLACE_WITH_REAL_MODEL
.venv/bin/python scripts/update_config.py --config ./config.toml --set llm.base_url=REPLACE_WITH_REAL_URL
.venv/bin/python scripts/update_config.py --config ./config.toml --set llm.api_key=sk-REPLACE_WITH_REAL_KEY
.venv/bin/python scripts/update_config.py --config ./config.toml --set vlm.model=REPLACE_WITH_REAL_MODEL
.venv/bin/python scripts/update_config.py --config ./config.toml --set vlm.base_url=REPLACE_WITH_REAL_URL
.venv/bin/python scripts/update_config.py --config ./config.toml --set vlm.api_key=sk-REPLACE_WITH_REAL_KEY
```
Optional but common:
- search_media.pexels_api_key for searching media
- generate_voiceover.providers.* (choose one provider)

Run these checks before saying installation is complete:
```bash
.venv/bin/pip check
PYTHONPATH=src .venv/bin/python -c "from open_storyline.config import load_settings; load_settings('config.toml'); print('config_ok')"
```
Also confirm key resources exist:
```bash
test -f .storyline/models/transnetv2-pytorch-weights.pth
test -d resource/bgms
```
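The checks above can be bundled into one pass/fail summary. A hypothetical wrapper (the repo does not ship this script; check_stage is illustrative):

```bash
# check_stage NAME CMD...: run a check and name the stage on failure.
check_stage() {
  name="$1"; shift
  if "$@" >/dev/null 2>&1; then
    echo "ok: $name"
  else
    echo "FAIL: $name" >&2
    return 1
  fi
}

# Run from the repo root; every stage is attempted so the report is complete.
overall=0
check_stage "pip dependencies" .venv/bin/pip check || overall=1
check_stage "config load" env PYTHONPATH=src .venv/bin/python -c \
  "from open_storyline.config import load_settings; load_settings('config.toml')" || overall=1
check_stage "model weights" test -f .storyline/models/transnetv2-pytorch-weights.pth || overall=1
check_stage "bgm resources" test -d resource/bgms || overall=1
if [ "$overall" -eq 0 ]; then echo "install verified"; else echo "install incomplete"; fi
```

Running every stage instead of stopping at the first failure gives the user one complete status report.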
There are two common paths. These are long-running processes. Do not wait for them to exit normally. Treat successful startup log lines or confirmed listening ports as success, and keep the services running in separate shells/sessions as needed.
Manual start:

```bash
PYTHONPATH=src .venv/bin/python -m open_storyline.mcp.server
```

In a second shell:

```bash
PYTHONPATH=src .venv/bin/python -m uvicorn agent_fastapi:app --host 127.0.0.1 --port 8005
```
After a successful install:
- .venv/ exists
- the MCP service is listening (127.0.0.1:8001)
- the web service is listening (127.0.0.1:8005, though run.sh defaults may differ)

Troubleshooting: download.sh is slow or interrupted

Symptom:
- the download stalls or exits partway through
Fix:
- re-run download.sh; wget supports resume behavior here, so it continues rather than restarting

Troubleshooting: a service cannot bind its address

Symptom:
- operation not permitted while binding 127.0.0.1 or 0.0.0.0

Fix:
- prefer 127.0.0.1 over 0.0.0.0 unless external access is required

When reporting status to the user, separate:
- Python package installation
- model and resource downloads
- config.toml completion
- service startup
Do not say "installation complete" if only the Python packages are installed but the resource bundle is still missing.