Use when generating template-driven emoji videos with Alibaba Cloud Model Studio Emoji (`emoji-v1`) from a detected portrait image. Use when producing fixed-style meme or emoji motion clips from a single face image and a selected template ID.
Category: provider
mkdir -p output/aliyun-emoji
python -m py_compile skills/ai/video/aliyun-emoji/scripts/prepare_emoji_request.py && echo "py_compile_ok" > output/aliyun-emoji/validate.txt
Pass criteria: command exits 0 and output/aliyun-emoji/validate.txt is generated.
All generated artifacts are written under output/aliyun-emoji/.

Use Emoji when the user wants a fixed-template facial animation clip rather than open-ended video generation.
Use these exact model strings:
- emoji-detect-v1
- emoji-v1

Selection guidance:
- Call emoji-detect-v1 first to obtain face_bbox and ext_bbox_face.
- Call emoji-v1 only after detection succeeds.

Credentials: set DASHSCOPE_API_KEY in your environment, or add dashscope_api_key to ~/.alibabacloud/credentials.

Detection request (emoji-detect-v1):
- model (string, optional): default emoji-detect-v1
- image_url (string, required)

Generation request (emoji-v1):
- model (string, optional): default emoji-v1
- image_url (string, required)
- face_bbox (array<int>, required)
- ext_bbox_face (array<int>, required)
- template_id (string, required)

Task response fields:
- task_id (string)
- task_status (string)
- video_url (string, when finished)

Example invocation:
python skills/ai/video/aliyun-emoji/scripts/prepare_emoji_request.py \
--image-url "https://example.com/portrait.png" \
--face-bbox 302,286,610,593 \
--ext-bbox-face 71,9,840,778 \
--template-id emoji_001
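For illustration, here is a minimal sketch of a generation request body assembled from the parameters listed above. The field names come from this document; the wrapping `input` object and the exact JSON layout are assumptions, not the verified output of prepare_emoji_request.py.

```python
import json


def build_emoji_request(image_url, face_bbox, ext_bbox_face, template_id,
                        model="emoji-v1"):
    """Assemble a DashScope-style request body for emoji-v1.

    The parameter names match the generation request fields documented
    above; the {"model": ..., "input": {...}} envelope is an assumed
    layout, not a confirmed schema.
    """
    return {
        "model": model,
        "input": {
            "image_url": image_url,
            "face_bbox": face_bbox,          # from emoji-detect-v1
            "ext_bbox_face": ext_bbox_face,  # from emoji-detect-v1
            "template_id": template_id,
        },
    }


body = build_emoji_request(
    "https://example.com/portrait.png",
    [302, 286, 610, 593],
    [71, 9, 840, 778],
    "emoji_001",
)
print(json.dumps(body, indent=2))
```

The bounding boxes are never hand-written: both come from a prior emoji-detect-v1 call on the same image_url.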
The prepared request body is written to output/aliyun-emoji/request.json (honoring OUTPUT_DIR when set).

See references/sources.md for source documentation.
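Since emoji-v1 returns a task_id and a task_status rather than an immediate result, the caller polls until video_url appears. A hedged sketch of that loop, where `get_status` is a caller-supplied (hypothetical) function returning the task record, and the status strings are assumptions about the terminal states:

```python
import time


def poll_task(get_status, task_id, interval=5.0, timeout=300.0):
    """Poll a task until it finishes, then return its video_url.

    get_status is a hypothetical callable: given a task_id it returns
    a dict with the task_id / task_status / video_url fields described
    above. The status names "SUCCEEDED", "FAILED", "CANCELED" are
    assumed terminal states, not confirmed by this document.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        record = get_status(task_id)
        status = record.get("task_status")
        if status == "SUCCEEDED":
            return record.get("video_url")
        if status in ("FAILED", "CANCELED"):
            raise RuntimeError(f"task {task_id} ended with status {status}")
        time.sleep(interval)  # back off between polls
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
```

Keeping the transport behind `get_status` lets the same loop drive either the raw HTTP API or an SDK client.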