Lock character identity, assign @Tag references, and maintain face and hand consistency across multi-character scenes in Seedance 2.0. Covers 360-degree consistency testing and first-frame art direction for image-to-video. Use when a character changes appearance between shots, when building multi-person scenes, or when hands or faces are distorting.
Character fidelity, identity anchoring, and first-frame art direction for Seedance 2.0.
Write the identity block once. Reuse it verbatim across all prompts for this character. Never change nouns mid-project.
```
[Name]: [age range], [build], [skin tone], [hair style/color],
[defining features], [wardrobe description], [emotional energy].
```

Example:

```
Maya: woman mid-30s, lean build, warm brown skin, short natural hair,
sharp eyes, leather jacket over white tank, calm and precise energy.
```
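Because the identity block is a fixed field order reused verbatim, it can be templated. A minimal Python sketch, assuming the field names and the "fail loudly on a missing trait" policy are workflow conventions of ours, not part of any Seedance API:

```python
# Sketch: assemble a reusable character identity block.
# FIELDS mirrors the template above; the builder itself is illustrative.
FIELDS = ["age range", "build", "skin tone", "hair style/color",
          "defining features", "wardrobe description", "emotional energy"]

def identity_block(name: str, **traits: str) -> str:
    """Build the one-line identity block, erroring on any missing field
    so the same nouns are reused verbatim in every prompt."""
    missing = [f for f in FIELDS if f not in traits]
    if missing:
        raise ValueError(f"missing traits: {missing}")
    return f"{name}: " + ", ".join(traits[f] for f in FIELDS) + "."

maya = identity_block(
    "Maya",
    **{"age range": "woman mid-30s", "build": "lean build",
       "skin tone": "warm brown skin", "hair style/color": "short natural hair",
       "defining features": "sharp eyes",
       "wardrobe description": "leather jacket over white tank",
       "emotional energy": "calm and precise energy"})
print(maya)
```

Storing the block as the single source of truth and pasting its output into every prompt is what keeps nouns from drifting mid-project.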
For I2V and R2V, always assign the character reference explicitly:
```
@Image1's character as the subject
@Image1 for facial features and clothing
Use @Image1 and @Image2 for the character's appearance from multiple angles
```
A bare @Image1 with no role instruction is weak.
For two characters, use separate identity anchors:
```
Character A references @Image1.
Character B references @Image2.
Character A throws a right punch at Character B.
Character B blocks with crossed arms.
```
Attribute every action by name. Never use ambiguous pronouns in multi-character prompts.
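The two multi-character rules (one anchor per character, every action attributed by name) are easy to enforce mechanically. A sketch, assuming only the @ImageN convention from the text above; the helper itself is hypothetical:

```python
# Sketch: compose a multi-character prompt with explicit identity anchors.
# Rejects any action line that does not begin with a known character name,
# which is a cheap guard against ambiguous pronouns.
def multi_character_prompt(anchors: dict[str, str], actions: list[str]) -> str:
    """anchors maps character name -> reference tag (e.g. 'Image1')."""
    lines = [f"{name} references @{tag}." for name, tag in anchors.items()]
    for action in actions:
        if not any(action.startswith(name) for name in anchors):
            raise ValueError(f"action must start with a character name: {action!r}")
        lines.append(action)
    return "\n".join(lines)

prompt = multi_character_prompt(
    {"Character A": "Image1", "Character B": "Image2"},
    ["Character A throws a right punch at Character B.",
     "Character B blocks with crossed arms."])
print(prompt)
```

The startswith check is deliberately strict: a line like "He blocks with crossed arms" fails fast instead of producing an ambiguous prompt.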
Upload character body and prop/weapon as separate references:
```
Character appearance references @Image1.
Weapon design references @Image2.
```
This prevents the model from blending weapon details into the character's body geometry.
If hands are not essential to the action: frame waist-up or specify "hands not in frame."
If hands are essential: specify one simple action only.
✅ picks up the glass with right hand
✅ places hand flat on the table
✅ open palm facing camera
❌ intricate finger gestures
❌ typing on keyboard (close-up)
Before committing to a character reference, generate the same character from multiple angles (front, side, three-quarter, back). Place results side by side.
If identity holds across all angles → the reference is production-ready. If identity drifts at any angle → improve the reference or generate from a better image.
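The 360-degree test reduces to a strict all-angles pass. A sketch that records it as data, assuming this four-angle checklist is your own workflow convention:

```python
# Sketch: record the 360-degree consistency test explicitly.
# Angle names mirror the checklist above; the helper is illustrative.
ANGLES = ["front", "side", "three-quarter", "back"]

def reference_ready(results: dict[str, bool]) -> bool:
    """Production-ready only if identity holds at every tested angle;
    refuses to answer if any angle was skipped."""
    missing = [a for a in ANGLES if a not in results]
    if missing:
        raise ValueError(f"untested angles: {missing}")
    return all(results[a] for a in ANGLES)

ready = reference_ready({"front": True, "side": True,
                         "three-quarter": True, "back": True})
```

One failed angle fails the whole reference; there is no partial credit, because drift at any angle will surface once the camera moves.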
The first frame is the primary creative contract for I2V. Everything follows from it.
| In the first-frame image | In the prompt |
|---|---|
| Character identity + costume | Motion (what changes) |
| Pose at start of action | Timing (when things happen) |
| Camera angle + lighting | Camera movement (how frame evolves) |
| Environment composition | Sound |
| Color palette | Constraints |
❌ Wrong lighting direction → forces a re-light → causes flicker
❌ Character mid-action → no room for motion in the prompt
❌ Complex, cluttered background → warps during camera movement
❌ Low resolution → model loses the detail it needs for consistency
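The four failure modes above make a natural preflight check before committing to a first frame. A sketch, where the 720px minimum is an assumed threshold to tune per workflow, not a documented Seedance limit:

```python
# Sketch: preflight a first-frame image against the four failure modes above.
# The resolution threshold is an illustrative assumption.
def first_frame_issues(width: int, height: int, mid_action: bool,
                       cluttered_bg: bool,
                       lighting_matches_prompt: bool) -> list[str]:
    issues = []
    if not lighting_matches_prompt:
        issues.append("wrong lighting direction: re-lighting causes flicker")
    if mid_action:
        issues.append("character mid-action: no room for motion in the prompt")
    if cluttered_bg:
        issues.append("cluttered background: may warp during camera movement")
    if min(width, height) < 720:  # assumed minimum, tune per workflow
        issues.append("low resolution: model loses detail for consistency")
    return issues

clean = first_frame_issues(1920, 1080, mid_action=False,
                           cluttered_bg=False, lighting_matches_prompt=True)
```

An empty list means the frame honors its side of the contract; anything in the list is cheaper to fix in the image than to fight in the prompt.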
Real human face references require identity verification on the Dreamina platform. Use AI-generated portraits or illustrated/3D character references instead. See [skill:seedance-prompt] for content policy.
If identity drifts during generation, reinforce with "@Image1's character as the subject" and reduce motion complexity.