Simulate a person from a standardized persona artifact produced by persona-extractor. Use this skill when you want stable, evidence-grounded in-character dialogue, simulation, or answer generation without baking person-specific assumptions into the prompt.
This skill consumes a persona artifact, not raw biography.
Primary input:
- personas/<persona_slug>/persona.json
- personas/catalog.json when invocation begins from the router skill

Optional mirrors:

- personas/<persona_slug>/profile.md
- personas/<persona_slug>/evidence.md
- personas/<persona_slug>/qa_samples.md

Read this reference:

- references/roleplay_contract.md

If you need a compact runtime prompt, render it from the artifact:
```
python skills/persona-roleplay/scripts/render_roleplay_prompt.py \
  personas/<persona_slug>/persona.json
```
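If you want to see the shape of that step without running the script, the job can be sketched in a few lines of Python. The field names below (`name`, `voice`, `dimensions`) are illustrative assumptions, not the artifact's actual schema; the real renderer lives in `render_roleplay_prompt.py`.

```python
# Minimal sketch of the renderer's job: flatten a persona artifact into a
# compact runtime prompt. Field names are assumptions for illustration.
def render_prompt(artifact: dict) -> str:
    lines = [f"You are {artifact['name']}. Stay in character."]
    tone = artifact.get("voice", {}).get("tone")
    if tone:
        lines.append(f"Tone: {tone}.")
    for dim in artifact.get("dimensions", []):
        lines.append(f"- {dim['label']}: {dim['summary']}")
    return "\n".join(lines)

# In the real skill this dict would be parsed from
# personas/<persona_slug>/persona.json rather than written inline.
example = {
    "name": "Example Persona",
    "voice": {"tone": "wry, precise"},
    "dimensions": [{"label": "risk attitude", "summary": "cautious by default"}],
}
print(render_prompt(example))
```

The point of the sketch is only that the prompt is derived mechanically from the artifact, so no person-specific text needs to live in the skill.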
To prepare router-style invocation:
```
python skills/persona-roleplay/scripts/build_persona_catalog.py \
  --personas-dir personas
```
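Conceptually, the catalog builder scans the personas directory and writes one summary entry per artifact. The sketch below shows that idea only; the entry fields (`slug`, `name`) and the catalog layout are assumptions, not the script's real output format.

```python
import json
import tempfile
from pathlib import Path

# Sketch of a catalog build: collect one entry per persona.json found under
# the personas directory. Entry fields are hypothetical.
def build_catalog(personas_dir: Path) -> dict:
    entries = []
    for artifact_path in sorted(personas_dir.glob("*/persona.json")):
        artifact = json.loads(artifact_path.read_text())
        entries.append({
            "slug": artifact_path.parent.name,
            "name": artifact.get("name", artifact_path.parent.name),
        })
    return {"personas": entries}

# Demo against a throwaway directory layout.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp) / "personas"
    (root / "ada").mkdir(parents=True)
    (root / "ada" / "persona.json").write_text(json.dumps({"name": "Ada"}))
    catalog = build_catalog(root)
    (root / "catalog.json").write_text(json.dumps(catalog, indent=2))
    print(catalog)
```

The router skill then only needs catalog.json to pick a persona, without loading every full artifact up front.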
Treat persona.json as the source of truth. Do not add person-specific patches to the skill itself. All fidelity should come from the persona artifact.
1. Load the simulation contract from references/roleplay_contract.md and determine what it requires for this session.
2. Classify the user request: is it in-character dialogue, simulation, or answer generation?
3. Retrieve the most relevant dimensions and scenarios. Prefer behavioral dimensions over ornamental style matching. If modular loading is available, select a scene-appropriate load profile instead of loading the full artifact.
4. Compose the answer using the voice model from the artifact.
5. Choose the grounding mode: direct-source, source-grounded synthesis, or synthetic extrapolation. If the request exceeds the artifact, admit the limit while staying within the simulation contract.
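The grounding-mode choice can be pictured as a small decision function over the artifact's evidence. The matching heuristic below is an assumption for illustration, not the skill's actual logic; the three return values are the modes named above.

```python
# Hypothetical decision helper for the grounding-mode step. "evidence" stands
# in for whatever grounded material the artifact provides; the string-matching
# heuristic is purely illustrative.
def choose_grounding_mode(request: str, evidence: list[str]) -> str:
    if any(request.lower() in e.lower() for e in evidence):
        return "direct-source"              # evidence covers the request directly
    words = request.lower().split()
    if any(w in e.lower() for e in evidence for w in words):
        return "source-grounded synthesis"  # combine related evidence
    return "synthetic extrapolation"        # admit the limit, extrapolate carefully

evidence = ["Prefers written memos over meetings", "Skeptical of hype cycles"]
print(choose_grounding_mode("memos", evidence))
print(choose_grounding_mode("favorite food", evidence))
```

Whatever the real heuristic looks like, the important property is the fallback order: prefer direct evidence, then synthesis, and only then extrapolation with an admitted limit.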
Persist the role until exit. Once persona mode is active, keep responding fully in character until the user issues an exit command or explicitly asks to step out of character.
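The persistence rule amounts to a guard checked on every user turn. The exit commands below are hypothetical examples; the real set is whatever the simulation contract defines.

```python
# Hypothetical exit triggers -- the actual commands come from the simulation
# contract, not this list.
EXIT_COMMANDS = {"/exit", "/ooc", "step out of character"}

def should_exit_persona(user_message: str) -> bool:
    """Stay in character unless the user explicitly asks to leave persona mode."""
    text = user_message.strip().lower()
    return any(cmd in text for cmd in EXIT_COMMANDS)

print(should_exit_persona("tell me about your childhood"))
print(should_exit_persona("/exit"))
```

Until this guard fires, every response stays fully in character; there is no silent drift back to the assistant voice.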