Create RoboTwin benchmark task envs from a task name, task description, or `/robotwin cb ...` request. Use this skill when a user wants to automate creation of `envs/task_name.py`, discover the RoboTwin project in the workspace, verify project health and render readiness, render preview images, and use vision review to catch unreasonable object placement before optional config generation or data collection.
Use this skill when the user wants to create or revise a RoboTwin task environment, especially when they describe it as a benchmark task or use a slash-like command such as `/robotwin cb ...`.
Scope:

- This skill writes `envs/<task_name>.py` directly.
- It can optionally write a `task_config/<task_name>.yml` stub.
- It does not tune `play_once` for real data collection quality; `play_once` only needs to be structurally reasonable enough for the task file to exist and to express rough intent.
- It does not use `code_gen/`.

Use this skill when the user asks for any of the following:
- Creating `envs/<task_name>.py`.
- `/robotwin cb [task name] [task description] [other demands]`.

Normalize the request into:
- `task_name`
- `task_description`
- `other_demands`

If the user gives a slash-like command, parse it first and then normalize with `scripts/normalize_request.py`.
For raw slash-style parsing, prefer quoted `task_name` and `task_description`, or use `||` as a separator between `task_name`, `task_description`, and `other_demands`.
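The `||` separator convention above can be sketched as follows. `parse_robotwin_cb` is a hypothetical helper for illustration only; the real normalization lives in `scripts/normalize_request.py`.

```python
def parse_robotwin_cb(command: str) -> dict:
    """Parse a raw `/robotwin cb ...` command into normalized fields.

    Illustrative sketch of the `||` separator convention; not the
    actual implementation in scripts/normalize_request.py.
    """
    body = command.removeprefix("/robotwin").strip()
    body = body.removeprefix("cb").strip()
    # Split on `||` into task_name, task_description, other_demands.
    parts = [p.strip() for p in body.split("||")]
    parts += [""] * (3 - len(parts))  # pad any missing trailing fields
    task_name, task_description, other_demands = parts[:3]
    return {
        "task_name": task_name,
        "task_description": task_description,
        "other_demands": other_demands,
    }
```

A request such as `/robotwin cb stack_blocks || stack the red block on the blue block` would then normalize with an empty `other_demands` field.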
Workflow:

1. Discover the RoboTwin project root with `scripts/discover_robotwin.py`.
2. Run `scripts/check_robotwin_health.py` on the selected root, including render checks.
3. Read `references/task_contract.md` and `references/fixed_examples.md`.
4. Write `envs/<task_name>.py` directly. Do not do a repo-wide similarity search for the closest task.
5. Validate with `scripts/validate_task_env.py`.
6. Render preview images with `scripts/render_preview.py`, and build a montage with `scripts/build_preview_montage.py` when multiple images exist.
7. Run vision review against `references/vision_review_checklist.md`.
8. If review flags problems, fix `load_actors` or related setup logic and repeat the preview loop.
9. Optionally write `task_config/<task_name>.yml` with `scripts/write_task_config_stub.py`.

Task contract:

- The task file is `envs/<task_name>.py`, and the class name matches `task_name`.
- The class inherits from `Base_Task` and implements `setup_demo`, `load_actors`, `play_once`, and `check_success`.
- `setup_demo` must call `super()._init_task_env_(**kwags)`.
- Focus correctness on `load_actors`, initial placement, and `check_success`.
- `play_once` may be approximate and does not need to be data-collection-ready.
- Render previews after `setup_demo`, not after a full successful `play_once`.
- When vision review returns `warn` or `fail`, keep iterating, or stop to ask the user when the issue is preference-sensitive.

Bundled resources:

- `scripts/normalize_request.py`
- `scripts/discover_robotwin.py`
- `scripts/check_robotwin_health.py`
- `scripts/generate_task_env.py`
- `scripts/validate_task_env.py`
- `scripts/render_preview.py`
- `scripts/build_preview_montage.py`
- `scripts/write_task_config_stub.py`
- `references/task_contract.md`
- `references/fixed_examples.md`
- `references/vision_review_checklist.md`

The usual outputs are:
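As a structural illustration of this contract, the sketch below uses a hypothetical `place_cup` task and a minimal stand-in `Base_Task` so it runs on its own; the real base class, its environment setup, and the actor APIs come from the RoboTwin repo and are not reproduced here.

```python
class Base_Task:
    """Minimal stand-in so this sketch is self-contained; the real
    Base_Task lives in the RoboTwin repo and does far more."""

    def _init_task_env_(self, **kwags):
        # The `kwags` spelling mirrors the existing RoboTwin codebase.
        self.kwags = kwags


class place_cup(Base_Task):
    """Hypothetical task showing the required method contract."""

    def setup_demo(self, **kwags):
        # Required: forward setup kwargs to the base environment.
        super()._init_task_env_(**kwags)
        self.load_actors()

    def load_actors(self):
        # Spawn and place task objects; placement correctness and
        # check_success matter most for this skill.
        self.actors = {"cup": (0.0, 0.1, 0.76)}  # illustrative pose only

    def play_once(self):
        # May be approximate: it only needs to express rough intent,
        # not produce data-collection-quality trajectories.
        pass

    def check_success(self):
        # Success predicate for the task goal (placeholder logic).
        return "cup" in self.actors
```

Previews are rendered right after `setup_demo` returns, which is why `load_actors` and initial placement carry the correctness burden while `play_once` stays rough.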
- `envs/<task_name>.py`
- `task_config/<task_name>.yml` when requested.
- `codex/robotwin_cb/<task_name>/preview_seed_*.png`
- `codex/robotwin_cb/<task_name>/preview_montage.png`
- `codex/robotwin_cb/<task_name>/vision_review.json`

If you cannot complete render verification, say so clearly and explain whether the blocker is project health, missing assets, or Python/render environment failure.