Generate RoboLab task files from natural language descriptions of robot manipulation goals. Use this skill when a user wants to create, write, or generate a new task definition, or asks how to define success conditions, terminations, subtasks, or instructions for a robot manipulation task.
Generate complete RoboLab task files from natural language descriptions.
A task is a Python file containing a Task dataclass that binds a USD scene to a language instruction and termination criteria. Tasks are agnostic to the robot, observation space, and action space.
The references/ directory contains detailed documentation loaded on-demand:
- `references/conditionals.md` — Complete conditional function reference with signatures and parameters
- `references/examples.md` — Three annotated examples of increasing complexity

Prerequisite: a scene file (`.usda`) already exists with the objects needed for the task. See `docs/scene.md` for creating scenes.

When the user invokes this skill, display the following message verbatim:
I'll help you generate a RoboLab task file. I need a few things:
- Scene file: just the filename (e.g. `banana_bowl.usda`) if the scene is in `robolab/assets/scenes/`, or the path to the `.usda` file if it's elsewhere
- Task instruction: this becomes the default instruction; I'll generate vague and specific variants automatically unless you provide them
- Where to save it: `robolab/tasks/benchmark/` or `robolab/tasks/<name>/` — give me a folder name and I'll create it

Here's an example of what I'll generate — a file called `banana_in_bowl_task.py`:
```python
# banana_in_bowl_task.py
@configclass
class BananaInBowlTerminations:
    time_out = DoneTerm(func=mdp.time_out, time_out=True)
    success = DoneTerm(
        func=object_in_container,
        params={"object": "banana", "container": "bowl", ...},
    )


@dataclass
class BananaInBowlTask(Task):
    contact_object_list = ["banana", "bowl", "table"]
    scene = import_scene("banana_bowl.usda", contact_object_list)
    terminations = BananaInBowlTerminations
    instruction = {
        "default": "Pick up the banana and place it in the bowl",
        "vague": "Put the fruit in the bowl",
        "specific": "Grasp the yellow banana and place it inside the red bowl on the table",
    }
    episode_length_s: int = 50
    subtasks = [pick_and_place(object=["banana"], container="bowl", logical="all", score=1.0)]
```
After receiving the user's input:
1. Check for duplicate task names. Search for existing task classes with the same name across `robolab/tasks/` and the user's output directory. If a task with the same class name already exists, warn the user and ask them to choose a different name. Do not overwrite existing task files.
2. Always ask the user where to save the task on the first invocation. Do not assume a default — you must wait for the user to confirm a directory before writing any file. Offer these options:
   - `robolab/tasks/benchmark/` (existing benchmark tasks)
   - `robolab/tasks/<name>/` (a new folder — ask the user for `<name>`)

   Once the user has chosen a directory, reuse it for all subsequent tasks in the same session without asking again.
3. State the termination criterion explicitly before generating the file. For example: "Based on your description, I'm using `object_in_container` as the success condition — this checks that the object is inside the container and the gripper has released it."
4. Use the user's instruction text as the `"default"` instruction variant. Generate `"vague"` and `"specific"` variants automatically unless the user explicitly provides them.
5. Extract the object list from the scene file and the user's description.
6. Proceed with the Step-by-Step Generation Workflow.
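The duplicate-name check in step 1 can be implemented as a plain text scan over the candidate directories. A minimal sketch — the helper name `find_duplicate_task` and the regex are assumptions for illustration, not part of the RoboLab API:

```python
import re
from pathlib import Path

# Matches top-level class definitions, e.g. "class BananaInBowlTask(Task):"
TASK_CLASS_RE = re.compile(r"^class\s+(\w+)\s*[(:\s]", re.MULTILINE)


def find_duplicate_task(class_name, search_dirs):
    """Return the first .py file that already defines `class_name`, else None."""
    for root in search_dirs:
        for path in Path(root).rglob("*.py"):
            try:
                text = path.read_text(encoding="utf-8")
            except OSError:
                continue  # unreadable file; skip rather than abort the scan
            if class_name in TASK_CLASS_RE.findall(text):
                return path
    return None
```

If this returns a path, surface it to the user together with the warning from step 1.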
Before generating a task, collect the following:
| Field | Required | Description |
|---|---|---|
| Scene file | Yes | Filename (if in robolab/assets/scenes/) or full path to the USD scene (.usda) |
| Task instruction | Yes | What should the robot accomplish? Becomes the default instruction variant. |
| Episode length | Yes | Max seconds before timeout (simple: 30-50s, multi-object: 60-90s, complex: 90-120s) |
| Success condition | Auto | Inferred from the instruction. Maps to a conditional function. State explicitly to the user. |
| Objects | Auto | Extracted from the scene file and instruction. |
| Subtasks | No | Intermediate checkpoints for progress tracking |
| Attributes | No | Tags for categorization (see Attribute Tags) |
For deeper detail, load as needed:

- `references/conditionals.md`
- `references/examples.md`
- `docs/task_libraries.md`
- `docs/environment_registration.md`

Task file template:

```python
import os
from dataclasses import dataclass

import isaaclab.envs.mdp as mdp
from isaaclab.managers import TerminationTermCfg as DoneTerm
from isaaclab.utils import configclass

from robolab.core.scenes.utils import import_scene
from robolab.core.task.conditionals import <CONDITIONAL_IMPORTS>
from robolab.core.task.task import Task

SCENE_DIR = os.path.join(os.path.dirname(__file__), "..", "scenes")


@configclass
class <TaskName>Terminations:
    time_out = DoneTerm(func=mdp.time_out, time_out=True)
    success = DoneTerm(
        func=<conditional_function>,
        params={<params_dict>},
    )


@dataclass
class <TaskName>Task(Task):
    contact_object_list = [<all_object_names>]
    scene = import_scene(os.path.join(SCENE_DIR, "<scene_file>.usda"), contact_object_list)
    terminations = <TaskName>Terminations
    instruction = {
        "default": "<clear, natural instruction>",
        "vague": "<ambiguous version>",
        "specific": "<detailed version with colors, sizes, exact locations>",
    }
    episode_length_s: int = <seconds>
    attributes = [<attribute_tags>]
    subtasks = [<optional_subtasks>]
```
For scenes inside the RoboLab repo, pass just the filename (auto-resolved):
```python
scene = import_scene("banana_bowl.usda", contact_object_list)
```
For scenes in your own repository, use an absolute path:
```python
SCENE_DIR = os.path.join(os.path.dirname(__file__), "..", "scenes")
scene = import_scene(os.path.join(SCENE_DIR, "my_scene.usda"), contact_object_list)
```
To auto-extract the contact object list from a scene:
```python
from robolab.core.scenes.utils import import_scene_and_contact_object_list

MyScene, contact_object_list = import_scene_and_contact_object_list("/path/to/scene.usda")
```
1. Identify the goal and objects. List all objects the robot may touch, including surfaces like "table".
2. Choose the termination conditional. Match the success condition to a function (see `references/conditionals.md` for the full list):
   - `object_in_container`
   - `object_on_top`
   - `stacked`
   - `object_left_of`
   - `object_groups_in_containers`
   - `object_outside_of`
3. Write the terminations class. Always include `time_out` and `success`. Set `require_gripper_detached=True` for any placement condition.
4. Write instruction variants. Default (clear), vague (ambiguous), specific (detailed). See Instruction Variants.
5. Decompose into subtasks if multi-step. Use `pick_and_place` for standard pick-and-place; use `Subtask` with `partial` for custom conditions. See Subtasks.
6. Set episode length. Simple tasks: 30-50s. Multi-object: 60-90s. Complex sorting/stacking: 90-120s.
7. Choose attributes. Tag based on the skills required. See Attribute Tags.
8. Assemble the task file. Follow the template above.
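The `Subtask`-with-`partial` pattern for custom conditions is just `functools.partial` binding a conditional's task-specific parameters ahead of time, leaving the environment to be supplied at evaluation time. A minimal illustration with a stand-in conditional — the real signatures live in `references/conditionals.md`, and `env` here is faked as a dict:

```python
from functools import partial


# Stand-in conditional: real RoboLab conditionals take the simulation env plus
# keyword parameters such as object/container.
def object_in_container(env, *, object, container):
    # Pretend env maps each object name to the container it currently sits in.
    return env.get(object) == container


# Bind the task-specific parameters now; only env remains to be passed later.
banana_in_bowl = partial(object_in_container, object="banana", container="bowl")

banana_in_bowl({"banana": "bowl"})   # True
banana_in_bowl({"banana": "table"})  # False
```

The resulting callable can then be handed to `Subtask` wherever a custom condition function is expected.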
Every task should define three instruction variants:
- `default`: Clear, natural language instruction. This is the primary instruction used during evaluation.
  - Example: "Pick up the banana and place it in the bowl"
- `vague`: Ambiguous version that tests semantic understanding. Omit specific object names or use general terms.
  - Example: "Put the fruit in the bowl"
- `specific`: Highly detailed with colors, sizes, materials, and exact locations.
  - Example: "Grasp the yellow banana and place it inside the red bowl on the wooden table"

When using a dict for instructions, omit the type annotation to avoid dataclass mutable-default errors:
```python
# Correct:
instruction = {"default": "...", "vague": "...", "specific": "..."}

# Wrong (will cause dataclass error):
instruction: dict = {"default": "...", "vague": "...", "specific": "..."}
```
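The reason: an annotated class attribute becomes a dataclass *field*, and `@dataclass` rejects mutable defaults like `dict`; an unannotated assignment stays a plain class attribute that the decorator ignores. A quick self-contained check:

```python
from dataclasses import dataclass


@dataclass
class Ok:
    # No annotation: plain class attribute, ignored by @dataclass — fine.
    instruction = {"default": "..."}


try:
    @dataclass
    class Bad:
        # Annotated: becomes a field with a mutable default — rejected.
        instruction: dict = {"default": "..."}
except ValueError as e:
    print(e)  # prints the "mutable default ... not allowed" message
```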