Reachy Mini guidance for building LLM-driven, voice-enabled, and autonomous robot apps with clean tool-to-motion separation.
Primary reference: ~/reachy_mini_resources/reachy_mini_conversation_app/
If this folder doesn't exist, run
skills/reachy-mini-setup-environment.md to clone the reference apps.
This app demonstrates how to turn Reachy Mini into an intelligent, autonomous robot using LLMs.
The LLM doesn't control motors directly. Instead:
LLM decides → Tool call (e.g., "dance") → Queue move → Control loop executes smoothly
This separation keeps motion smooth and responsive: tool calls return immediately after queueing, and the control loop executes moves at a steady rate regardless of LLM latency.
The conversation app exposes these tools to the LLM:
| Tool | What it does |
|---|---|
| `move_head` | Queue head pose changes |
| `dance` | Play dances from the library |
| `play_emotion` | Play recorded emotions |
| `camera` | Capture and analyze images |
| `head_tracking` | Enable/disable face tracking |
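The actual schemas live in the app's tools/ directory; as an illustrative sketch (names and parameter descriptions here are assumptions, not the app's real schema), a `move_head` tool could be declared to the LLM with a JSON-schema function definition:

```python
# Hypothetical function-calling schema for a move_head tool.
# The real definitions live in src/reachy_mini_conversation_app/tools/.
MOVE_HEAD_TOOL = {
    "type": "function",
    "name": "move_head",
    "description": "Queue a head pose change (yaw/pitch in degrees).",
    "parameters": {
        "type": "object",
        "properties": {
            "yaw": {"type": "number", "description": "Left/right rotation, degrees"},
            "pitch": {"type": "number", "description": "Up/down rotation, degrees"},
            "duration": {"type": "number", "description": "Motion time in seconds"},
        },
        "required": ["yaw", "pitch"],
    },
}
```

The LLM never sees motor commands, only this declarative surface; the handler behind it does nothing but enqueue work.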
Profiles define different personalities, each with its own instructions and enabled tools. They let you create variations (helpful assistant, playful character, etc.) without code changes.
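The real profile format lives in the app's profiles/ directory; as a sketch under that assumption, a profile can be modeled as instructions plus an allow-list of tools:

```python
from dataclasses import dataclass, field

# Illustrative profile structure; the app's actual format lives in
# src/reachy_mini_conversation_app/profiles/.
@dataclass
class Profile:
    name: str
    instructions: str
    enabled_tools: list[str] = field(default_factory=list)

assistant = Profile(
    name="helpful_assistant",
    instructions="You are a helpful robot assistant. Be concise.",
    enabled_tools=["move_head", "camera", "head_tracking"],
)

playful = Profile(
    name="playful_character",
    instructions="You are a playful robot. Dance and emote often.",
    enabled_tools=["dance", "play_emotion", "move_head"],
)
```

Because the tool list is per-profile, the playful character simply never receives the `camera` schema, so the LLM cannot call it.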
There are two options for image analysis (e.g., "what do you see?"):
Basic pattern for a tool that queues robot actions:
```python
from queue import Queue

class RobotController:
    def __init__(self):
        self.move_queue = Queue()

    # This is called by the LLM
    def tool_move_head(self, yaw: float, pitch: float, duration: float = 1.0):
        """Move the robot's head to a position."""
        self.move_queue.put({
            "type": "goto",
            "yaw": yaw,
            "pitch": pitch,
            "duration": duration,
        })
        return "Head movement queued"

    # This runs in the control loop
    def process_queue(self, mini):
        while not self.move_queue.empty():
            move = self.move_queue.get()
            if move["type"] == "goto":
                # create_head_pose is the Reachy Mini SDK's pose helper
                pose = create_head_pose(yaw=move["yaw"], pitch=move["pitch"], degrees=True)
                mini.goto_target(head=pose, duration=move["duration"])
```
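To see the decoupling in isolation, the same pattern can be exercised without hardware. Everything below (`FakeMini` and the inline `create_head_pose`) is a test double standing in for the Reachy Mini SDK, not the SDK itself:

```python
from queue import Queue

def create_head_pose(yaw, pitch, degrees=True):
    # Stub standing in for the Reachy Mini SDK pose helper.
    return {"yaw": yaw, "pitch": pitch}

class FakeMini:
    # Records goto_target calls instead of moving hardware.
    def __init__(self):
        self.calls = []
    def goto_target(self, head=None, duration=None):
        self.calls.append((head, duration))

class RobotController:
    def __init__(self):
        self.move_queue = Queue()
    def tool_move_head(self, yaw, pitch, duration=1.0):
        self.move_queue.put({"type": "goto", "yaw": yaw,
                             "pitch": pitch, "duration": duration})
        return "Head movement queued"
    def process_queue(self, mini):
        while not self.move_queue.empty():
            move = self.move_queue.get()
            if move["type"] == "goto":
                pose = create_head_pose(move["yaw"], move["pitch"])
                mini.goto_target(head=pose, duration=move["duration"])

controller, mini = RobotController(), FakeMini()
controller.tool_move_head(yaw=30.0, pitch=-10.0)  # LLM side: enqueue only
assert mini.calls == []                           # nothing has moved yet
controller.process_queue(mini)                    # control-loop tick
assert len(mini.calls) == 1                       # move executed exactly once
```

The asserts capture the contract: a tool call alone never moves the robot; motion happens only when the control loop drains the queue.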
The conversation app uses OpenAI's Realtime API (compatible with Grok and other providers).
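The wiring lives under realtime/; conceptually, the session sent to the Realtime API bundles a profile's instructions with its enabled tool schemas. A hedged sketch of building such a `session.update` event (field names follow OpenAI's Realtime API; the app's actual code may differ):

```python
def build_session_update(instructions: str, tools: list[dict]) -> dict:
    # Sketch of a Realtime API session.update payload; the app's real
    # implementation lives in src/reachy_mini_conversation_app/realtime/.
    return {
        "type": "session.update",
        "session": {
            "instructions": instructions,
            "tools": tools,
            "tool_choice": "auto",  # let the model decide when to call tools
        },
    }

event = build_session_update(
    "You are Reachy Mini, a friendly desk robot.",
    [{"type": "function", "name": "dance",
      "description": "Play a dance from the library",
      "parameters": {"type": "object", "properties": {}}}],
)
```

Swapping profiles then amounts to sending a different `session.update`, with no changes to the motion code.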
Key files to study:
- `src/reachy_mini_conversation_app/realtime/` - API integration
- `src/reachy_mini_conversation_app/tools/` - Tool definitions
- `src/reachy_mini_conversation_app/profiles/` - Personality profiles