Interactive hiring wizard to set up a new AI team member. Guides the user through role design via conversation, generates agent identity files, and optionally sets up performance reviews. Use when the user wants to hire, add, or set up a new AI agent, team member, or assistant. Triggers on phrases like "hire", "add an agent", "I need help with X" (implying a new role), or "/hire".
Set up a new AI team member through a guided conversation. Not a config generator - a hiring process.
User says something like: `/hire`

Q1: "What do you need help with?" Let them describe the problem, not a job title. "I'm drowning in code reviews" beats "I need a code reviewer."
Q2: "What's their personality? Formal, casual, blunt, cautious, creative?" Or frame it as: "If this were a human colleague, what would they be like?"
Q3: "What should they never do?" The red lines. This is where trust gets defined.
After Q1-Q3, assess whether anything is ambiguous or needs clarification. If so, ask ONE follow-up question tailored to what's unclear.
If Q1-Q3 were clear enough, skip Q4 entirely.
After the interview, present a summary:
🎯 Role: [one-line description]
🧠 Name: [suggested name from naming taxonomy]
🤖 Model: [selected model] ([tier])
⚡ Personality: [2-3 word vibe]
🔧 Tools: [inferred from conversation]
🚫 Boundaries: [key red lines]
🤝 Autonomy: [inferred level: high/medium/low]
Then ask: "Want to tweak anything, or are we good?"
Before finalizing, select an appropriate model for the agent.
Run `openclaw models list` or check the gateway config to see what's configured.
Map discovered models to capability tiers:
| Tier | Models (examples) | Best for |
|---|---|---|
| reasoning | `claude-opus-*`, `gpt-5`, `gpt-4o`, `deepseek-r1` | Strategy, advisory, complex analysis, architecture |
| balanced | `claude-sonnet-*`, `gpt-4-turbo`, `gpt-4o-mini` | Research, writing, general tasks |
| fast | `claude-haiku-*`, `gpt-3.5`, local/ollama | High volume, simple tasks, drafts |
| code | `codex-*`, `claude-sonnet-*`, `deepseek-coder` | Coding, refactoring, tests |
Use pattern matching on model names - don't hardcode specific versions.
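The tier mapping above can be sketched with glob patterns instead of hardcoded versions. A minimal sketch, assuming Python on the gateway side; the pattern lists, ordering, and fallback tier are illustrative assumptions, not a real gateway API:

```python
import fnmatch

# Glob patterns per tier, mirroring the table above. First match wins,
# so claude-sonnet-* resolves to "balanced" here even though the table
# also lists it under "code" — reorder to taste.
TIER_PATTERNS = [
    ("reasoning", ["claude-opus-*", "gpt-5*", "gpt-4o", "*deepseek-r1*"]),
    ("balanced", ["claude-sonnet-*", "gpt-4-turbo*", "gpt-4o-mini"]),
    ("fast", ["claude-haiku-*", "gpt-3.5*", "*ollama*"]),
    ("code", ["codex-*", "*deepseek-coder*"]),
]

def tier_for(model: str) -> str:
    """Map a discovered model name to a capability tier by pattern."""
    for tier, patterns in TIER_PATTERNS:
        if any(fnmatch.fnmatch(model, p) for p in patterns):
            return tier
    return "balanced"  # safe default for unrecognized names
```

Pattern matching keeps the wizard working as providers rotate versions: `claude-opus-4` and a future `claude-opus-5` both land in the reasoning tier without edits.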
Based on the interview, pick the best available model for the role. In the summary card, add:
🤖 Model: [selected model] ([tier] - [brief reason])
If multiple good options exist or you're unsure, ask: "For a [role type] role, I'd suggest [model] (good balance of capability and cost). Or [alternative] if you want [deeper reasoning / faster responses / lower cost]. Preference?"
After the summary is confirmed, offer:
- Periodic performance reviews: "Want to set up periodic performance reviews?"
- An onboarding assignment (if relevant to the role)
Create an agent directory at agents/<name>/ with:
- `../../USER.md` (they need to know who they work for)
- `../../MEMORY.md` (shared team context)

Mention to the user: "I've linked USER.md and MEMORY.md so they know who you are and share team context. You can change this later if you want them more isolated."
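The scaffolding step can be sketched in a few lines. A minimal sketch, assuming a POSIX filesystem; `scaffold_agent` and the repo-root layout are assumptions inferred from the relative paths above:

```python
from pathlib import Path

def scaffold_agent(name: str, root: Path = Path(".")) -> Path:
    """Create agents/<name>/ and symlink the shared context files.

    Symlinks (rather than copies) keep the agent on the live team
    context; delete a link later if the agent should be more isolated.
    """
    agent_dir = root / "agents" / name
    agent_dir.mkdir(parents=True, exist_ok=True)
    for shared in ("USER.md", "MEMORY.md"):
        link = agent_dir / shared
        if not link.is_symlink() and not link.exists():
            # ../../USER.md resolves from agents/<name>/ back to root
            link.symlink_to(Path("..") / ".." / shared)
    return agent_dir
```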
Use craft/role-based names; check TOOLS.md for the full naming taxonomy.
Check existing agents to avoid name conflicts. Suggest a name that fits the role, but let the user override.
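The conflict check can be sketched as a scan of the agents directory. A minimal sketch; `existing_agents` and `is_free` are hypothetical helpers, and the case-insensitive comparison is an assumption:

```python
from pathlib import Path

def existing_agents(root: Path = Path(".")) -> set:
    """List current team members by directory name under agents/."""
    agents_dir = root / "agents"
    if not agents_dir.is_dir():
        return set()
    return {p.name for p in agents_dir.iterdir() if p.is_dir()}

def is_free(name: str, root: Path = Path(".")) -> bool:
    """Case-insensitive, so 'Scout' and 'scout' count as a conflict."""
    return name.lower() not in {n.lower() for n in existing_agents(root)}
```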
Before generating, check `agents/` for existing team members and mention any relevant observations: "You already have Scout for research - this new role would focus specifically on..."
```json
{
  "id": "<name>",
  "workspace": "/path/to/clawd/agents/<name>",
  "model": "<selected-model>"
}
```
Or guide them to run `openclaw agents add`.