Find and check project name availability across domains and package registries using the `available` CLI tool. Use this skill whenever the user wants to brainstorm or generate project names, check if a specific name is available (domains, npm, crates.io, PyPI, etc.), find a name for a new tool/library/app, or assess domain and package registry availability for any name. Also trigger when the user says things like "what should I call this", "is this name taken", "find me a name for", "check if X is available", or is starting a new project and needs naming help.
Find memorable, available project names by combining AI generation with real-time domain and package registry checks.
Generate mode: the user describes what their project does. The tool uses LLMs to brainstorm names, then checks each one against domains and package registries.
Check mode: the user already has candidate names. The tool skips LLM generation and goes straight to availability checking — no API keys needed for this mode.
Always pass `--json` so you can parse and present results cleanly.
```shell
# Generate mode: brainstorm names from a project description
available "description of the project" --json

# Check mode: verify specific candidate names
available --check "name1,name2,name3" --json
```
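Because `--json` writes machine-readable output to stdout (progress and warnings go to stderr), a thin wrapper can drive the tool programmatically. This is an illustrative sketch, not part of the tool: it assumes `available` is on PATH, and `run_available` and `top_candidates` are hypothetical helper names.

```python
import json
import subprocess

def run_available(args):
    """Run the `available` CLI (assumed to be on PATH) with --json and
    parse its stdout. Progress goes to stderr, so stdout is pure JSON."""
    proc = subprocess.run(
        ["available", *args, "--json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout)

def top_candidates(report, min_score=0.7):
    """Filter a parsed report down to strong candidates (score >= 0.7 by default)."""
    return [r["name"] for r in report["results"] if r["score"] >= min_score]
```

For example, `run_available(["--check", "rushq"])` would return the parsed report for a single name, and `top_candidates` narrows it to names worth presenting.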
The tool checks 3 TLDs (.com, .dev, .io) and 10 package registries by default. These defaults give the best results — do NOT pass `--tlds`, `--registries`, or `--models` unless the user specifically asks to customize them. Passing these flags limits the check to only what you specify, which produces worse results.
Optional flags (only if the user requests):

- `--max-names 30` — generate more candidates (default: 20)
- `--tlds com,dev,io,org` — override which TLDs to check
- `--registries npm,crates.io` — override which registries to check (limits to ONLY these)
- `--models claude-opus-4-6` — pick specific LLM models

The tool outputs structured JSON to stdout (progress/warnings go to stderr):
```json
{
  "results": [
    {
      "name": "rushq",
      "score": 0.8,
      "suggested_by": ["claude-opus-4-6"],
      "domains": {
        "available": 3,
        "registered": 0,
        "unknown": 0,
        "total": 3,
        "details": [
          {"domain": "rushq.com", "available": "available"},
          {"domain": "rushq.dev", "available": "available"},
          {"domain": "rushq.io", "available": "available"}
        ]
      },
      "packages": {
        "available": 9,
        "taken": 1,
        "unknown": 0,
        "total": 10,
        "details": [
          {"registry": "crates.io", "available": "available"},
          {"registry": "npm", "available": "taken"}
        ]
      }
    }
  ],
  "models_used": ["claude-opus-4-6"],
  "errors": []
}
```
Key fields:
- `score`: 0.0–1.0 availability score (higher is better). Weights: `.com` = 30%, other TLDs = 10% each, package registries = 50% split evenly.
- `domains.details[].available`: `"available"`, `"registered"`, or `"unknown"`
- `packages.details[].available`: `"available"`, `"taken"`, or `"unknown"`
- `errors`: per-model generation failures (only in generate mode)

After running the command and parsing the JSON, present the results to the user in a clear summary. Focus on the top candidates (score >= 0.7) and highlight what's actually available versus taken.
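The per-result fields can be pulled into a compact summary with a small helper. The field names below follow the sample JSON; the helper itself (`availability_summary`) is an illustrative sketch, not part of the tool:

```python
def availability_summary(result):
    """Summarize one result: which domains are free and which registries
    are taken, using the documented `domains`/`packages` detail fields."""
    free_domains = [
        d["domain"] for d in result["domains"]["details"]
        if d["available"] == "available"
    ]
    taken_registries = [
        p["registry"] for p in result["packages"]["details"]
        if p["available"] == "taken"
    ]
    return {
        "name": result["name"],
        "free_domains": free_domains,
        "taken_registries": taken_registries,
    }
```

Applied to the sample result above, this would list all three domains as free and `npm` as the only taken registry.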
For each top result, show the score and which domains and registries are available versus taken. Group results into tiers by score so the strongest candidates stand out.
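One illustrative way to build those tiers: the 0.7 cutoff comes from the "top candidates" guidance above, while the 0.4 midpoint and the tier names are arbitrary choices for this sketch.

```python
def tier_results(report):
    """Group parsed results into score tiers, best first.
    Cutoffs: >= 0.7 strong (matches the top-candidate guidance),
    >= 0.4 partial (arbitrary midpoint), else weak."""
    tiers = {"strong": [], "partial": [], "weak": []}
    for r in sorted(report["results"], key=lambda r: r["score"], reverse=True):
        if r["score"] >= 0.7:
            tiers["strong"].append(r["name"])
        elif r["score"] >= 0.4:
            tiers["partial"].append(r["name"])
        else:
            tiers["weak"].append(r["name"])
    return tiers
```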
If the user asked about specific names, give a direct answer for each: available or not, and where the conflicts are.
If `errors` is non-empty, mention which models failed, but don't make it the focus — the results from other models are still valid. If all models fail in generate mode, tell the user to check their API keys.
Generate mode requires at least one LLM API key (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `GOOGLE_API_KEY`, or `XAI_API_KEY`); check mode works without any keys. If the user already has candidate names, prefer `--check` rather than generating — it's faster and doesn't need API keys. Install with `cargo install --git https://github.com/brad/available` (adjust URL as needed).