Write documentation with real, validated examples. Execute commands through the user to capture actual output. Use for any new documentation or major doc updates.
Write accurate, user-focused documentation with real examples by executing operations through the user.
Ask the user what documentation to write. Options:
ALWAYS start with a clean test cluster to ensure reproducible documentation.
Follow the actual docs (docs/setup/mcp-setup.md) - this validates that they work.
Claude executes all infrastructure steps directly using the Bash tool:

```bash
# Tear down any existing test cluster
kind delete cluster --name dot-ai-test 2>/dev/null || true
rm -f ./kubeconfig.yaml

# Create a fresh Kind cluster with a local kubeconfig
kind create cluster --name dot-ai-test --kubeconfig ./kubeconfig.yaml
export KUBECONFIG=./kubeconfig.yaml

# Install prerequisites (ingress controller)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

# Wait for the ingress controller to be ready
kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=300s
```
Follow docs/setup/mcp-setup.md (skip controller if not needed for the feature)
For unreleased features: build and use local images.

If documenting a feature not yet in published charts:

```bash
# Build the MCP server image
npm run build
docker build -t dot-ai:test .
kind load docker-image dot-ai:test --name dot-ai-test

# Build the agentic-tools plugin image
docker build -t dot-ai-agentic-tools:test ./packages/agentic-tools
kind load docker-image dot-ai-agentic-tools:test --name dot-ai-test
```

Add these flags to `helm install`:

```bash
--set image.repository=dot-ai \
--set image.tag=test \
--set image.pullPolicy=Never \
--set plugins.agentic-tools.image.repository=dot-ai-agentic-tools \
--set plugins.agentic-tools.image.tag=test \
--set plugins.agentic-tools.image.pullPolicy=Never
```
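Assembled into a single command, the overrides above might look like the sketch below. The release name and chart path (`dot-ai`, `./charts/dot-ai`) are assumptions - substitute whatever the setup docs actually use. The snippet only echoes the final command so it can be reviewed before anything is installed:

```shell
# Collect the local-image overrides into one variable
LOCAL_IMAGE_FLAGS="--set image.repository=dot-ai \
--set image.tag=test \
--set image.pullPolicy=Never \
--set plugins.agentic-tools.image.repository=dot-ai-agentic-tools \
--set plugins.agentic-tools.image.tag=test \
--set plugins.agentic-tools.image.pullPolicy=Never"

# Print the full install command for review (release/chart names are hypothetical)
echo helm install dot-ai ./charts/dot-ai $LOCAL_IMAGE_FLAGS
```

`pullPolicy=Never` is what makes this work: it forces the kubelet to use the images loaded via `kind load docker-image` instead of pulling from a registry.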
IMPORTANT: Always use KUBECONFIG=./kubeconfig.yaml for all kubectl/helm commands.
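One way to make that rule hard to forget is a tiny wrapper that pins KUBECONFIG for every command it runs (the function name `kcfg` is made up for illustration; `echo` stands in for a real kubectl call so the sketch runs without a cluster):

```shell
# kcfg: run any command with the test cluster's kubeconfig pinned,
# leaving the user's default kubectl context untouched
kcfg() {
  KUBECONFIG=./kubeconfig.yaml "$@"
}

# Example invocation (echo used so this works without kubectl installed)
kcfg echo kubectl get nodes
```

Because the assignment is scoped to the single command, the shell's own KUBECONFIG (if any) is never overwritten.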
If any step fails or doesn't match existing docs: STOP and discuss whether to update those docs before proceeding.
Why fresh cluster? Ensures documentation examples work from a known clean state and validates setup docs.
Present an outline of sections to write. Example:
1. Overview (what it does, when to use it)
2. Prerequisites
3. Basic Usage (with real examples)
4. Advanced Features
5. API Reference
6. Troubleshooting
7. Next Steps
Get user confirmation on the outline before proceeding.
🚨 CRITICAL: One section at a time. NEVER write multiple sections or the whole doc at once.
For each section:
Key distinction: Infrastructure/setup = Claude runs it. User-facing MCP examples = User runs it and shares output.
NEVER do these:
After all sections are written:
Update mcp-tools-overview.md if adding a new tool guide.
Tell the user: "Documentation complete. Please review the full file and let me know if any adjustments are needed."
For MCP tool operations:
Please send this intent to your MCP client:
"Ingest this document into the knowledge base: [content] with URI: [url]"
Share the response you receive.
For status checks:
Please ask: "Show dot-ai status"
Share what you see for the Vector DB collections.
For bash commands:
Please run:

```bash
kubectl get pods --namespace dot-ai
```

Share the output.
Use full flag names in documented commands (--filename not -f, --namespace not -n, --output not -o). Full flags are more self-documenting for users unfamiliar with the tools.

Relevant documentation paths:
- docs/guides/mcp-*-guide.md
- docs/setup/*.md
- docs/guides/mcp-tools-overview.md
- docs/img/
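As a quick illustration of the flag convention (the command is only printed here, not executed, and the namespace is the one used elsewhere in this workflow):

```shell
# The same command in both spellings; docs should show the long form
documented='kubectl get pods --namespace dot-ai --output wide'   # use in docs
shorthand='kubectl get pods -n dot-ai -o wide'                   # avoid in docs

echo "$documented"
```

Both lines are equivalent to kubectl, but a reader who has never used the tool can parse the long form without consulting `kubectl --help`.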