CLI toolkit for infrastructure, development, and system administration. Use when Claude needs to execute git operations or GitHub CLI commands, provision AWS/GCP/Azure/DigitalOcean infrastructure, build or debug Docker containers and Kubernetes clusters, connect via SSH or transfer files, query PostgreSQL/MySQL/Redis databases, process JSON with jq or YAML with yq, write shell scripts with error handling, or debug connectivity and performance issues.
This skill provides patterns and best practices for command-line infrastructure management, development workflows, and system administration.
Reference files (load when needed):
- references/cli-reference.md - Complete command reference for all tools
- references/advanced.md - Complex workflows, scripting patterns, automation
- references/troubleshooting.md - Error messages, debugging strategies

Helper scripts in scripts/:

- retry.sh - Retry commands with exponential backoff
- health-check.sh - Service health monitoring
- backup-postgres.sh - PostgreSQL backup with rotation

| Domain | Tools | When to Use |
|---|---|---|
| Version Control | git, gh | Code management, PRs, issues |
| Cloud CLIs | aws, gcloud, az, doctl | Infrastructure provisioning |
| Containers | docker, kubectl, helm | Container orchestration |
| Remote Access | ssh, scp, rsync, curl | File transfer, API calls |
| Databases | psql, mysql, redis-cli | Database queries |
| Data Processing | jq, yq, grep, awk | Transform structured data |
```bash
# Clone and branch
git clone https://github.com/owner/repo.git && cd repo
git checkout -b feature/new-feature

# Stage, commit, push
git add -p    # Interactive staging
git commit -m "type: description"
git push origin feature/new-feature

# Pull request workflow
gh pr create --fill
gh pr list
gh pr checkout 123
gh pr merge 123 --squash --delete-branch
```
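The workflow above can be wrapped in a small guarded script. This is an illustrative sketch, not part of the toolkit: `valid_commit_msg` and `ship` are hypothetical helpers, and the branch-naming scheme is an assumption.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical helper: enforce the "type: description" commit convention.
valid_commit_msg() {
  [[ "$1" =~ ^(feat|fix|docs|refactor|test|chore):\ .+ ]]
}

# Hypothetical wrapper: branch, commit, push, and open a PR in one step.
ship() {
  local msg=$1
  valid_commit_msg "$msg" || { echo "bad commit message: $msg" >&2; return 1; }
  local branch="feature/$(echo "${msg#*: }" | tr ' ' '-')"
  git checkout -b "$branch"
  git add -p
  git commit -m "$msg"
  git push -u origin "$branch"
  gh pr create --fill
}

# Usage: ship "feat: add login endpoint"
```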
Critical rules:
- Never force-push to shared branches (main, develop)
- Use --force-with-lease instead of --force

```bash
# Build and run
docker build -t myapp:v1 .
docker run -d --name web -p 8080:80 nginx
docker run -it --rm ubuntu:22.04 bash

# Inspect and debug
docker ps -a
docker logs -f container_name
docker exec -it container_name bash

# Cleanup (destructive: removes all unused images and volumes)
docker system prune -a --volumes
```
Critical rules:
- Never use the latest tag in production; pin image versions

```bash
# Context
kubectl config get-contexts
kubectl config use-context my-cluster

# Resources
kubectl get pods -o wide
kubectl describe pod pod-name
kubectl logs -f pod-name

# Apply and scale
kubectl apply -f manifest.yaml
kubectl scale deployment nginx --replicas=3
kubectl rollout undo deployment nginx

# Debug
kubectl get events --sort-by='.lastTimestamp'
kubectl run debug --image=busybox -it --rm -- sh
```
```bash
# Connect
ssh user@hostname
ssh -i ~/.ssh/mykey.pem user@hostname

# Port forwarding
ssh -L 8080:localhost:80 user@server   # Local forward
ssh -J jumphost user@target            # Jump through bastion

# File transfer (prefer rsync for large transfers)
rsync -avz --progress ./src/ user@host:/dst/
rsync -avz --delete ./src/ user@host:/dst/   # Mirror (deletes extra files at destination)
```
```bash
# PostgreSQL
psql -h localhost -U postgres -d dbname
psql -d dbname -c "SELECT * FROM users LIMIT 10"
pg_dump -Fc dbname > backup.dump

# MySQL
mysql -h localhost -u root -p dbname

# Redis (never use KEYS * in production; SCAN iterates without blocking)
redis-cli
SCAN 0 MATCH "user:*" COUNT 100
```
```bash
# Extract and filter
jq '.users[].name' data.json
jq '.users[] | select(.age > 30)' data.json

# Transform
jq '{name: .full_name, email: .contact.email}' data.json
jq '.items | sort_by(.price) | reverse' data.json

# Raw output (no quotes)
jq -r '.name' data.json
```
```bash
# AWS
aws configure
aws s3 sync ./local s3://bucket/dir
aws ec2 describe-instances --query 'Reservations[].Instances[].{ID:InstanceId,State:State.Name}'

# GCP
gcloud auth login
gcloud config set project PROJECT_ID
gcloud compute instances list

# DigitalOcean
doctl auth init
doctl compute droplet list
```
Critical rules:
- Never hardcode or commit cloud credentials; use profiles, environment variables, or IAM roles
- Verify the active account/project/region before running destructive commands
```bash
# Retry a flaky command up to 5 times
scripts/retry.sh 5 curl -f https://api.example.com/health

# With custom delays
scripts/retry.sh -d 2 -m 30 -v 3 docker pull nginx:latest
```
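For reference, the core of a retry helper like scripts/retry.sh reduces to a few lines of shell. This is an illustrative reimplementation, not the actual script (which also supports the -d, -m, and -v flags shown above):

```bash
# Sketch: retry <max_attempts> <command...> with exponential backoff.
retry() {
  local max=$1; shift
  local attempt=1 delay=1
  while true; do
    "$@" && return 0                 # command succeeded
    if (( attempt >= max )); then
      echo "failed after $max attempts: $*" >&2
      return 1
    fi
    sleep "$delay"
    delay=$(( delay * 2 ))           # exponential backoff: 1s, 2s, 4s...
    attempt=$(( attempt + 1 ))
  done
}

# Usage: retry 5 curl -f https://api.example.com/health
```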
```bash
# Run default checks
scripts/health-check.sh

# From config file
scripts/health-check.sh checks.conf

# Config format:
# http:https://api.example.com/health:200
# tcp:localhost:5432
# process:nginx
# disk:/:80
```
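A minimal sketch of parsing those type:target:expected lines, assuming the format shown in the comments (the real scripts/health-check.sh may differ):

```bash
# Sketch: split one config line into its fields.
# Format: type:target[:expected], e.g. "tcp:localhost:5432" or "disk:/:80"
parse_check() {
  local line=$1 type rest
  type=${line%%:*}          # text before the first colon
  rest=${line#*:}           # everything after it (may itself contain colons)
  case "$type" in
    http)    echo "HTTP url=${rest%:*} want_status=${rest##*:}" ;;
    tcp)     echo "TCP host=${rest%%:*} port=${rest##*:}" ;;
    process) echo "PROC name=$rest" ;;
    disk)    echo "DISK mount=${rest%:*} max_used_pct=${rest##*:}" ;;
    *)       echo "unknown check type: $type" >&2; return 1 ;;
  esac
}

parse_check "tcp:localhost:5432"   # → TCP host=localhost port=5432
```

Note ${rest%:*} strips only the last colon-delimited field, so URLs containing colons (the http line above) survive intact.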
```bash
# Basic backup
scripts/backup-postgres.sh mydb /backups

# With options
scripts/backup-postgres.sh -h db.example.com -U admin -k 30 mydb /backups
```
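The retention side of such a backup (the -k flag above) typically amounts to a dated dump plus a find-based prune. A sketch, assuming GNU find and an illustrative naming scheme, not the actual script:

```bash
# Sketch: dated pg_dump plus retention pruning (what "-k 30" implies).
backup_and_prune() {
  local db=$1 dir=$2 keep_days=${3:-30}
  local out="$dir/${db}_$(date +%Y%m%d).dump"
  # Custom-format dump; remove the partial file if the dump fails.
  pg_dump -Fc "$db" > "$out" || { rm -f "$out"; return 1; }
  # Delete dumps older than the retention window.
  find "$dir" -name "${db}_*.dump" -mtime "+$keep_days" -delete
}

# Usage: backup_and_prune mydb /backups 30
```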
```bash
curl -X POST https://api.example.com/data \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"key": "value"}' | jq '.'
```
```bash
# Find errors with context
grep -E "error|exception" app.log | tail -50

# Count by type
grep -oE "Error: [^:]*" app.log | sort | uniq -c | sort -rn
```
```bash
# PostgreSQL
pg_dump -Fc dbname > backup_$(date +%Y%m%d).dump

# Sync to remote
rsync -avz /data/ backup-server:/backups/
```
Load references/cli-reference.md when you need the complete flag-level reference for a tool.
Load references/advanced.md when building complex workflows, scripting patterns, or automation.
Load references/troubleshooting.md when interpreting error messages or choosing a debugging strategy.
Core tools typically pre-installed: git, ssh, curl, grep, awk
Install via apt:

```bash
apt install jq postgresql-client mysql-client redis-tools docker.io htop
```
Cloud CLIs:
```bash
# AWS
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip && sudo ./aws/install

# GitHub CLI
apt install gh
```