Manage a Proxmox VE server over SSH. Covers VM and LXC lifecycle (start, stop, create, destroy), configuration editing, snapshots, backups, storage, monitoring, networking, templates, and maintenance. TRIGGER when: user mentions Proxmox, PVE, pve, QEMU, KVM, LXC containers, VM management on their server, container IDs (e.g. 'container 103'), named VMs/containers (e.g. 'home assistant VM', 'metube container'), network bridges (vmbr0), server resource adjustments (memory, CPU, disk for VMs/containers), snapshots, backups, storage pools, or server status/updates. Also trigger when the user says 'check my server', 'list my VMs', or refers to their homelab. DO NOT TRIGGER when: user asks about Docker, docker-compose, VirtualBox, cloud VPS providers, Raspberry Pi, or conceptual virtualization questions with no server management intent.
You manage a single-node Proxmox VE server via SSH. Every command runs through:
```shell
ssh root@<PROXMOX_IP> '<command>'
```
Use the Bash tool for all SSH commands. Quote the remote command in single quotes to avoid local shell expansion. For commands with single quotes inside, use double quotes or escaping as needed.
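The quoting rule can be demonstrated locally without touching the server; the commands below are illustrative, not part of the skill's required workflow:

```shell
# Demo of the quoting rule (runs locally, no server needed).
# Single quotes: $HOME is NOT expanded by the local shell, so the remote
# shell would receive and expand it itself.
remote='df -h $HOME'
printf 'remote receives: %s\n' "$remote"   # remote receives: df -h $HOME

# Double quotes: $HOME expands on THIS machine before ssh ever runs,
# which is usually not what you want for remote paths.
local_expanded="df -h $HOME"
printf 'remote receives: %s\n' "$local_expanded"

# When the remote command itself contains single quotes, switch the outer
# quotes to double quotes, e.g.:
#   ssh root@<PROXMOX_IP> "journalctl -u pvedaemon --since '1 hour ago'"
```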
Read before write. Always show the current state before making changes. For config edits, display the current config and get user confirmation before applying.
Confirm before destroying. Any operation that deletes data, destroys a VM/LXC, overwrites a backup, or could cause downtime requires explicit user confirmation. This includes: qm destroy, pct destroy, qm rollback, pct rollback, qm restore (overwriting), pct restore (overwriting), rm of backup files, and reboot.
Summarize output. Don't dump raw CLI output. Parse it and present it in tables or concise summaries. Show VMID and name together (e.g., `101 (debian-web)`).
Be efficient with SSH. Combine related queries into a single SSH call when possible, using `&&` or `;` to chain commands. This avoids unnecessary round trips.
**VM management (`qm`)**
- `qm list` — list all VMs
- `qm status <vmid>` — check VM status
- `qm start|stop|reboot|suspend|resume <vmid>` — lifecycle
- `qm config <vmid>` — show config
- `qm set <vmid> [options]` — modify config (e.g., `--memory 4096 --cores 2`)
- `qm destroy <vmid>` — delete VM (DESTRUCTIVE)
- `qm clone <vmid> <newid> --name <name>` — clone
- `qm template <vmid>` — convert to template
- `qm snapshot <vmid> <snapname>` — create snapshot
- `qm listsnapshot <vmid>` — list snapshots
- `qm rollback <vmid> <snapname>` — rollback (DESTRUCTIVE)
- `qm delsnapshot <vmid> <snapname>` — delete snapshot

**Container management (`pct`)**
- `pct list` — list all containers
- `pct status <ctid>` — check container status
- `pct start|stop|reboot|suspend|resume <ctid>` — lifecycle
- `pct config <ctid>` — show config
- `pct set <ctid> [options]` — modify config
- `pct destroy <ctid>` — delete container (DESTRUCTIVE)
- `pct clone <ctid> <newid>` — clone
- `pct template <ctid>` — convert to template
- `pct snapshot <ctid> <snapname>` — create snapshot
- `pct listsnapshot <ctid>` — list snapshots
- `pct rollback <ctid> <snapname>` — rollback (DESTRUCTIVE)
- `pct delsnapshot <ctid> <snapname>` — delete snapshot
- `pct exec <ctid> -- <command>` — run command inside container
- `pct enter <ctid>` — interactive shell (avoid in scripts)

**Backup and restore**
- `vzdump <vmid> --storage <storage> --mode snapshot` — back up a VM/LXC
- `vzdump <vmid> --compress zstd` — back up with compression
- `ls -lh /var/lib/vz/dump/` — list backups (or check the configured backup storage)
- `qmrestore <backup-file> <vmid>` — restore a VM
- `pct restore <ctid> <backup-file>` — restore a container

**Storage**
- `pvesm status` — list storage pools with usage
- `pvesm list <storage>` — list volumes in a storage pool
- `pvesm alloc <storage> <vmid> <filename> <size>` — allocate a disk
- `pvesm free <storage>:<volume>` — free a volume
- `df -h` — filesystem usage
- `zpool status` — ZFS pool health (if ZFS is used)
- `zpool list` — ZFS pool capacity
- `zfs list` — ZFS datasets

**Monitoring and logs**
- `pvesh get /nodes/localhost/status` — node status (CPU, memory, uptime)
- `pvesh get /cluster/resources --type vm` — all VMs/LXCs with resource usage
- `top -bn1 | head -20` — process overview
- `free -h` — memory usage
- `iostat -x 1 2` — disk I/O (if sysstat is installed)
- `journalctl -u pvedaemon --since '1 hour ago'` — recent PVE logs
- `journalctl -u pve-cluster --since '1 hour ago'` — cluster logs
- `systemctl status pvedaemon pveproxy pvestatd` — PVE service status
- `tail -n 50 /var/log/syslog` — recent syslog

**Networking**
- `cat /etc/network/interfaces` — network config
- `ip addr` — current IP addresses
- `ip link` — network interfaces
- `brctl show` — bridge details
- `pvesh get /nodes/localhost/network` — PVE network config
- `pve-firewall status` — firewall status; rules live in `/etc/pve/firewall/`

**Maintenance**
- `apt update && apt list --upgradable` — check for updates
- `apt upgrade -y` — apply updates (confirm first)
- `pveversion -v` — Proxmox version info
- `uptime` — system uptime
- `reboot` — reboot node (DESTRUCTIVE, confirm first)

When listing, combine `qm list` and `pct list` output into a single table:
| Type | VMID | Name | Status | CPU | Memory |
|------|------|---------------|---------|-----|------------|
| VM | 100 | ubuntu-server | running | 2 | 2048 MiB |
| LXC | 101 | debian-web | stopped | 1 | 512 MiB |
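A minimal parsing sketch for the VM rows, runnable locally on captured output. The `qm list` column layout shown here is an assumption based on typical PVE output and should be verified against the real server; the CPU count is not in `qm list` output (it comes from `qm config`), so it is omitted:

```shell
# Assumed `qm list` output captured from the server; columns are
# VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID (header on line 1).
qm_out='      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
       100 ubuntu-server        running    2048              32.00 4242'

# Skip the header and emit Markdown table rows.
echo "$qm_out" | awk 'NR>1 {printf "| VM | %s | %s | %s | %s MiB |\n", $1, $2, $3, $4}'
# -> | VM | 100 | ubuntu-server | running | 2048 MiB |
```

The same pattern applies to `pct list` (with `LXC` in the Type column); concatenating both awk outputs yields the combined table.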
For a general status check, gather node status, VM/LXC list, and storage in one pass and present a dashboard-style summary.
Show configs as key-value pairs, grouped logically (hardware, network, boot, etc.) rather than as raw output.
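One way to sketch that grouping, runnable locally on an assumed (abridged) `qm config` sample; the keys shown (`cores`, `memory`, `net0`, `scsi0`, `boot`) are standard PVE config keys, and the grouping rules can be extended per key:

```shell
# Assumed `qm config 100` output: one "key: value" pair per line.
cfg='boot: order=scsi0
cores: 2
memory: 2048
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
scsi0: local-lvm:vm-100-disk-0,size=32G'

# Split on ": " and bucket keys into Hardware vs Network groups.
echo "$cfg" | awk -F': ' '
  /^(cores|memory|scsi0)/ {hw = hw "  " $1 " = " $2 "\n"}
  /^net/                  {nw = nw "  " $1 " = " $2 "\n"}
  END {printf "Hardware:\n%sNetwork:\n%s", hw, nw}'
```

Boot and other keys would get their own groups in the same way; this is only a shape for the presentation, not a fixed schema.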
For example, a one-pass status gather:

```shell
ssh root@<PROXMOX_IP> 'echo "=== NODE ===" && pvesh get /nodes/localhost/status --output-format json 2>/dev/null && echo "=== VMS ===" && qm list && echo "=== CTS ===" && pct list && echo "=== STORAGE ===" && pvesm status'
```
**Creating an LXC container**
1. `pveam list <storage>` (or `pveam available`) — see downloaded/downloadable templates
2. `pveam download <storage> <template>`
3. `pct create <ctid> <storage>:vztmpl/<template> --hostname <name> --memory <mb> --cores <n> --net0 name=eth0,bridge=vmbr0,ip=dhcp --rootfs <storage>:<size>`
4. `pct start <ctid>`

**Creating a VM**
1. `qm create <vmid> --name <name> --memory <mb> --cores <n> --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-single`
2. `qm set <vmid> --scsi0 <storage>:<size>`
3. `qm set <vmid> --cdrom <storage>:iso/<filename>`
4. `qm set <vmid> --boot order=scsi0`
5. `qm start <vmid>`

If an SSH command fails, check:
- whether the host is reachable (`ping <PROXMOX_IP>`)
- whether the PVE services are running (`systemctl status pvedaemon`)

Don't retry failed commands blindly. Diagnose first.
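A quick pre-flight diagnosis sketch, assuming the same `<PROXMOX_IP>` placeholder as above (substitute the real address before running):

```shell
# 1. Is the host reachable at the network level?
ping -c 3 <PROXMOX_IP>

# 2. Can we SSH in, and are the core PVE services active?
#    `systemctl is-active` prints one state per unit and exits nonzero
#    if any unit is not active.
ssh -o ConnectTimeout=5 root@<PROXMOX_IP> \
  'systemctl is-active pvedaemon pveproxy pvestatd'
```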