Use when configuring, managing, or troubleshooting Proxmox VE - installation, host administration, clusters, VMs, containers, storage, Ceph, SDN, firewall, user management, HA, backups, notifications, and CLI tools. Covers Proxmox VE 9.1.2.
| Check | Command |
|---|---|
| Environment | pveversion --verbose (full component versions: kernel, pve-manager, corosync, qemu-server) |
| Nodes | pvecm nodes (count, names, IDs) / pvecm status (IPs, quorum, transport, link states) |
| Quorum | pvecm status -- check "Quorate" field. Threshold: floor(total_nodes/2) + 1 |
| API Access | pvesh (local CLI), pveproxy (HTTPS :8006), REST API. Use --output-format json for scripts |
| Storage | pvesm status (active storages) / cat /etc/pve/storage.cfg (full config) |
| Network | pvesh get /nodes/{node}/network --output-format json / cat /etc/network/interfaces |
| Subscription | pvesubscription get (check status) / pvesubscription set <key> (apply key) |
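The quorum threshold from the table above is simple integer arithmetic; a minimal sketch in plain shell (runs anywhere, no PVE host required):

```shell
#!/usr/bin/env bash
# Quorum threshold: floor(total_nodes / 2) + 1
# Bash integer division already floors, so no extra step is needed.
quorum_threshold() {
    local total=$1
    echo $(( total / 2 + 1 ))
}

quorum_threshold 3   # prints 2
quorum_threshold 4   # prints 3 -- even node counts gain nothing; add a QDevice
```

Note that 4 nodes need 3 votes, the same resilience as 3 nodes, which is why even-node clusters should use a QDevice.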
| Protocol | Ports | Purpose |
|---|---|---|
| UDP | 5405-5412 | Corosync cluster traffic |
| TCP | 22 | SSH |
| TCP | 8006 | Web GUI / API (HTTPS) |
| TCP | 3128 | SPICE proxy |
| TCP | 5900-5999 | VNC console |
| TCP | 60000-60050 | Live migration (default) |
| TCP | 5403 | QDevice (corosync-qnetd) |
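When a host firewall sits between nodes, the port table above translates into allow rules. A hedged sketch that only *prints* iptables-style rules for review (nothing is applied; the trusted subnet is an assumption you must adapt):

```shell
#!/usr/bin/env bash
# Print (not apply!) allow rules for the PVE ports listed above.
# Review the output before feeding it to a shell on a real host.
print_pve_rules() {
    local src=$1   # trusted cluster subnet, e.g. 10.10.10.0/24 (assumption)
    echo "iptables -A INPUT -s $src -p udp --dport 5405:5412 -j ACCEPT   # corosync"
    echo "iptables -A INPUT -s $src -p tcp --dport 22 -j ACCEPT          # SSH"
    echo "iptables -A INPUT -s $src -p tcp --dport 8006 -j ACCEPT        # GUI/API"
    echo "iptables -A INPUT -s $src -p tcp --dport 3128 -j ACCEPT        # SPICE"
    echo "iptables -A INPUT -s $src -p tcp --dport 5900:5999 -j ACCEPT   # VNC"
    echo "iptables -A INPUT -s $src -p tcp --dport 60000:60050 -j ACCEPT # migration"
}

print_pve_rules 10.10.10.0/24
```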
Time sync: timedatectl status. Use chrony or systemd-timesyncd. Drift causes auth and replication failures.
Redundant corosync links can be prioritized: --link0 IP,priority=15 --link1 IP,priority=20.
# Create cluster (optionally with dedicated network)
pvecm create CLUSTERNAME
pvecm create CLUSTERNAME --link0 10.10.10.1
# Get join information (run on existing node)
pvecm join-info
# Join cluster (run on joining node)
pvecm add IP-ADDRESS-CLUSTER
pvecm add IP-ADDRESS-CLUSTER --link0 LOCAL-IP # separated cluster network
# Remove node (migrate all VMs/CTs first!)
# On the node being removed:
systemctl stop pve-cluster corosync
# On a remaining node:
pvecm delnode NODENAME
rm -rf /etc/pve/nodes/NODENAME # clean up
# Emergency quorum override (use with extreme caution)
pvecm expected 1
# Update/regenerate node certificates
pvecm updatecerts
pvecm updatecerts --force # force new SSL cert
pvecm updatecerts --unmerge-known-hosts # clean legacy SSH
Provides external vote for even-node clusters (especially 2-node). Grants vote to only one partition in split-brain.
# Install (external server)
apt install corosync-qnetd
# Install (all PVE nodes)
apt install corosync-qdevice
# Setup (run on ONE PVE node)
pvecm qdevice setup <QDEVICE-IP>
# Remove
pvecm qdevice remove
# Verify
pvecm status # look for "Qdevice" in Flags
Status flags: A = Alive, V = Voting, MW = Master Wins, NA = Not Alive, NV = Not Voting, NR = Not Registered.
Important:
| Rule | Details |
|---|---|
| No destructive bulk ops | Commands affecting >1 node (reboot, shutdown, bulk VM stop) require explicit user confirmation. |
| Quorum check | Before node reboot/maintenance: pvecm status. If (current - 1) < floor(total/2) + 1 -- abort. On 2-node clusters, check QDevice. |
| Backup first | Before VM config changes or upgrades, verify backup exists: pvesm list {storage} --content backup |
| No force delete | Never --purge or --force without verifying resource ID: qm config {vmid} or pct config {vmid} |
| HA status | Before HA resource changes: ha-manager status -- verify no migrations/fencing in progress. |
| Network changes | Always test first: ifreload -a --test. Never ifdown on a bridge carrying guest traffic. |
| Corosync config | Copy first, edit copy, backup current, then move. Always increment config_version. |
| Storage ops | Before removing storage: pvesm list {storage} to verify no VMs/CTs reference it. |
| Protection flag | qm set {vmid} --protection 1 / pct set {vmid} --protection 1 to prevent accidental deletion. |
Corosync config change procedure:
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new # 1. copy
# edit corosync.conf.new (increment config_version!)
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak # 2. backup current
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf # 3. activate
systemctl status corosync # 4. verify
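The config_version increment in step 1 is easy to forget; a sketch that automates the bump on the working copy (awk only, prints to stdout so the live file is never touched):

```shell
#!/usr/bin/env bash
# Increment config_version inside a corosync.conf copy.
# Reads the file given as $1 and prints the bumped config to stdout;
# redirect to the .new file yourself, never edit /etc/pve/corosync.conf in place.
bump_config_version() {
    awk '/^[[:space:]]*config_version:/ {
        sub(/[0-9]+/, $2 + 1)   # replace the version number with number+1
    } { print }' "$1"
}

# Example (hypothetical paths):
#   bump_config_version /root/corosync.conf.new > /root/corosync.conf.bumped
```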
Mandatory. Perform this review before every action executed directly on a Proxmox node, VM, or container (commands, config changes, service operations).
Exception: Read-only operations — reading logs, querying status, gathering information without changing anything — are considered safe and can be performed immediately without this review.
Consider the planned action or configuration to be executed on the Proxmox server in its ideal, most successful form. Imagine this action represents the safest, most efficient, and optimally integrated solution. Articulate how this action perfectly fulfills the goals of system administration, increases stability, and fully meets user needs.
Review the planned Proxmox action as if you were an experienced admin or security analyst seeking to uncover potential risks. Identify possible weaknesses in the configuration, consider outage risks, data loss, or unintended side effects. Test the robustness of the plan under worst-case scenarios to ensure it remains safe and reliable even under pressure.
At the end of the red-team review, assess whether the identified risks or weaknesses are severe enough to stop the planned action. If the action appears safe, logical, and robust, it can be executed immediately. If significant uncertainties remain, the user must be explicitly asked for confirmation. Conclude with a clear decision: either "Execute action" or "Ask user for confirmation".
pvecm status # cluster status
pvecm nodes # list nodes
pveversion --verbose # PVE version details
pvesh get /nodes --output-format json # node list (API)
pvesh get /nodes/{node}/status --output-format json # node metrics
pvesh get /cluster/nextid # next free VMID
pvenode config get # node config
pvenode config set --description "node description" # set config
pvenode task list --limit 20 # recent tasks
pvenode task log {UPID} # task log
pvenode task status {UPID} # task status
pvenode migrateall {target_node} --maxworkers 4 # evacuate node
pvenode startall # start onboot guests
pvenode stopall --timeout 300 # stop all guests
pvenode wakeonlan {node} # Wake-on-LAN
qm list                                # list VMs on node
qm config {vmid} # show VM config
qm status {vmid} # VM status
qm create {vmid} --name myvm --memory 2048 --cores 2 \
--net0 virtio,bridge=vmbr0 --scsi0 local-lvm:32 \
--scsihw virtio-scsi-single --ostype l26 # create VM
qm start {vmid} # start
qm shutdown {vmid} # graceful shutdown
qm stop {vmid} # hard stop
qm reboot {vmid} # reboot
qm reset {vmid} # hard reset
qm suspend {vmid} # suspend
qm resume {vmid} # resume
qm destroy {vmid} # delete VM
qm destroy {vmid} --destroy-unreferenced-disks # + cleanup orphans
qm migrate {vmid} {target} --online # live migration
qm migrate {vmid} {target} --online --with-local-disks # + local disks
qm migrate {vmid} {target} --online \
--migration_network 10.1.2.0/24 # dedicated net
qm clone {vmid} {newid} --name cloned-vm --full # full clone
qm clone {vmid} {newid} --name linked-clone # linked (template)
qm template {vmid} # convert to template
qm snapshot {vmid} {snapname} # create
qm listsnapshot {vmid} # list
qm rollback {vmid} {snapname} # rollback
qm delsnapshot {vmid} {snapname} # delete
qm set {vmid} --memory 4096 --cores 4 # modify resources
qm disk resize {vmid} scsi0 +10G # resize disk
qm disk move {vmid} scsi0 {target_storage} # move disk
qm disk import {vmid} /path/to/disk.qcow2 {storage} # import disk
qm unlock {vmid} # remove lock
qm set {vmid} --ciuser admin --cipassword secret \
--ipconfig0 ip=10.0.0.10/24,gw=10.0.0.1 \
--sshkeys /path/to/keys.pub # configure
qm cloudinit update {vmid} # regenerate drive
qm cloudinit dump {vmid} user # dump config
qm agent {vmid} ping # test agent
qm agent {vmid} fsfreeze-freeze # freeze filesystems
qm agent {vmid} fsfreeze-thaw # thaw filesystems
qm monitor {vmid} # QEMU monitor
qm set {vmid} --boot order=scsi0;ide2;net0 # boot order
qm set {vmid} --startup order=10,up=30,down=60 # startup order
qm set {vmid} --onboot 1 # autostart
qm set {vmid} --tags "production;web" # tags
qm set {vmid} --protection 1 # protect
qm set {vmid} --hotplug network,disk,usb,cpu,memory # hotplug
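A common workflow combines several of the qm commands above: cloning a cloud-init template into a configured, protected VM. A hedged sketch that only echoes the commands (dry run) so the plan can be reviewed first; the template VMID 9000, name, and gateway convention are assumptions:

```shell
#!/usr/bin/env bash
# Dry-run builder: prints the qm commands to clone a cloud-init template.
# Nothing is executed -- pipe the output to a shell only after review.
clone_from_template() {
    local tmpl=$1 newid=$2 name=$3 ip=$4
    echo "qm clone $tmpl $newid --name $name --full"
    # Gateway assumed to be .1 on the VM's subnet (an assumption; adjust as needed)
    echo "qm set $newid --ipconfig0 ip=$ip,gw=${ip%.*}.1"
    echo "qm set $newid --onboot 1 --protection 1"
    echo "qm start $newid"
}

clone_from_template 9000 120 web01 10.0.0.10/24
```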
pct list                               # list CTs
pct config {vmid} # show config
pct create {vmid} {storage}:vztmpl/{template} \
--hostname myct --memory 1024 --cores 2 \
--net0 name=eth0,bridge=vmbr0,ip=dhcp \
--rootfs {storage}:8 --unprivileged 1 # create CT
pct start {vmid} # start
pct shutdown {vmid} # graceful shutdown
pct stop {vmid} # hard stop
pct reboot {vmid} # reboot
pct destroy {vmid} # delete
pct destroy {vmid} --purge # + remove from HA/backups
pct enter {vmid} # shell access
pct console {vmid} # console
pct exec {vmid} -- {command} # run command
pct push {vmid} {local_path} {ct_path} # push file in
pct pull {vmid} {ct_path} {local_path} # pull file out
pct migrate {vmid} {target_node} # migrate
pct clone {vmid} {newid} --hostname cloned-ct --full # clone
pct template {vmid} # convert to template
pct snapshot {vmid} {snapname} # snapshot
pct listsnapshot {vmid} # list
pct rollback {vmid} {snapname} # rollback
pct delsnapshot {vmid} {snapname} # delete snapshot
pct resize {vmid} rootfs +5G # resize rootfs
pct move-volume {vmid} rootfs {target_storage} # move volume
pct fsck {vmid} # filesystem check
pct fstrim {vmid} # fstrim
pct df {vmid} # disk usage
pct mount {vmid} # mount for rescue
pct unmount {vmid} # unmount
pct set {vmid} --memory 2048 --cores 4 # resources
pct set {vmid} --features nesting=1,fuse=1,keyctl=1 # features
pct set {vmid} --onboot 1 # autostart
pct set {vmid} --startup order=5,up=15,down=30 # startup order
pct set {vmid} --tags "database;staging" # tags
pct set {vmid} --protection 1 # protect
Supported OS types: alpine, archlinux, centos, debian, devuan, fedora, gentoo, nixos, opensuse, ubuntu, unmanaged.
pveam update                           # update database
pveam available # list all available
pveam available --section system # system templates
pveam available --section turnkeylinux # turnkey templates
pveam download {storage} {template_name} # download
pveam list {storage} # list downloaded
pveam remove {storage}:vztmpl/{template_name} # remove
pvesm status                           # overview all stores
pvesm status --storage {storage_id} # specific store
cat /etc/pve/storage.cfg # full config
pvesm list {storage} # list content
pvesm list {storage} --content images # VM disks only
pvesm list {storage} --content backup # backups only
pvesm list {storage} --content iso # ISOs only
pvesm list {storage} --content vztmpl # CT templates only
Available types: dir, nfs, cifs, pbs, zfspool, lvm, lvmthin, iscsi, iscsidirect, rbd, cephfs, btrfs, zfs, esxi
# NFS
pvesm add nfs mynas --server 10.0.0.50 --export /data \
--content images,backup,iso
# LVM-thin
pvesm add lvmthin local-lvm --vgname pve --thinpool data \
--content images,rootdir
# Proxmox Backup Server
pvesm add pbs mypbs --server 10.0.0.60 --datastore mystore \
--username user@pbs --password secret --fingerprint AA:BB:...
# Ceph RBD
pvesm add rbd ceph-pool --pool mypool --content images,rootdir \
--monhost "10.0.0.1;10.0.0.2;10.0.0.3"
# CephFS
pvesm add cephfs ceph-fs --content vztmpl,iso,backup \
--monhost "10.0.0.1;10.0.0.2;10.0.0.3"
# Local directory
pvesm add dir mydir --path /mnt/data \
--content images,backup,iso,vztmpl --mkdir 1
pvesm set {storage_id} --content images,backup # update content types
pvesm set {storage_id} --nodes node1,node2,node3 # restrict to nodes
pvesm set {storage_id} --shared 1 # mark as shared
pvesm set {storage_id} --bwlimit \
clone=100000,migration=200000,restore=150000 # bandwidth limits
pvesm remove {storage_id} # remove config (NOT data)
pvesm alloc {storage} {vmid} vm-{vmid}-disk-0 32G # allocate disk
pvesm free {storage}:{volume} # delete volume
pvesm path {storage}:{volume} # get filesystem path
pvesm extractconfig {backup_volume} # extract config from backup
pvesm scan nfs {server_ip} # NFS exports
pvesm scan cifs {server_ip} # CIFS shares
pvesm scan iscsi {portal_ip} # iSCSI targets
pvesm scan pbs {server} {username} --password {pass} # PBS datastores
pvesm scan lvm # local LVM VGs
pvesm scan lvmthin {vg_name} # local thin pools
pvesm scan zfs # local ZFS pools
pvesm prune-backups {storage} \
--keep-last 5 --keep-daily 7 --keep-weekly 4 \
--keep-monthly 6 # prune
pvesm prune-backups {storage} --keep-last 3 --dry-run 1 # dry run
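What --keep-last actually selects can be illustrated offline; a sketch that simulates keep-last pruning over backup names sorted oldest-first (pure shell, no storage is touched):

```shell
#!/usr/bin/env bash
# Simulate --keep-last N: given backup names oldest-first,
# print "prune" for all but the newest N and "keep" for the rest.
simulate_keep_last() {
    local keep=$1; shift
    local total=$#
    local i=0
    for b in "$@"; do
        i=$(( i + 1 ))
        if (( i <= total - keep )); then
            echo "prune $b"
        else
            echo "keep  $b"
        fi
    done
}

simulate_keep_last 2 b1 b2 b3 b4   # prunes b1 and b2, keeps b3 and b4
```

The real command also layers keep-daily/weekly/monthly buckets on top; run with --dry-run 1 first, as shown above.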
| Type | Description |
|---|---|
| images | VM disk images |
| rootdir | CT root volumes |
| backup | vzdump backup files |
| iso | ISO images |
| vztmpl | CT templates |
| snippets | Hookscripts, cloud-init snippets |
vzdump {vmid} --storage {storage} --mode snapshot \
--compress zstd # single guest
vzdump --all --storage {storage} --mode snapshot \
--compress zstd # all guests
vzdump --pool mypool --storage {storage} --mode snapshot # by pool
vzdump --all --exclude 100,200 --storage {storage} # exclude guests
vzdump --stop # stop running jobs
| Option | Values / Example |
|---|---|
| Mode | snapshot (recommended), stop (highest consistency), suspend |
| Compression | --compress zstd (recommended), gzip, lzo, 0 (none) |
| Zstd threads | --zstd 0 (half cores), --zstd 4 (explicit) |
| Bandwidth | --bwlimit 50000 (KiB/s) |
| Fleecing (VM) | --fleecing enabled=1,storage=local-lvm |
| Retention | --prune-backups keep-last=3,keep-daily=7,keep-weekly=4 |
| Protected | --protected 1 |
| Exclude paths (CT) | --exclude-path /tmp --exclude-path /var/cache |
| Notes | --notes-template '{{guestname}} on {{node}} ({{vmid}})' |
| PBS change detection | --pbs-change-detection-mode metadata (incremental CT backups) |
Scheduled jobs are stored in /etc/pve/jobs.cfg (schedule syntax: sat 02:00, */6:00, mon..fri 22:00) and executed by the pvescheduler daemon.
# Restore VM
qmrestore {backup_file_or_volume} {vmid}
qmrestore {backup_file} {vmid} --storage {target_storage}
qmrestore {backup_file} {vmid} --live-restore 1 # start while restoring
# Restore CT
pct restore {vmid} {backup_file_or_volume}
pct restore {vmid} {backup_file} --storage {target_storage}
# Inspect backup without restoring
pvesm extractconfig {backup_volume}
Only with PBS storage:
pvesm set {pbs_storage} --encryption-key autogen # auto-generate key
File naming: vzdump-{type}-{vmid}-{YYYY_MM_DD-HH_MM_SS}.{ext} (type: qemu or lxc)
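The naming scheme above can be parsed with a bash regex, e.g. to group backups by guest in a cleanup script; a minimal sketch:

```shell
#!/usr/bin/env bash
# Parse vzdump-{type}-{vmid}-{YYYY_MM_DD-HH_MM_SS}.{ext} into its fields.
parse_backup_name() {
    local f=$1
    if [[ $f =~ ^vzdump-(qemu|lxc)-([0-9]+)-([0-9_]+-[0-9_]+)\. ]]; then
        echo "type=${BASH_REMATCH[1]} vmid=${BASH_REMATCH[2]} ts=${BASH_REMATCH[3]}"
    else
        echo "unrecognized: $f" >&2
        return 1
    fi
}

parse_backup_name vzdump-qemu-100-2024_05_01-02_30_00.vma.zst
# prints: type=qemu vmid=100 ts=2024_05_01-02_30_00
```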
pvesr replicates guest volumes via ZFS snapshots to another node. Reduces migration time and adds redundancy for local storage. Supported storage: ZFS (local) only.
pvesr create-local-job {vmid}-0 {target_node} \
--schedule "*/15" --rate 10 # create (every 15m, 10 MBps)
pvesr list # list jobs
pvesr status # job status
pvesr disable {vmid}-0 # disable
pvesr enable {vmid}-0 # enable
pvesr update {vmid}-0 --schedule "*/5" # change schedule
pvesr delete {vmid}-0 # delete
Schedule examples: */15 (every 15 min), *:00 (hourly), 1:00 (daily at 1am)
Replication network:
# /etc/pve/datacenter.cfg