ZFS administration on Bluefin — pools, datasets, snapshots, and delivery options for an immutable host that does not ship ZFS kernel modules.
ZFS is NOT in the Fedora kernel. The CDDL/GPL license incompatibility means OpenZFS is not and will never be in the mainline Linux kernel. Bluefin is based on Fedora — it does not ship ZFS kernel modules out of the box.
Any ZFS usage on Bluefin requires one of three approaches documented below.
- Option A — Run ZFS tools inside a Fedora Distrobox with DKMS. Keeps the host clean and doesn't risk breaking on a bootc upgrade or rebase.
- Option B — Install the ZFS kernel module on the host from the OpenZFS repository. Works, but carries risk: the DKMS module may not survive a bootc upgrade or rebase if the kernel version changes.
- Option C — Run TrueNAS or an OpenZFS server externally and mount shares on Bluefin via NFS or SMB. The host stays completely clean.
# Create a Fedora container
distrobox create --name zfs-box --image registry.fedoraproject.org/fedora:latest
# Enter and install ZFS
distrobox enter zfs-box
sudo dnf install -y https://zfsonlinux.org/fedora/zfs-release-2-5$(rpm --eval "%{dist}").noarch.rpm
sudo dnf install -y zfs
sudo modprobe zfs
# Verify
zpool list
If you need to create or import pools that require direct disk access, recreate the container with elevated privileges:
distrobox create --name zfs-box \
--image registry.fedoraproject.org/fedora:latest \
--additional-flags "--privileged --volume /dev:/dev"
Note: Privileged mode gives the container full access to host devices. Only use this when you need to manage actual disks from inside the container.
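After recreating the box, a quick sanity check confirms the container can actually see the host's disks (a sketch, assuming the box name used above):

```shell
# List block devices as seen from inside the container —
# the host's disks (sda, nvme0n1, ...) should appear here
distrobox enter zfs-box -- lsblk
```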
# Add the OpenZFS repository
sudo rpm-ostree install \
https://zfsonlinux.org/fedora/zfs-release-2-5$(rpm --eval "%{dist}").noarch.rpm
# Install ZFS and the DKMS build tooling
sudo rpm-ostree install zfs zfs-dkms
# Reboot to activate the layered packages
sudo systemctl reboot
# After reboot, load the kernel module
sudo modprobe zfs
# Verify
zpool list
⚠️ Risk: After bootc upgrade, the DKMS module may not rebuild automatically if the kernel version changes. You may need to reinstall the DKMS module after a rebase. See Troubleshooting below.
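After a rebase, you can check whether the module was rebuilt for the running kernel before importing any pools (a sketch; dkms output format varies by version):

```shell
# Which kernels has the zfs module been built for?
dkms status zfs
# Is a module available for the currently running kernel?
modinfo -F version zfs || echo "no zfs module for $(uname -r)"
```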
Mount a TrueNAS or OpenZFS share without touching the host kernel:
# NFS mount (one-time)
sudo mkdir -p /mnt/nas
sudo mount -t nfs 192.168.1.100:/mnt/pool/share /mnt/nas
# Persistent NFS mount — add to /etc/fstab
192.168.1.100:/mnt/pool/share /mnt/nas nfs defaults,_netdev 0 0
# SMB/CIFS mount (one-time)
sudo mount -t cifs //192.168.1.100/share /mnt/nas \
-o username=user,password=pass,uid=$(id -u),gid=$(id -g)
This is the lowest-risk option: the host never loads a ZFS module, and upgrades/rebases have no impact on your storage access.
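For a persistent SMB mount, a credentials file keeps the password out of /etc/fstab and out of your shell history (a sketch — paths, usernames, and the server address are placeholders):

```shell
# Store SMB credentials readable by root only
sudo tee /etc/cifs-credentials >/dev/null <<'EOF'
username=user
password=pass
EOF
sudo chmod 600 /etc/cifs-credentials

# Matching /etc/fstab entry
# //192.168.1.100/share /mnt/nas cifs credentials=/etc/cifs-credentials,uid=1000,gid=1000,_netdev 0 0
```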
# Create a single-disk pool (prefer stable /dev/disk/by-id/ paths over /dev/sdX,
# which can change between boots)
sudo zpool create mypool /dev/sdX
# Create a mirrored pool (RAID-1 equivalent)
sudo zpool create mypool mirror /dev/sdX /dev/sdY
# Create a RAIDZ pool (RAID-5 equivalent)
sudo zpool create mypool raidz /dev/sdX /dev/sdY /dev/sdZ
# Check pool health
sudo zpool status
sudo zpool list
# Import an existing pool (e.g., after moving disks)
sudo zpool import # list all importable pools
sudo zpool import mypool # import by name
# Export a pool (safe unmount before moving disks)
sudo zpool export mypool
# Scrub (integrity check)
sudo zpool scrub mypool
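Recent OpenZFS releases ship per-pool scrub timer units, so scrubs can be scheduled without cron. Whether your build includes them is an assumption — verify with `systemctl list-unit-files 'zfs-scrub*'` first:

```shell
# Enable a monthly scrub for mypool (unit availability depends on the OpenZFS build)
sudo systemctl enable --now zfs-scrub-monthly@mypool.timer
```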
# Create datasets
sudo zfs create mypool/data
sudo zfs create mypool/backups
sudo zfs create mypool/vms
# List datasets
sudo zfs list
# Set common properties
sudo zfs set compression=lz4 mypool/data
sudo zfs set atime=off mypool/data
sudo zfs set quota=100G mypool/data
sudo zfs set recordsize=1M mypool/vms # better for large sequential I/O
# Get a property
sudo zfs get compression mypool/data
# Set mountpoint
sudo zfs set mountpoint=/mnt/data mypool/data
# Take a snapshot
sudo zfs snapshot mypool/data@backup-2024-01-01
# List snapshots
sudo zfs list -t snapshot
# Roll back to a snapshot (destroys data written after the snapshot)
sudo zfs rollback mypool/data@backup-2024-01-01
# Destroy a snapshot
sudo zfs destroy mypool/data@backup-2024-01-01
# Clone a snapshot into a new dataset
sudo zfs clone mypool/data@backup-2024-01-01 mypool/data-clone
# Full send to another pool
sudo zfs send mypool/data@snap1 | sudo zfs receive backuppool/data
# Incremental send (only changes since snap1)
sudo zfs send -i mypool/data@snap1 mypool/data@snap2 | \
sudo zfs receive backuppool/data
# Send over SSH to a remote host
sudo zfs send mypool/data@snap1 | \
ssh backup-host sudo zfs receive backuppool/data
sanoid manages automatic snapshots and retention policies via a config file. syncoid handles replication.
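syncoid's command line reads like scp: source first, then target, with remote ends written as user@host:dataset. A minimal replication sketch (host and pool names are placeholders):

```shell
# Push a dataset and its snapshots to a remote pool over SSH
sudo syncoid mypool/data root@backup-host:backuppool/data

# Local replication to a second pool works the same way
sudo syncoid mypool/data backuppool/data
```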
# Inside Distrobox (Option A) — recommended
distrobox enter zfs-box
sudo dnf install -y sanoid
# On host (Option B)
sudo rpm-ostree install sanoid
# /etc/sanoid/sanoid.conf
[mypool/data]
use_template = production
[mypool/backups]
use_template = backup
[template_production]
frequently = 0
hourly = 24
daily = 30
monthly = 3
yearly = 0
autosnap = yes
autoprune = yes
[template_backup]
frequently = 0
hourly = 0
daily = 90
monthly = 12
yearly = 1
autosnap = no
autoprune = yes
# Manual run
sudo sanoid --take-snapshots --verbose
# Prune old snapshots
sudo sanoid --prune-snapshots --verbose
# Automate with a systemd timer or cron
# cron example — every 15 minutes
*/15 * * * * root /usr/sbin/sanoid --take-snapshots --quiet
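The systemd-timer equivalent of the cron entry above is sketched below. Note the sanoid package may already ship its own timer unit — check `systemctl list-unit-files 'sanoid*'` before creating these files:

```shell
# /etc/systemd/system/sanoid.service
[Unit]
Description=Take and prune sanoid snapshots

[Service]
Type=oneshot
ExecStart=/usr/sbin/sanoid --take-snapshots --prune-snapshots --quiet

# /etc/systemd/system/sanoid.timer
[Unit]
Description=Run sanoid every 15 minutes

[Timer]
OnCalendar=*:0/15
Persistent=true

[Install]
WantedBy=timers.target

# Activate
sudo systemctl daemon-reload
sudo systemctl enable --now sanoid.timer
```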
# Enable ZFS services so pools auto-import and datasets auto-mount at boot
sudo systemctl enable --now zfs-import-cache zfs-mount zfs-share
# Manual mount all datasets in a pool
sudo zfs mount -a
# Module not loaded? Try loading it manually first
sudo modprobe zfs
# If the module is missing (kernel update broke DKMS build):
sudo dkms install zfs/$(modinfo -F version zfs 2>/dev/null || echo "2.2") \
-k $(uname -r)
sudo zpool import # list all importable pools
sudo zpool import -f mypool # force import by name
# Import refused because the pool was last used by another system (hostid mismatch)?
sudo zgenhostid              # generate a new /etc/hostid
sudo zpool import -f mypool
Recreate the Distrobox with device access:
distrobox create --name zfs-box \
--image registry.fedoraproject.org/fedora:latest \
--additional-flags "--privileged --volume /dev:/dev"
sudo zpool status -v mypool # verbose — shows which vdev is faulted
sudo zpool scrub mypool # start integrity scrub
sudo zpool clear mypool # clear error counters after replacing a disk
sudo zpool replace mypool /dev/sdX /dev/sdY # replace old with new disk
sudo zpool status mypool # watch resilver progress
| Scenario | Recommendation |
|---|---|
| Homelab NAS (TrueNAS/OpenZFS server) | Option C — external NAS + NFS/SMB |
| ZFS tools without host risk | Option A — Distrobox |
| Maximum performance on attached disks | Option B — host DKMS (with caveats) |
| Simple secondary disk, no ZFS features needed | Use btrfs or ext4 instead |
| OS/root filesystem | Not applicable — Bluefin uses btrfs |