Simple Sanoid Backups
Why Sanoid
My home server runs Ubuntu 24.04 on a small 120GB SSD, with the bulk of my data stored on two ZFS mirrors, one for speed, the other for bulk:
riaz@server:~$ zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data  3.62T  2.87T   775G        -         -     6%    79%  1.00x  ONLINE  -
fast   464G   379G  84.6G        -         -    37%    81%  1.00x  ONLINE  -
This is not a terribly complex system, but it works well enough for me. If one drive fails, I go to the shop and buy a replacement. If my OS drive fails, I simply install Ubuntu on a new drive and import the zpools. My server setup scripts live on the zpools so I just install the OS, import the pools, and then run something along the lines of bash fast/scripts/setup_server.sh to set up the server just how I like it.
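For context, that whole recovery boils down to a handful of commands. A rough sketch, assuming the pools mount at their default mountpoints (so fast ends up at /fast) and using the scripts directory mentioned above:

sudo zpool import data
sudo zpool import fast
sudo bash /fast/scripts/setup_server.sh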
Although these zpools are very resilient, one needs to make backups. Why? Well, the entire machine could get fried for some reason. Or a burglar could steal my machine. Or I could enter rm -rf /data by mistake one day.
The best utilities I have found for backing up ZFS datasets are sanoid and syncoid. Sanoid automates (via a cron job or systemd timer) the periodic snapshotting and pruning of datasets (so you can roll back if needed), and syncoid handles sending those snapshots to other zpools.
Here is how I have set it up. I have two backup destinations, backup1 and backup2. Each of them has two child datasets, data and fast, one for each source zpool. Every time I run backup.sh, the backup zpools are imported if they are available, the latest snapshots from the source zpools get sent to the destination zpools, and then the backup pools are exported again. Because syncoid sends snapshots incrementally, it only transfers the data that has changed since the last backup, which makes it quite quick.
This gives me something like ‘hot swap’ functionality where I can plug in a drive, run the script, grab it and go.
┌─────────────────────┐            ┌─────────────────────┐
│       SOURCE        │            │       TARGETS       │
├─────────────────────┤            ├─────────────────────┤
│  ┌───────────────┐  │            │  ┌───────────────┐  │
│  │  zpool:data   │──┼───────────▶│  │ zpool:backup1 │  │
│  └───────────────┘  │            │  │  ┌─────────┐  │  │
│                     │            │  │  │  data   │  │  │
│                     │            │  │  ├─────────┤  │  │
│                     │            │  │  │  fast   │  │  │
│                     │            │  └──┴─────────┴──┘  │
│                     │            │  ┌───────────────┐  │
│  ┌───────────────┐  │            │  │ zpool:backup2 │  │
│  │  zpool:fast   │──┼───────────▶│  │  ┌─────────┐  │  │
│  └───────────────┘  │            │  │  │  data   │  │  │
│                     │            │  │  ├─────────┤  │  │
│                     │            │  │  │  fast   │  │  │
│                     │            │  └──┴─────────┴──┘  │
└─────────────────────┘            └─────────────────────┘

                  ════════════════════▶
                    syncoid transfers
Setup
sudo apt install sanoid
sudo mkdir /etc/sanoid
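The Ubuntu package also sets up a systemd unit to drive sanoid (more on the schedule below). If you want to double-check it is active after installing, something like the following should do it; sanoid.timer is, as far as I know, the unit name the Debian/Ubuntu package ships, so adjust if yours differs:

systemctl status sanoid.timer
systemctl list-timers | grep sanoid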
Sanoid Conf
Here’s what my /etc/sanoid/sanoid.conf looks like. It snapshots everything daily and keeps 30 days’ worth of snapshots.
There are some exceptions - these datasets have large files that change frequently, which would bloat my drives very quickly if snapshotted. But I also want to back them up. So, I just keep 1 snapshot, which I only really use to send to the backup drives via my backup script.
When installing sanoid via the apt package manager, a systemd timer and service are automatically set up to run sanoid every 15 minutes and snapshot/prune as necessary.
# GLOBAL SETTINGS #
[data]
recursive = yes
use_template = daily
[fast]
recursive = yes
use_template = daily
# OVERRIDES #
[data/media]
use_template = one_day
[fast/appdata/mediaserver]
use_template = one_day
[fast/machines/images]
use_template = one_day
# TEMPLATES #
[template_daily]
frequently = 0
hourly = 0
daily = 30
monthly = 0
yearly = 0
autosnap = yes
autoprune = yes
[template_one_day]
frequently = 0
hourly = 0
daily = 1
monthly = 0
yearly = 0
autosnap = yes
autoprune = yes
[template_ignore]
autoprune = no
autosnap = no
monitor = no
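Once this config is in place and the timer has fired at least once, it is worth sanity-checking that snapshots are actually being taken and pruned. A couple of read-only commands for that (the data dataset is just one of mine; substitute your own):

sudo sanoid --monitor-snapshots
zfs list -r -t snapshot -o name,creation data | tail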
Syncoid Script
Once sanoid is happily snapshotting, I can run my backup script to send these snapshots to my backup drives. The script could certainly be simpler, but I got a bit carried away with printing output in nice colours. Basically, here’s what it does:
- Checks whether the backup pools are already imported, and imports them if they are available for import.
- Loops through the backup zpool targets.
- Sends the latest snapshots of all child datasets of the data and fast zpools to the backup target, excluding the ‘big file datasets’. Note that I use --no-sync-snap. This is to avoid the creation of dangling syncoid snapshots.
- If I pass the --full argument, then it’ll back up the big file datasets as well. These won’t be incremental backups - they will destroy the dataset on the backup destination, and perform a full transfer. So this is slow, and I only do it for a special reason - for instance if I am travelling away from home for a while and want to keep a full backup safe.
#!/bin/bash
set -euo pipefail
# Check for root/elevated permissions
if [[ $EUID -ne 0 ]]; then
    echo "Error: This script must be run with elevated permissions (root)." >&2
    echo "Please run with: sudo $0" >&2
    exit 1
fi
# Parse arguments
FULL_BACKUP=false
if [ "${1:-}" == "--full" ]; then
FULL_BACKUP=true
fi
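
# printf format strings for coloured terminal output (only cyan ends up being used below)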
red='\e[1;31m%s\e[0m\n'
green='\e[1;32m%s\e[0m\n'
yellow='\e[1;33m%s\e[0m\n'
blue='\e[1;34m%s\e[0m\n'
magenta='\e[1;35m%s\e[0m\n'
cyan='\e[1;36m%s\e[0m\n'
for pool in backup1 backup2; do
    # Check if pool is already imported
    pool_mounted=$(zpool list | grep "$pool" || true)

    if [ -z "$pool_mounted" ]; then
        # Pool not mounted, check if available for import
        pool_available=$(zpool import 2>/dev/null | grep "$pool" | wc -l || echo "0")
        if [ "$pool_available" == 0 ]; then
            printf "\n$cyan" "$pool pool not available for import and not already mounted. Skipping..."
        else
            printf "\n$cyan" "Importing $pool zpool...."
            zpool import "$pool"
            pool_mounted=$(zpool list | grep "$pool" || true)
        fi
    else
        printf "\n$cyan" "$pool pool is already imported."
    fi

    if [ -z "$pool_mounted" ]; then
        printf "\n$cyan" "Skipping backing up zpool data because zpool $pool is not mounted."
    else
        # Incremental send of everything except the big-file datasets
        printf "\n$cyan" "$pool pool is mounted. Backing up server zpool data to zpool $pool..."
        syncoid -R --identifier="$pool" --no-sync-snap --skip-parent --force-delete --no-privilege-elevation --exclude=data/media data "$pool/data"

        printf "\n$cyan" "Backing up server zpool fast to zpool $pool..."
        syncoid -R --identifier="$pool" --no-sync-snap --skip-parent --force-delete --no-privilege-elevation --exclude=fast/machines/images --exclude=fast/appdata/mediaserver fast "$pool/fast"

        if [ "$FULL_BACKUP" == true ]; then
            # Full (non-incremental) refresh of the big-file datasets
            printf "\n$cyan" "Full backup: Destroying and recreating $pool/data/media..."
            zfs destroy -r "$pool/data/media" 2>/dev/null || true
            printf "\n$cyan" "Full backup: Backing up data/media to $pool..."
            syncoid --identifier="$pool" --no-privilege-elevation data/media "$pool/data/media"

            printf "\n$cyan" "Full backup: Destroying and recreating $pool/fast/machines/images..."
            zfs destroy -r "$pool/fast/machines/images" 2>/dev/null || true
            printf "\n$cyan" "Full backup: Backing up fast/machines/images to $pool..."
            syncoid --identifier="$pool" --no-privilege-elevation fast/machines/images "$pool/fast/machines/images"

            printf "\n$cyan" "Full backup: Destroying and recreating $pool/fast/appdata/mediaserver..."
            zfs destroy -r "$pool/fast/appdata/mediaserver" 2>/dev/null || true
            printf "\n$cyan" "Full backup: Backing up fast/appdata/mediaserver to $pool..."
            syncoid --identifier="$pool" --no-privilege-elevation fast/appdata/mediaserver "$pool/fast/appdata/mediaserver"
        fi

        printf "\n$cyan" "Exporting $pool..."
        zpool export "$pool"
    fi
done
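I run the script by hand whenever a backup drive is plugged in. Invoking it looks something like the following; the path is an assumption based on the scripts directory mentioned earlier, so adjust to wherever you keep it:

sudo bash /fast/scripts/backup.sh
sudo bash /fast/scripts/backup.sh --full   # also refresh the big-file datasets (slow)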