When setting up root-on-ZFS on FreeBSD, it's worth knowing about boot environments (a concept originally from Solaris):
* https://klarasystems.com/articles/managing-boot-environments...
* https://wiki.freebsd.org/BootEnvironments
* https://man.freebsd.org/cgi/man.cgi?query=bectl
* https://dan.langille.org/category/open-source/freebsd/bectl/
* https://vermaden.wordpress.com/2022/03/14/zfs-boot-environme...
It lets you patch/upgrade an isolated environment without touching the running bits, reboot into that environment, and if things aren't working well boot back into the last known-good one.
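For a concrete feel, a typical upgrade cycle with bectl looks roughly like this (the subcommands are real; the environment name `14.1-p2` is just an example, and the update step is elided):

```shell
# Create a new boot environment cloned from the running one
# (cheap, thanks to ZFS copy-on-write)
bectl create 14.1-p2

# Mount it and apply updates inside, without touching the live system
bectl mount 14.1-p2 /mnt
# ... run freebsd-update / pkg upgrade against the mounted BE ...
bectl umount 14.1-p2

# Make it the default for the next boot, then reboot into it
bectl activate 14.1-p2

# If the new environment misbehaves, boot the previous BE from the
# loader menu (or re-activate it) and destroy the bad one
bectl activate default
bectl destroy 14.1-p2
```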
I would add these to the list:
- https://is.gd/BECTL
- https://vermaden.wordpress.com/2025/11/25/zfs-boot-environme...
Sounds a lot like the A/B update method used widely in Android and, to a lesser extent, for embedded GNU/Linux OTA updates, though A/B uses two distinct boot partitions. Since ZFS is involved here, I assume boot environments take advantage of its copy-on-write mechanism to avoid duplicating the entire boot dataset.
NixOS and Guix use a concept called 'system generations' to do the same without the support of the filesystem. LibOSTree can do the same and is called 'atomic rollback'.
Talking about NixOS, does anybody know of a similar concept in the BSD world (preferably FreeBSD)?
> Talking about NixOS, does anybody know of a similar concept in the BSD world (preferably FreeBSD)?
Well, there's https://github.com/nixos-bsd/nixbsd :)
Best feature of FreeBSD. I have really messed up the system and successfully restored a boot environment snapshot, and everything was fine afterwards.
It happens by default with freebsd-update (I hope the new pkg replacement still does it too)
oh, i didn't know the concept was taken from Solaris. which version of Solaris? and is there any official source that indicates it came from Solaris?
> bectl and this manual page were derived from beadm(8).
* https://man.freebsd.org/cgi/man.cgi?query=bectl#end
> beadm(1M) originally appeared in Solaris.
* https://man.freebsd.org/cgi/man.cgi?query=beadm#end
Solaris Live Upgrade BEs worked with (mirrored) UFS root:
* https://docs.oracle.com/cd/E18752_01/html/821-1910/chapter-5...
* https://www.filibeto.org/sun/lib/solaris8-docs/_solaris8_2_0...
It allowed/s for migration from UFS to ZFS root:
* https://docs.oracle.com/cd/E23823_01/html/E23801/ggavn.html
Is zfs really worth the hassle, for someone who does not have time to play "home sysadmin" more than once or twice a year?
I've just rebuilt my little home server (mostly for samba, plus a little bit of docker for kids to play with). It has a hardware raid1 enclosure, with 2TB formatted as ext4, and the really important stuff is sent to the cloud every night. Should I honestly bother learning zfs...? I see it popping up more and more but I just can't see the benefits for occasional use.
I'd avoid hardware RAID controllers when using ZFS, unless you can put it into "IT" mode or equivalent.
IMHO, there's not much hassle anymore, unless you seek it out. The FreeBSD installer will install to zfs just as well as ufs. This article seems to not take the least hassle path.
Backups using zfs snapshots are pretty nice; you can pretty easily do incremental updates. zfs scrub is great to have. FreeBSD UFS also has snapshots, but doesn't have a mechanism to check data integrity: fsck checks for well formed metadata only. I don't think ext4 has snapshots or data integrity checking, but I haven't looked at it much.
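The incremental backup workflow mentioned here is just a couple of commands. A sketch (pool/dataset names like `tank/home` and `backup/home` are made-up examples):

```shell
# Take point-in-time snapshots of a dataset
zfs snapshot tank/home@monday
zfs snapshot tank/home@tuesday

# First replication: send the full snapshot to a backup pool
zfs send tank/home@monday | zfs recv backup/home

# After that, send only the blocks that changed between snapshots
zfs send -i tank/home@monday tank/home@tuesday | zfs recv backup/home

# Or pipe the incremental stream over ssh to an off-site machine
zfs send -i tank/home@monday tank/home@tuesday | ssh remote zfs recv backup/home
```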
There are articles and people claiming you need ECC to run zfs or that you need an unreasonable amount of memory. ECC is nice to have, but running ZFS without ECC isn't worse than running any other filesystem without ECC; and you only really need a large amount of ram if you run with deduplication enabled, but very few use cases benefit from deduplication, so the better advice is to ensure you don't enable dedup. I wouldn't necessarily run zfs on something with actually small memory like a router, but then those usually have a specialized flash filesystem and limited writes anyway.
If you are interested in keeping backups, including the ability to go back in time to recover accidentally deleted/changed files, then ZFS with its reliable snapshot facility is fantastic. Other file systems offer some version of this, e.g. btrfs, but they don't have the same reliability as ZFS.
Snapshots on ZFS are extremely cheap, since it works on the block level, so snapshots every hour or even 15 minutes are now doable if you so wish. Combine with weekly or monthly snapshots that can be replicated off-site, and you have a pretty robust storage system.
This is all home sysadmin stuff to be sure, but even if you just use it as a plain filesystem, the checksum integrity guarantees are worth the price of admission IMO.
FWIW, software RAID like ZFS mirrors or mdadm is often superior to hardware RAID, especially for home use. If your RAID controller goes blooey, which does happen, then unless you have the exact same controller to replace it, you run a real chance of not being able to mount your drives. Even very basic computers are fast enough to saturate the drives in software these days.
I found ZFS to be very simple to understand; everything is controlled by just two commands (zpool and zfs). Datasets are a huge win over partitions, which seem like such a weird relic of the past once you have tried datasets. Fairly confident you can grasp ZFS in an hour or two, and you can even make a zfs pool from files to mess around with.
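The file-backed playground pool mentioned above goes roughly like this (needs root, but no spare disks; paths and the pool name are arbitrary):

```shell
# Create two 128 MB sparse files to stand in for disks
truncate -s 128m /tmp/disk0 /tmp/disk1

# Build a mirrored pool on top of them
zpool create playpool mirror /tmp/disk0 /tmp/disk1

# Play around: datasets, properties, snapshots...
zfs create playpool/test
zfs set compression=lz4 playpool/test
zfs snapshot playpool/test@before

# Tear it all down when done
zpool destroy playpool
rm /tmp/disk0 /tmp/disk1
```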
Learning effort aside, there’s also the ZFS hardware requirements issue. I bought a four-bay NAS a couple of years ago and looked into TrueNAS. I (somewhat) remember coming across details such as ZFS benefitting from larger amounts of ECC RAM and a higher number of drives than what I had. This post covers details about the different types of caches and resource requirements:
https://www.45drives.com/community/articles/zfs-caching/
The biggest advantage of ZFS, from an operational perspective, is that when you have problems, ZFS tells you why. Checksum errors? Something is wrong with the hard drive or the SATA/SAS cables. Is the disk slow? zfs events will tell you that it spent more than 5 seconds reading sector x from disk /dev/sdf. The zfs CLI commands are super intuitive and make full sense, compared to e.g. virsh, which is just weird for managing VMs.
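The diagnostics described above come from commands like these (the pool name `tank` is an example):

```shell
# One-line health summary: 'all pools are healthy', or only the problem pool
zpool status -x

# Per-device read/write/checksum error counters for a pool
zpool status tank

# Event log, including I/O and checksum-error events
zpool events -v

# Verify every block's checksum against the data, in the background
zpool scrub tank
```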
It's definitely worth the hassle. But if everything works fine for you now, don't bother; ZFS is not going away, and you can learn it later.
zfs is the furthest thing from hassle, really trivial to use and manage. you'll sit down to do some kind of unhinged change to your infrastructure, it will end up taking three commands that complete instantly, and then you will think, "huh, that was easy" and go back to the rest of your life
This is getting lots of upvotes and rightfully so. I think people would love more posts about FreeBSD: especially about ZFS and bhyve (the FreeBSD hypervisor).
It's a bit sad that this Lenovo ThinkCentre isn't using ECC. I use ZFS and know it's good, but I'd prefer to run it on a machine supporting ECC.
I never tried FreeBSD, but I'm reading more and more about it, and it looks like although FreeBSD has always had its regular users, there are now quite a few people curious about trying it out, for a variety of reasons. The possibility of having ZFS by default and a hypervisor without systemd is a big one for me (I run Proxmox, so I'm halfway there, but bhyve looks like it'd allow me to be completely systemd-free).
I'm running systemd-free VMs and systemd-free containers (long live non-systemd PID 1s), so bhyve looks like it could be the final piece of the puzzle to be free of Microsoft/Poettering's systemd.
Is your desktop or laptop using ECC? For data that you are actively modifying the time that it spends on non-ECC RAM on the server is trivial compared to your desktop or laptop.
I'll take ZFS without ECC over hardware RAID with ECC any day.
You express a desire for more FreeBSD posts and then immediately wade into all the typical flame-warring that surrounds most BSD/ZFS posts (systemd, ECC RAM), and it's been that way for over a decade at this point.
"I think people would love more posts about FreeBSD" translates to: "I would love more posts about FreeBSD."
other filesystems are just as susceptible to data corruption from memory errors. this is not a weakness unique to ZFS.