I have a zpool with around 6TB of data on it (including snapshots for the child datasets). For the important data in it, I already have backups at the file level. As I need to perform some rather "dangerous" operations (i.e. this pool migration), I want to back up the whole pool to a different server.
In an ideal world, I would have used send and recv (like so). Unfortunately, this server is btrfs-based with no option to install ZFS. When researching, people recommend plain rsync (e.g. here), but as far as I can see, that would put me back at the file level and, more importantly, I'm not sure the existing dataset snapshots would remain intact. I basically just want to "freeze" the entire pool with a snapshot, send that off to the remote server, and, in case something goes wrong, restore the pool to its previous state (ideally with one command).
Therefore, I'm looking for a solution to back up an entire zpool to a different server running a different filesystem, keeping all datasets and snapshots intact.
`-R` (`--replicate`) will also send descendant filesystems (`man zpool`, search for Example 4). Alternatively, you can create a VM that has a zpool made from one or more virtual disks.

`zfs send`-ing to a file is good enough (it only becomes a pain if you also need to keep incremental backups and expire them as they get older, as in the Q you linked to). For a one-off backup, you can save a full `zfs send -R` to a file on any kind of filesystem and restore from it with `zfs recv` later. It's just a stream of data.

`pigz` is much better in this case: parallelizing the compression will improve transfer speed considerably. As will avoiding an `ssh` connection; if it's safe to `rsh`, that'll be a lot faster, too. Also, use `mbuffer` in the pipeline, as `zfs send ...` tends to send data in large bursts with relatively long idle periods between the bursts:

    zfs send ... | mbuffer ... | pigz ... | mbuffer ... | rsh someone@somehost split ...
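A possible end-to-end sketch of the approach described above. Pool, snapshot, path, and host names here are all hypothetical, and the `mbuffer`/`pigz` options are illustrative defaults, not tuned values:

```shell
# Freeze the entire pool with one recursive snapshot
# (pool name "tank" and snapshot name are hypothetical)
zfs snapshot -r tank@migration-backup

# Send the full replication stream (-R includes all child datasets and
# their snapshots), smooth out the bursty output with mbuffer, compress
# in parallel with pigz, and store it as a plain file on the btrfs server
zfs send -R tank@migration-backup \
  | mbuffer -s 128k -m 1G \
  | pigz \
  | ssh someone@somehost 'cat > /backup/tank-migration.zfs.gz'

# Later, restore the whole pool state from that file: the stream is just
# data, so the backup filesystem never needs to understand ZFS
ssh someone@somehost 'cat /backup/tank-migration.zfs.gz' \
  | pigz -d \
  | zfs recv -F tank
```

If the remote end uses `split` as in the pipeline above, reassembly is just `cat backup-part.* | pigz -d | zfs recv -F tank`, since the parts concatenate back into the original stream.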