  • When you mentioned btrfs, did you mean the designated backup server was using btrfs rather than zfs? You can still send the file system to a file on the destination server... Commented Aug 29, 2022 at 9:36
  • 1
    I'm consciously doing a bit of handwaving here because I don't feel I have the in-depth zfs experience needed to speak with any kind of authority, but I do know that you can send to a file, so you would not actually need to have a zfs filesystem waiting on the receiving end, just something big enough to store the filesystem as a file. Sending can, AFAIK, only be done on snapshots, but sending with -R (--replicate) will also send descendent filesystems. Commented Aug 29, 2022 at 9:48
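A minimal sketch of sending a snapshot to a plain file on the receiving host; the pool name `tank`, snapshot name `migrate`, user/host and destination path are all hypothetical:

```shell
# Take a recursive snapshot of the whole pool, then stream a full
# replication (-R) of it into an ordinary file over ssh. The receiving
# side needs no zfs at all -- just enough free space for the stream.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | ssh user@backuphost 'cat > /backup/tank-migrate.zfs'
```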
  • 1
    When you say "and also no zpool can be created on the backup server", is that because there are no unused drives/partitions/block-devices on the backup server, or because the zfs module & zfs utils are not installed on it? A zpool can be created using one or more files instead of disks/partitions (just create the files first. see man zpool, search for Example 4). Alternatively, you can create a VM that has a zpool made from one or more virtual disks. Commented Aug 30, 2022 at 0:07
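Along the lines of that man-page example, a pool backed by plain files rather than disks might look like this; the file paths, sizes, and pool name are assumptions (zpool requires absolute paths for file-backed vdevs):

```shell
# Create two sparse backing files, then build a zpool from them
# instead of from real disks/partitions.
truncate -s 4G /var/tmp/zdisk1 /var/tmp/zdisk2
zpool create filepool /var/tmp/zdisk1 /var/tmp/zdisk2
```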
  • 2
    But if all you want is a temporary full backup while rebuilding the system, zfs send-ing to a file is good enough (it only becomes a pain if you also need to keep incremental backups and expire them as they get older...as in the Q you linked to). For just a one-off backup, you can save a full zfs send -R to a file on any kind of filesystem, and restore from it with zfs recv later. It's just a stream of data. Commented Aug 30, 2022 at 0:09
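The one-off round trip described above, sketched with hypothetical names (`tank`, `migrate`, `/backup/...`):

```shell
# Save a full replication stream to a file on any filesystem...
zfs send -R tank@migrate > /backup/tank-migrate.zfs
# ...rebuild the system, recreate the pool, then restore from it.
# -d derives dataset names from the stream; -F rolls back/destroys
# conflicting datasets on the receiving side.
zfs recv -dF tank < /backup/tank-migrate.zfs
```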
  • 1
    @cas pigz is much better in this case - parallelizing the compression will improve transfer speed considerably. As will avoiding an ssh connection - if it's safe to rsh, that'll be a lot faster, too. Also, use mbuffer in the pipeline as zfs send ... tends to send data in large bursts with relatively long idle periods between the bursts: zfs send ... | mbuffer ... | pigz ... | mbuffer ... | rsh someone@somehost split ... Commented Aug 31, 2022 at 17:24
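One way that pipeline could be filled in; the buffer sizes, host, and split chunk size are assumptions, not recommendations:

```shell
# mbuffer on both sides of pigz smooths out zfs send's bursty output;
# pigz compresses on all cores; split chops the stream into 4G files
# on the remote end (the trailing dot is the filename prefix).
zfs send -R tank@migrate \
  | mbuffer -s 128k -m 1G \
  | pigz \
  | mbuffer -s 128k -m 1G \
  | rsh someone@somehost 'split -b 4G - /backup/tank-migrate.zfs.gz.'
```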