
By accident, I created a zpool using /dev/sda and so on. I knew this was a bad idea; I just wanted to test my setup. It worked so well that I forgot about the device names and started using the zpool as a NAS, still with sda, sdb, sdc and sdd. My operating system runs from sde. The zpool is already filled with some data, around 16 TB to be precise. So I wonder: is it possible to modify the existing pool to reference the disks by ID instead of by their sdX names?

My NAS has run continuously for about a year, and I have never rebooted it. Could my zpool get destroyed if I reboot now and some disk names change (e.g., because I added new disks in the meantime)?

I read that it may be possible using zpool export and zpool import -d /dev/disk/by-id. Will that trigger a resilver, or what exactly happens during the export and import? Since it is a lot of data, I would prefer not to copy it around; that is simply too much and would take days. My zpool runs in a raidz2 configuration and the operating system is Debian.
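
To see which sdX node currently backs which by-id alias, two read-only commands should be enough (the pool name tank is taken from the output further down, so adjust it if yours differs):

zpool status tank        # shows the vdevs under the names currently stored in the pool configuration
ls -l /dev/disk/by-id/   # maps each by-id symlink to its current sdX node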


This is what I get from zfs get all:

root@pve:~# zfs get all
NAME  PROPERTY               VALUE                  SOURCE
tank  type                   filesystem             -
tank  creation               Sat May 12 15:22 2018  -
tank  used                   1.00T                  -
tank  available              4.26T                  -
tank  referenced             981G                   -
tank  compressratio          1.02x                  -
tank  mounted                no                     -
tank  quota                  none                   default
tank  reservation            none                   default
tank  recordsize             128K                   default
tank  mountpoint             /tank                  default
tank  sharenfs               off                    default
tank  checksum               on                     default
tank  compression            lz4                    local
tank  atime                  off                    local
tank  devices                on                     default
tank  exec                   on                     default
tank  setuid                 on                     default
tank  readonly               off                    default
tank  zoned                  off                    default
tank  snapdir                hidden                 default
tank  aclinherit             restricted             default
tank  createtxg              1                      -
tank  canmount               on                     default
tank  xattr                  on                     default
tank  copies                 1                      default
tank  version                5                      -
tank  utf8only               off                    -
tank  normalization          none                   -
tank  casesensitivity        sensitive              -
tank  vscan                  off                    default
tank  nbmand                 off                    default
tank  sharesmb               off                    default
tank  refquota               none                   default
tank  refreservation         none                   default
tank  guid                   18018951160716445859   -
tank  primarycache           all                    default
tank  secondarycache         all                    default
tank  usedbysnapshots        100M                   -
tank  usedbydataset          981G                   -
tank  usedbychildren         47.5G                  -
tank  usedbyrefreservation   0B                     -
tank  logbias                latency                default
tank  dedup                  off                    default
tank  mlslabel               none                   default
tank  sync                   standard               default
tank  dnodesize              legacy                 default
tank  refcompressratio       1.02x                  -
tank  written                0                      -
tank  logicalused            1004G                  -
tank  logicalreferenced      997G                   -
tank  volmode                default                default
tank  filesystem_limit       none                   default
tank  snapshot_limit         none                   default
tank  filesystem_count       none                   default
tank  snapshot_count         none                   default
tank  snapdev                hidden                 default
tank  acltype                off                    default
tank  context                none                   default
tank  fscontext              none                   default
tank  defcontext             none                   default
tank  rootcontext            none                   default
tank  relatime               off                    default
tank  redundant_metadata     all                    default
tank  overlay                off                    default
tank@zfs-auto-snap_monthly-2019-05-13-1847  type           snapshot               -
tank@zfs-auto-snap_monthly-2019-05-13-1847  creation       Mon May 13 20:47 2019  -
tank@zfs-auto-snap_monthly-2019-05-13-1847  used           0B                     -
tank@zfs-auto-snap_monthly-2019-05-13-1847  referenced     953G                   -
tank@zfs-auto-snap_monthly-2019-05-13-1847  compressratio  1.01x                  -
tank@zfs-auto-snap_monthly-2019-05-13-1847  devices        on                     default
tank@zfs-auto-snap_monthly-2019-05-13-1847  exec           on                     default
tank@zfs-auto-snap_monthly-2019-05-13-1847  setuid         on                     default
tank@zfs-auto-snap_monthly-2019-05-13-1847  createtxg      6244379                -
tank@zfs-auto-snap_monthly-2019-05-13-1847  xattr          on                     default
tank@zfs-auto-snap_monthly-2019-05-13-1847  version        5                      -

I tried to mount it with zfs mount -a, but that fails because the directory /tank is not empty – the other ZFS datasets are in there...

root@pve:~# zpool list -v
NAME                                        SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank                                       5.44T   987G  4.47T         -     1%    17%  1.00x  ONLINE  -
  ata-WDC_WD20EFRX-68EUZN0_WD-WCC4M1KZNLPE 1.81T   326G  1.49T         -     1%    17%
  ata-WDC_WD20EFRX-68EUZN0_WD-WCC4M1YV1ADT 1.81T   329G  1.49T         -     1%    17%
  ata-WDC_WD20EFRX-68EUZN0_WD-WCC4M2CE10DJ 1.81T   332G  1.49T         -     1%    17%

1 Answer


The zpool won't get destroyed if the names of the disks change. The pool will very likely not import automatically, but the data should not be lost, unless there is a script or mechanism that operates directly on block devices with hard-coded paths like /dev/sda and runs without sanity checks. Normally, though, your data is safe.

Importing the pool with zpool import -d /dev/disk/by-id <pool-name> is also safe. No resilvering is needed, and as far as I can tell both the /etc/zfs/zpool.cache file and the on-disk metadata are updated with the new paths.
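
A minimal sketch of the whole procedure, assuming the pool is named tank (as in the question) and that nothing is writing to it during the switch:

zpool export tank                       # unmounts all datasets and marks the pool exported
zpool import -d /dev/disk/by-id tank    # re-imports the pool, scanning only the /dev/disk/by-id symlinks
zpool status tank                       # the vdevs should now be listed by their by-id names

The data itself is untouched; only the device paths recorded in the pool configuration change.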

  • I just did zpool export tank and then zpool import tank -d /dev/disk/by-id. Wow, that gives me an error: a pool with that name already exists. I have a few subvolumes on my zpool; these show up in zpool status, but the directory /tank, where my files normally are, is empty! zpool list, however, still shows that the space is actually used. But how do I get my files back? Commented May 18, 2019 at 17:53
  • Do a zpool export -a and then show the output of zpool import -d /dev/disk/by-id without giving a pool name, just to list the available pools. Commented May 18, 2019 at 17:58
  • The message you are seeing, a pool with that name already exists, indicates that the pool was not exported correctly. Check with zpool list -v after the export to ensure that the pool really was exported. Commented May 18, 2019 at 18:04
  • zpool import -d /dev/disk/by-id shows: no pools available to import. However, zfs get all shows mounted no. Commented May 18, 2019 at 18:06
  • The zfs command manages the datasets that are part of the pool; use zpool list -v to list imported pools. The pool does not seem to be exported. Use zpool export -a; see the sketch below these comments. Commented May 18, 2019 at 18:08
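
A short sketch of the verification loop discussed in the comments, again assuming the pool name tank; the exact messages may differ between ZFS versions:

zpool export -a                         # export every imported pool (stop any service still using /tank first)
zpool list -v                           # tank must no longer appear here, otherwise the export did not happen
zpool import -d /dev/disk/by-id         # without a pool name: just lists the pools available for import
zpool import -d /dev/disk/by-id tank    # once tank shows up as importable, import it by its by-id names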
