I'm doing the following test with Debian 9 under Hyper-V: I have two (virtual) disks, each with a single partition, and /dev/md0 defined as a RAID1 of /dev/sda1 and /dev/sdb1. The root EXT4 file system, defined on md0, works well.
Then I reboot the machine with the sdb disk removed (I deleted the virtual disk from the VM's hardware). Everything works fine; I get an email saying that the array is now clean but degraded:
```
md0 : active raid1 sda1[0]
      4877312 blocks super 1.2 [2/1] [U_]
```

OK, as expected. I re-attach sdb by creating a new virtual disk that uses the same disk file, and reboot the machine, but the system doesn't seem to re-detect the disk. Nothing changes: the array still has one drive and remains in the clean, degraded state. `mdadm --detail /dev/md0` still reports the disk as removed.
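For reference, this is how I compare the two views after re-attaching (a sketch using my device names; `--examine` reads the member partition's superblock, `--detail` the assembled array's state):

```sh
# Read the RAID superblock stored on the re-attached partition;
# its "Array UUID" should match the array defined in mdadm.conf
mdadm --examine /dev/sdb1

# Ask the assembled array itself; here the second slot still shows "removed"
mdadm --detail /dev/md0
```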
I expected the disk to be re-detected, re-attached, and re-synced automatically on the next boot, since the UUID and the disk name match.
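For context, /etc/mdadm/mdadm.conf contains the usual ARRAY line generated at install time (the UUID and name below are illustrative placeholders, not my real values):

```sh
# /etc/mdadm/mdadm.conf (excerpt); UUID and name are placeholders
ARRAY /dev/md/0 metadata=1.2 name=debian:0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```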
I then re-added the disk manually with `mdadm --manage /dev/md0 --add /dev/sdb1`; the system synced it and the array went back to clean.
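Concretely (the `--add` command is what I ran; the `watch` line is just one way to monitor the rebuild):

```sh
# Manually re-add the partition; md starts a resync automatically
mdadm --manage /dev/md0 --add /dev/sdb1

# Watch the rebuild progress until the array is clean again
watch -n 5 cat /proc/mdstat
```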
Is this the way the system is supposed to work?
PS: at boot I get a significant number of `mdadm: Found some drive for an array that is already active` messages, followed by `mdadm: giving up.`
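For what it's worth, this is how such messages can be pulled from the boot log (assuming systemd's journal, as on Debian 9; on a syslog-only setup one would grep /var/log/syslog instead):

```sh
# Show all mdadm messages logged during the current boot
journalctl -b | grep mdadm
```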