I have an mdadm RAID 6 array that is currently running with only 3 of its 4 disks. The disks are 4x2TB. Whenever I add the 4th disk (I have been trying all week) and then run `ls`, I get file system errors:
```
$ ll /mnt/downloads/downloads
...
d????????? ? ? ? ? ? drivers/
...
```

But whenever I remove the newly added disk, it shows the file system correctly:
```
$ sudo mdadm /dev/md0 --fail /dev/sde1
mdadm: set /dev/sde1 faulty in /dev/md0
$ ll /mnt/downloads/downloads
(correct contents)
```

I have tried zeroing the superblock and running `sudo wipefs -a /dev/sde1` to wipe the RAID-related blocks, and every attempt has resulted in the same failure.
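For completeness, the cleanup between attempts went roughly like this (reconstructed from memory, so the exact order may have varied; `/dev/sde1` is the new disk's partition):

```
$ sudo mdadm /dev/md0 --remove /dev/sde1    # drop the failed member from the array
$ sudo mdadm --zero-superblock /dev/sde1    # zero the mdadm superblock
$ sudo wipefs -a /dev/sde1                  # wipe any remaining filesystem/RAID signatures
```

I can post the output of `cat /proc/mdstat` or `sudo mdadm --detail /dev/md0` if that helps.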
Checking the mdadm array with just the 3 disks shows no errors, by doing:

```
echo check > /sys/block/md0/md/sync_action
```
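To be precise, "no errors" here means the mismatch counter reads zero once the check finishes (I'm assuming this counter is the right place to confirm a clean check):

```
$ cat /sys/block/md0/md/mismatch_cnt
```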
I have tried reading every sector of the disk to see whether it reports a bad block, but nothing of the sort has occurred.
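The full read pass was something along these lines (the block size was chosen arbitrarily; `status=progress` just shows throughput):

```
$ sudo dd if=/dev/sde1 of=/dev/null bs=1M status=progress
```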
I'm running `sudo badblocks -wsv /dev/sde1` on the disk now, but I doubt any errors will show up.
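If badblocks also comes back clean, the only other health check I can think of is the drive's SMART data, e.g.:

```
$ sudo smartctl -a /dev/sde
```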
This has left me very confused. Is my disk just bad in some way, and the disk checks simply fail to detect it for some reason?
Or is it something related to me not adding the disk correctly? I ran:
```
$ sudo mdadm /dev/md0 -a /dev/sde1
```

I think I always ran this command while the file system was still mounted, and then unmounted it while the disk was being added. I don't think this would cause an issue, would it?
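For the next attempt I'm planning to keep the file system unmounted for the whole operation, roughly like this (my own guess at a safer sequence, not something from the mdadm docs; assumes the mount point is in fstab):

```
$ sudo umount /mnt/downloads                # unmount before touching the array
$ sudo mdadm --zero-superblock /dev/sde1    # make sure no stale metadata is left
$ sudo mdadm /dev/md0 --add /dev/sde1       # add the disk back to the array
$ watch cat /proc/mdstat                    # wait for the resync to finish
$ sudo mount /mnt/downloads                 # remount only after the rebuild completes
```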