The goal is to create a mirror (a.k.a. RAID1) device from two devices to be mirrored, plus (if needed) a third device that serves as the dirty region log / metadata / whatever — without using LVM or mdadm.
As far as I currently understand, the dm-mirror target can be used for this. Sadly, there is practically zero documentation about it anywhere (as opposed to other DM targets), and AI just endlessly hallucinates.
This is what I have tried, which has come closest:
    0 <size> mirror disk 2 /dev/<meta_log_dev> 256 2 /dev/<member_dev_1> 0 /dev/<member_dev_2> 0

The device is successfully created and works. However, the goal is obviously that if one member of the mirror fails (I/O error), I/O can continue using the other, healthy device.
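For completeness, this is roughly how I load that table (the name mirror0, the size and the device paths are placeholders):

    dmsetup create mirror0 --table \
      "0 <size> mirror disk 2 /dev/<meta_log_dev> 256 2 /dev/<member_dev_1> 0 /dev/<member_dev_2> 0"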
When I simulate an I/O error on /dev/<member_dev_2>, things work as expected: the second device is shown as D (degraded) in dmsetup status, and operation continues using the first one.
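For reference, a sketch of how the failure is simulated (this is the hot-swap trick mentioned in the update at the end; all names and sizes are placeholders). Each mirror member is itself a small linear DM device over a backing disk, so its table can later be force-replaced:

    # The mirror members are thin linear DM wrappers over the real backing disks
    dmsetup create <member_dev_1> --table "0 <size> linear /dev/<backing_dev_1> 0"
    dmsetup create <member_dev_2> --table "0 <size> linear /dev/<backing_dev_2> 0"

    # ... the mirror table above then references these two wrapper devices ...

    # Simulate an I/O error on the second member: forcing removal of an in-use
    # device replaces its table with the error target, so all further I/O to it
    # fails with EIO
    dmsetup remove -f <member_dev_2>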
However, if I simulate an I/O error on the first device (/dev/<member_dev_1>) while the second is OK, the mirror fails completely instead of switching to the second device :(. It logs this:
    [5019830.382205] [ T8176] device-mapper: raid1: Mirror read failed.

And all further I/O to the mirror returns an I/O error.
So, can I achieve the expected result using dm-mirror, and if yes, how?
PS: I also had the idea to use the raid target directly instead of mirror, but found no good docs for it either, and could not even start such a device.
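For what it's worth, going by the kernel's Documentation/admin-guide/device-mapper/dm-raid.rst, a raid1 table should look roughly like the sketch below (placeholders everywhere; raid1 ignores the chunk size, hence the 0, and "-" means no metadata device), though I may well be misreading it:

    dmsetup create raidtest --table \
      "0 <size> raid raid1 3 0 region_size 1024 2 - /dev/<member_dev_1> - /dev/<member_dev_2>"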
UPDATE: To simulate the I/O errors, the member devices were actually linear DM devices, and I ran dmsetup remove -f on them, so their tables were hot-swapped with error instead of linear.

UPDATE 2: By reading drivers/md/dm-raid1.c I found that the working table is:

    0 <size> mirror disk 2 /dev/<meta_log_dev> 256 2 /dev/<member_dev_1> 0 /dev/<member_dev_2> 0 2 handle_errors keep_log

The difference is the 2 handle_errors keep_log at the end. I'm going to publish that as an answer tomorrow, unless somebody gives a more robust explanation of why that is so. Also, it seems that dm-mirror doesn't parallelize reads across the devices; but again, maybe there is an argument to enable that which I am not aware of.
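For completeness, the full command with the working table (same placeholders as above):

    dmsetup create mirror0 --table \
      "0 <size> mirror disk 2 /dev/<meta_log_dev> 256 2 /dev/<member_dev_1> 0 /dev/<member_dev_2> 0 2 handle_errors keep_log"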