
The goal is to create a mirror (a.k.a. RAID 1) device from two devices to be mirrored, plus (if needed) a third device that serves as the dirty-region log / metadata. And to do that without using LVM or mdadm.

As far as I understand, the dm-mirror target can be used for this. Sadly, there is virtually no documentation about it anywhere (as opposed to other DM targets), and AI just endlessly hallucinates.

This is the table I have tried, which came closest:

0 <size> mirror disk 2 /dev/<meta_log_dev> 256 2 /dev/<member_dev_1> 0 /dev/<member_dev_2> 0 
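For reference, here is a minimal sketch of how such a table could be assembled and handed to dmsetup. The device paths, device name `mymirror`, and size are placeholders I made up; `dmsetup create` needs root, so it is left commented out:

```shell
#!/bin/sh
# Hypothetical devices -- substitute your own.
META=/dev/mapper/meta_log_dev   # dirty-region-log device
M1=/dev/mapper/member_dev_1     # first mirror leg
M2=/dev/mapper/member_dev_2     # second mirror leg

# Mirror size in 512-byte sectors; on a real system it could be taken
# from the smaller leg, e.g. with: blockdev --getsz "$M1"
SIZE=2097152                    # 1 GiB

# dm-mirror table: a "disk" log on $META with a 256-sector region size,
# then 2 mirror legs, each starting at sector offset 0.
TABLE="0 $SIZE mirror disk 2 $META 256 2 $M1 0 $M2 0"
echo "$TABLE"

# sudo dmsetup create mymirror --table "$TABLE"
```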

The device is successfully created and works. However, the goal is obviously that if one member of the mirror fails (I/O error), I/O can continue using the other, healthy device.
When I simulate an I/O error on /dev/<member_dev_2>, things work as expected: the second device is shown as D (degraded) in dmsetup status, and operation continues using the first one.
However, if I simulate an I/O error on the first device (/dev/<member_dev_1>) while the second is OK, the mirror fails completely instead of switching to the second device :(. It logs this:

[5019830.382205] [ T8176] device-mapper: raid1: Mirror read failed. 

And all further I/O to the mirror returns an I/O error.

So, can I achieve the expected result using dm-mirror, and if so, how?

PS: I also had an idea to use the raid target directly instead of mirror, but I found no good docs for it either, and could not even start such a device.

  • Can you reproduce it with LVM? Can you rule out that the other mirror was not in sync? How did you simulate the failures? Commented Nov 17 at 19:54
  • @frostschutz 1. I'm not sure how a dm-mirror device can be created via LVM; I only see linear, raid, thin, vdo, and similar among my available LVM volume types. 2. They both were in sync. 3. The underlying devices were dm-linear, so I used dmsetup remove -f on them so their tables were hot-swapped with error instead of linear. Commented Nov 17 at 20:47
  • By consulting the kernel code at drivers/md/dm-raid1.c, I found that the working table is 0 <size> mirror disk 2 /dev/<meta_log_dev> 256 2 /dev/<member_dev_1> 0 /dev/<member_dev_2> 0 2 handle_errors keep_log. The difference is the 2 handle_errors keep_log at the end. I'm going to publish that as an answer tomorrow, unless somebody gives a more robust explanation, i.e. why that is so. Also, it seems that dm-mirror doesn't parallelize reads across the devices; but again, maybe there is an argument to enable that which I am not aware of. Commented Nov 17 at 21:02
  • I don't really know either. Expect the unexpected when going off the trodden paths... Commented Nov 17 at 21:18
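Following the comment above, the working table differs only in the trailing feature arguments, 2 handle_errors keep_log, which (per drivers/md/dm-raid1.c) appear to make the mirror drop a failed leg and carry on rather than failing outright. A sketch with the same placeholder device paths and size as before; the dmsetup calls need root and are commented out:

```shell
#!/bin/sh
# Hypothetical devices -- substitute your own.
META=/dev/mapper/meta_log_dev
M1=/dev/mapper/member_dev_1
M2=/dev/mapper/member_dev_2
SIZE=2097152                    # mirror size in 512-byte sectors

# Same table as before, plus "2 handle_errors keep_log":
# a feature-argument count (2) followed by the two feature names.
TABLE="0 $SIZE mirror disk 2 $META 256 2 $M1 0 $M2 0 2 handle_errors keep_log"
echo "$TABLE"

# sudo dmsetup create mymirror --table "$TABLE"
# sudo dmsetup status mymirror   # per-leg health, e.g. A (alive) / D (degraded)
```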
