
I've set up a software RAID 1 using Debian's built-in RAID support. I set it up because I had a spare HDD when I built the server and thought, why not. The RAID is set up using whatever Debian did when I installed the OS (sorry, not a Linux techie).

Now, however, I could really use the disk for a much more useful purpose.

Is it easy to discontinue the RAID without having to reinstall the OS, and how would I go about doing this?

fdisk -l

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x000d9640

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048   976771071   488384512   fd  Linux raid autodetect

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x0009dd99

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048   950560767   475279360   83  Linux
/dev/sdb2       950562814   976771071    13104129    5  Extended
Partition 2 does not start on physical sector boundary.
/dev/sdb5       950562816   976771071    13104128   82  Linux swap / Solaris

Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x6fa10d6b

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1              63  3907024064  1953512001    7  HPFS/NTFS/exFAT

Disk /dev/sdd: 7803 MB, 7803174912 bytes
122 heads, 58 sectors/track, 2153 cylinders, total 15240576 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xc3072e18

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1   *        8064    15240575     7616256    b  W95 FAT32

fstab content:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sdb1 during installation
UUID=cbc19adf-8ed0-4d20-a56e-13c1a74e9cf0 /    ext4    errors=remount-ro 0       1
# swap was on /dev/sdb5 during installation
UUID=f6836768-e2b6-4ccf-9827-99f58999607e none swap    sw              0       0
/dev/sda1       /media/usb0     auto    rw,user,noauto  0       0
/dev/sdc1       /media/mns      ntfs-3g defaults        0       2
  • The output from fdisk isn't consistent with your having a RAID 1 volume. It's possible to have partitions with an incorrect type, but even then the partition sizes don't match. Post the output of cat /proc/mdstat, cat /proc/partitions, cat /proc/mounts, vgs and cat /sys/block/dm-*/dm/name (I think that should let us conclusively determine what all your disks are being used for). Commented Mar 15, 2015 at 21:37
  • And also, please post the output of lsblk - it will print a good representation of your block device layout, including device-mapper devices. Also tell us which devices you think could be united in the RAID and which mount point (you think) the RAID partition is mounted on; the full command list is collected in the sketch after these comments. Commented Mar 15, 2015 at 23:22
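For convenience, here are the diagnostics asked for in these comments, gathered in one go. These are exactly the commands named above, with nothing system-specific added:

cat /proc/mdstat               # active md arrays, if any
cat /proc/partitions           # every partition the kernel sees
cat /proc/mounts               # what is mounted, and from where
lsblk                          # block-device tree, incl. device-mapper
vgs                            # LVM volume groups, if LVM is in use
cat /sys/block/dm-*/dm/name    # names of any device-mapper devices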

2 Answers


The easiest method, which requires no changes to your setup whatsoever, is probably to reduce the RAID to a single disk. That leaves you the option of adding a disk and thus re-using the RAID at a later time.

mdadm /dev/mdx --fail /dev/disky1
mdadm /dev/mdx --remove /dev/disky1
mdadm --grow /dev/mdx --raid-devices=1 --force

The result would look something like this:

mdx : active raid1 diskx1[3]
      62519296 blocks super 1.2 [1/1] [U]

Ta-daa, a single-disk "RAID1".

If you want to get rid of the RAID layer altogether, it involves mdadm --examine /dev/diskx1 (to find out the data offset), mdadm --zero-superblock (to get rid of the RAID metadata), and parted to shift the partition start by that data offset so it points directly at the filesystem, and then updating the bootloader and system configs to reflect the absence of RAID. A rough sketch follows.
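This sketch rests on heavy assumptions: that the array is /dev/md0 with a version-1.2 superblock, that its remaining member is /dev/sda1 (start/end sectors taken from the fdisk output above), and that the data offset really is 2048 sectors, which only mdadm --examine can confirm. Unmount everything on the array and back up the partition table before attempting anything like this:

# 1. Note the member's data offset, reported in 512-byte sectors.
mdadm --examine /dev/sda1 | grep -i 'data offset'

# 2. Stop the array, then wipe the RAID metadata from the member.
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1

# 3. Recreate the partition so it starts where the filesystem data
#    starts: old start 2048 + assumed data offset 2048 = sector 4096.
parted -s /dev/sda rm 1
parted -s /dev/sda unit s mkpart primary 4096s 976771071s

# 4. Change the partition type from fd to 83 (fdisk's 't' command),
#    then update /etc/fstab and /etc/mdadm/mdadm.conf and rebuild
#    the initramfs so the system stops looking for the array.
update-initramfs -u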

  • The thing is, I did not use this mdadm program to create the RAID. In fact, I've just installed it, and it says there are no RAIDs? Commented Mar 15, 2015 at 15:08
  • If you're not using mdadm software RAID, you should be more specific in your question. Commented Mar 15, 2015 at 15:17
  • I did try, but the guy who edited it apparently removed that part; sorry, I hadn't noticed. I'm using whatever Debian did when I installed the OS. Commented Mar 15, 2015 at 15:35
  • Looking at the fdisk output you provided, there is only one partition marked fd, so I do not think RAID 1 is set up and running on your system. Commented Mar 15, 2015 at 17:40

Just fail and remove one of your drives:

mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb 

After that, change your /etc/fstab to use the drive left in the RAID.
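A minimal sketch of that edit, assuming the surviving filesystem ends up on /dev/sda1 and is ext4 (both assumptions here; run blkid /dev/sda1 to get the real UUID and filesystem type):

# /etc/fstab - mount the surviving member directly; the UUID is a placeholder
UUID=<uuid-from-blkid>  /media/usb0  ext4  defaults  0  2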

Reboot, and then destroy your RAID:

mdadm --stop /dev/md0
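Stopping the array only deactivates it. To make the removal permanent, you would also wipe the RAID metadata from the former member and drop the array from /etc/mdadm/mdadm.conf (shown for a hypothetical member /dev/sda1; adjust to your layout):

mdadm --zero-superblock /dev/sda1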

Have fun :)

  • I don't have any drives mounted as md0... My fdisk -l: pastebin.com/uFYmShmT - All I know is that it is one of the 500 GB drives that I need to disconnect. Commented Mar 15, 2015 at 12:39
  • I only see /dev/sda being marked as RAID. Please answer the questions under the original post. Commented Mar 15, 2015 at 23:27
