Mirroring in Linux

SUSE Linux Enterprise Server 9 (2.6.5-7.97-default) Minimum install
SUSE Linux Enterprise Server 9 (2.6.5-7.97-default) Full install
SUSE Linux Enterprise Server 8 (2.4.21-286-default) Full install
Installed/tested with VMware Workstation 5.0.0 build-13124
Installed/tested with Dell Precision 420 with IDE drives

/dev/sda – Installed non-raid system disk
/dev/sda1 – swap partition
/dev/sda2 – root partition
/dev/sdb – Empty disk for first raid mirror
/dev/md1 – swap mirrored partition
/dev/md2 – root mirrored partition

Prepare the non-RAID Disk

*note* You backed up your system, right?

linux:~ # cat /proc/mdstat
Personalities : 
unused devices: <none>

Confirm that both disks are the same size.

linux:~ # cat /proc/partitions 
major minor  #blocks  name

   8     0    2097152 sda
   8     1     514048 sda1
   8     2    1582402 sda2
   8    16    2097152 sdb
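The size check above can also be scripted. The following is a minimal sketch; `same_size` is a hypothetical helper that compares the #blocks column for two whole disks, reading /proc/partitions-format text on stdin so it can be tried on a sample:

```shell
# Hypothetical helper: compare the #blocks column of two whole disks.
# Reads /proc/partitions-format text on stdin.
same_size() {
  awk -v a="$1" -v b="$2" '
    $4 == a { sa = $3 }
    $4 == b { sb = $3 }
    END { exit !(sa != "" && sa == sb) }
  '
}

# Sample taken from the output above; on a live system use:
#   same_size sda sdb < /proc/partitions
if same_size sda sdb <<'EOF'
major minor  #blocks  name

   8     0    2097152 sda
   8    16    2097152 sdb
EOF
then echo "disks match"; else echo "disks differ"; fi   # → disks match
```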

It is recommended that the migration be performed in run level 1 to minimize the chance of data corruption.

linux:~ # init 1

linux:~ # cat /etc/inittab | grep default:
id:3:initdefault:

linux:~ # vi /etc/inittab

Change the default run level to 1.

linux:~ # cat /etc/inittab | grep default:
id:1:initdefault:

Make sure that your devices do not have labels and that you are referencing the disks by device name.

linux:~ # cat /etc/fstab
/dev/sda2            /                    reiserfs   defaults              1 1
/dev/sda1            swap                 swap       pri=42                0 0
devpts               /dev/pts             devpts     mode=0620,gid=5       0 0
proc                 /proc                proc       defaults              0 0
usbfs                /proc/bus/usb        usbfs      noauto                0 0
sysfs                /sys                 sysfs      noauto                0 0
/dev/dvd             /media/dvd           subfs      fs=cdfss,ro,procuid,nosuid,nodev,exec,iocharset=utf8 0 0
/dev/fd0             /media/floppy        subfs      fs=floppyfss,procuid,nodev,nosuid,sync 0 0
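Label- or UUID-based fstab entries would need extra care once the partitions move into md devices. A quick sketch for spotting them; `check_labels` is a hypothetical helper that reads fstab-format text on stdin, so on a live system you would run `check_labels < /etc/fstab`:

```shell
# Count fstab entries that mount by label or UUID rather than device name.
check_labels() { grep -cE '^[[:space:]]*(LABEL|UUID)=' || true; }

# Sample: the fstab shown above uses plain device names, so this prints 0.
check_labels <<'EOF'
/dev/sda2            /                    reiserfs   defaults              1 1
/dev/sda1            swap                 swap       pri=42                0 0
EOF
```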

linux:~ # cat /boot/grub/menu.lst 
# Modified by YaST2. Last modification on Fri May  6 15:31:59 2005

color white/blue black/light-gray
default 0
timeout 8

title Linux
    kernel (hd0,1)/boot/vmlinuz root=/dev/sda2 selinux=0 splash=0 resume=/dev/sda1 showopts elevator=cfq vga=0x314
    initrd (hd0,1)/boot/initrd

linux:~ # fdisk /dev/sda

Change the partition type on the existing non-raid disk to type ‘fd’ (Linux raid autodetect).

Command (m for help): p

Disk /dev/sda: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          64      514048+  82  Linux swap
/dev/sda2   *          65         261     1582402+  83  Linux

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sda: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          64      514048+  fd  Linux raid autodetect
/dev/sda2   *          65         261     1582402+  fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

Copy the non-raid disk’s partition table to the empty disk.

linux:~ # sfdisk -d /dev/sda > partitions.txt

linux:~ # sfdisk /dev/sdb < partitions.txt 
Checking that no-one is using this disk right now ...
OK

Disk /dev/sdb: 261 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/sdb: unrecognized partition
Old situation:
No partitions found
New situation:
Units = sectors of 512 bytes, counting from 0

   Device Boot    Start       End   #sectors  Id  System
/dev/sdb1            63   1028159    1028097  fd  Linux raid autodetect
/dev/sdb2   *   1028160   4192964    3164805  fd  Linux raid autodetect
/dev/sdb3             0         -          0   0  Empty
/dev/sdb4             0         -          0   0  Empty
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)

linux:~ # cat /proc/partitions 
major minor  #blocks  name

   8     0    2097152 sda
   8     1     514048 sda1
   8     2    1582402 sda2
   8    16    2097152 sdb
   8    17     514048 sdb1
   8    18    1582402 sdb2

linux:~ # reboot

Select the non-raid disk boot option (Linux).

Prepare the degraded RAID array

Create the degraded RAID array on the empty disk, but leave out the existing system disk for now.

linux:~ # mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 missing
mdadm: array /dev/md1 started.

linux:~ # mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb2 missing
mdadm: array /dev/md2 started.

linux:~ # mkswap /dev/md1
Setting up swapspace version 1, size = 526315 kB

linux:~ # mkreiserfs /dev/md2
mkreiserfs 3.6.13 (2003 www.namesys.com)

A pair of credits:
Alexander Zarochentcev  (zam)  wrote the high low priority locking code, online
resizer for V3 and V4, online repacker for V4, block allocation code, and major
parts of  the flush code,  and maintains the transaction manager code.  We give
him the stuff  that we know will be hard to debug,  or needs to be very cleanly
structured.

Yury Umanets  (aka Umka)  developed  libreiser4,  userspace  plugins,  and  all
userspace tools (reiser4progs) except of fsck.

Guessing about desired format.. Kernel 2.6.5-7.97-default is running.
Format 3.6 with standard journal
Count of blocks on the device: 395584
Number of blocks consumed by mkreiserfs formatting process: 8224
Blocksize: 4096
Hash function used to sort names: "r5"
Journal Size 8193 blocks (first block 18)
Journal Max transaction length 1024
inode generation number: 0
UUID: 9adb347e-32d7-46e4-bd83-8547201e139b
ATTENTION: YOU SHOULD REBOOT AFTER FDISK!
	ALL DATA WILL BE LOST ON '/dev/md2'!
Continue (y/n):y

Initializing journal - 0%....20%....40%....60%....80%....100%
Syncing..ok
ReiserFS is successfully created on /dev/md2.

linux:~ # cat /proc/mdstat
Personalities : [raid1] 
md2 : active raid1 sdb2[0]
      1582336 blocks [2/1] [U_]

md1 : active raid1 sdb1[0]
      513984 blocks [2/1] [U_]

unused devices: <none>

Create the degraded RAID array configuration file.

linux:~ # cat << EOF > /etc/mdadm.conf
> DEVICE /dev/sdb1 /dev/sdb2
> ARRAY /dev/md1 devices=/dev/sdb1,missing
> ARRAY /dev/md2 devices=/dev/sdb2,missing
> EOF

linux:~ # cat /etc/mdadm.conf
DEVICE /dev/sdb1 /dev/sdb2
ARRAY /dev/md1 devices=/dev/sdb1,missing
ARRAY /dev/md2 devices=/dev/sdb2,missing
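Instead of typing the ARRAY lines by hand, you can capture them from the running arrays; a sketch, to be run as root while both arrays are active:

```shell
# Sketch: generate /etc/mdadm.conf from the live arrays (run as root).
echo "DEVICE /dev/sdb1 /dev/sdb2" > /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf   # appends one ARRAY line per array
```

The scan output identifies arrays by UUID= rather than the devices= form used here; mdadm accepts either.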

Confirm the degraded RAID array is functioning with only the previously empty disk.

linux:~ # mdadm --detail --scan
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=66e0c793:ebb91af6:f1d5cde8:81f9b986
   devices=/dev/sdb2
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=0c70c3f5:28556506:9bd29f42:0486b2ea
   devices=/dev/sdb1

linux:~ # mdadm --stop --scan

linux:~ # mdadm --detail --scan

WARNING: Make sure you have created the /etc/mdadm.conf file above, or `mdadm --assemble --scan` will fail.

linux:~ # mdadm --assemble --scan
mdadm: /dev/md1 has been started with 1 drive (out of 2).
mdadm: /dev/md2 has been started with 1 drive (out of 2).

linux:~ # mdadm --detail --scan
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=66e0c793:ebb91af6:f1d5cde8:81f9b986
   devices=/dev/sdb2
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=0c70c3f5:28556506:9bd29f42:0486b2ea
   devices=/dev/sdb1

Back up the original initrd.

linux:~ # cd /boot

linux:/boot # ls -l initrd*
lrwxrwxrwx   1 root root      25 May  6 09:31 initrd -> initrd-2.6.5-7.97-default
-rw-r--r--   1 root root 1324138 May  6 09:31 initrd-2.6.5-7.97-default

linux:/boot # mv initrd-2.6.5-7.97-default initrd-2.6.5-7.97-default.orig

linux:/boot # head /etc/sysconfig/kernel | grep INITRD_MODULES
INITRD_MODULES="mptscsih reiserfs"

linux:/boot # vi /etc/sysconfig/kernel

Add the raid1 module to the INITRD_MODULES list and recreate the initrd.

linux:/boot # head /etc/sysconfig/kernel | grep INITRD_MODULES
INITRD_MODULES="raid1 mptscsih reiserfs"

linux:/boot # mkinitrd
Root device:	/dev/sda2 (mounted on / as reiserfs)
Module list:	raid1 mptscsih reiserfs

Kernel image:	/boot/vmlinuz-2.6.5-7.97-default
Initrd image:	/boot/initrd-2.6.5-7.97-default
Shared libs:	lib/ld-2.3.3.so lib/libc.so.6 lib/libselinux.so.1 
Modules:	kernel/drivers/scsi/scsi_mod.ko kernel/drivers/scsi/sd_mod.ko kernel/drivers/md/raid1.ko kernel/drivers/message/fusion/mptbase.ko kernel/drivers/message/fusion/mptscsih.ko kernel/fs/reiserfs/reiserfs.ko 
Including:	raidautorun

WARNING: If you attempt to boot the degraded RAID array without an initrd that contains the raid1 driver and raidautorun, you will get a message that the /dev/md2 device is not found, and the server will hang. You will need to follow the recovery process described in the warning on page 10 and recreate the initrd.

linux:/boot # ls -l initrd*
lrwxrwxrwx   1 root root      25 May  6 16:12 initrd -> initrd-2.6.5-7.97-default
-rw-r--r--   1 root root 1347989 May  6 16:12 initrd-2.6.5-7.97-default
-rw-r--r--   1 root root 1324138 May  6 09:31 initrd-2.6.5-7.97-default.orig

linux:/boot # cd /boot/grub

linux:/boot/grub # cat /boot/grub/menu.lst
# Modified by YaST2. Last modification on Fri May  6 15:31:59 2005

color white/blue black/light-gray
default 0
timeout 8

title Linux
    kernel (hd0,1)/boot/vmlinuz root=/dev/sda2 selinux=0 splash=0 resume=/dev/sda1 showopts elevator=cfq vga=0x314
    initrd (hd0,1)/boot/initrd

linux:/boot/grub # vi /boot/grub/menu.lst

Modify menu.lst so you can boot either the non-RAID disk or the degraded RAID array, in case you make a mistake during the migration.

linux:/boot/grub # cat /boot/grub/menu.lst
# Modified by YaST2. Last modification on Fri May  6 15:31:59 2005

color white/blue black/light-gray
default 0
timeout 8

title Linux
    root (hd0,1)
    kernel /boot/vmlinuz root=/dev/sda2 selinux=0 splash=0 resume=/dev/sda1 showopts elevator=cfq vga=0x314
    initrd /boot/initrd-2.6.5-7.97-default.orig

title LinuxRaid
    root (hd1,1)
    kernel /boot/vmlinuz root=/dev/md2 selinux=0 splash=0 resume=/dev/md1 showopts elevator=cfq vga=0x314
    initrd /boot/initrd

Copy the entire system from the non-raid device to the degraded RAID array.

linux:/boot/grub # cd /mnt

linux:/mnt # mkdir /mnt/newroot

linux:/mnt # mount /dev/md2 /mnt/newroot

linux:/mnt # mount
/dev/sda2 on / type reiserfs (rw)
proc on /proc type proc (rw)
tmpfs on /dev/shm type tmpfs (rw)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/hdc on /media/dvd type subfs (ro,nosuid,nodev,fs=cdfss,procuid,iocharset=utf8)
/dev/fd0 on /media/floppy type subfs (rw,nosuid,nodev,sync,fs=floppyfss,procuid)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/md2 on /mnt/newroot type reiserfs (rw)

linux:/mnt # cd /mnt/newroot

linux:/mnt/newroot # ls /
.  ..  bin  boot  dev  etc  home  lib  media  mnt  opt  proc  root  sbin  srv  sys  tmp  usr  var

Do not copy mnt or proc to the degraded RAID array, but create placeholders for them.

linux:/mnt/newroot # mkdir mnt proc

WARNING: The /mnt/newroot/proc directory is used for the proc filesystem mount point. If it’s missing, you will get an error saying /proc is not mounted, and the system will hang at boot time.

linux:/mnt/newroot # for i in bin boot dev etc home lib media opt root sbin srv sys tmp var usr
> do
> printf "Copy files: /$i -> /mnt/newroot/$i ... "
> cp -a /$i /mnt/newroot
> echo done
> done
Copy files: /bin -> /mnt/newroot/bin ... done
Copy files: /boot -> /mnt/newroot/boot ... done
Copy files: /dev -> /mnt/newroot/dev ... done
Copy files: /etc -> /mnt/newroot/etc ... done
Copy files: /home -> /mnt/newroot/home ... done
Copy files: /lib -> /mnt/newroot/lib ... done
Copy files: /media -> /mnt/newroot/media ... done
Copy files: /opt -> /mnt/newroot/opt ... done
Copy files: /root -> /mnt/newroot/root ... done
Copy files: /sbin -> /mnt/newroot/sbin ... done
Copy files: /srv -> /mnt/newroot/srv ... done
Copy files: /sys -> /mnt/newroot/sys ... done
Copy files: /tmp -> /mnt/newroot/tmp ... done
Copy files: /var -> /mnt/newroot/var ... done
Copy files: /usr -> /mnt/newroot/usr ... done

WARNING: If you copy files that have ACLs, you will get a warning that the original permissions cannot be restored; restore any ACLs manually. You may also see permission-denied errors on files in the sys directory. Check those files, but the errors are usually harmless.
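The directory-by-directory loop can also be collapsed into a single `cp -ax` (archive mode, staying on one filesystem), assuming everything lives on the root filesystem as it does here; -x keeps cp out of /proc, /sys, and /mnt/newroot itself. On the real system the equivalent would be `cp -ax /. /mnt/newroot` (you would still create the mnt and proc placeholders). The toy run below only exercises the same flags on a throwaway tree:

```shell
# Toy demonstration of `cp -ax` (preserve attributes, stay on one filesystem).
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/etc"
echo "hello" > "$src/etc/motd"
chmod 600 "$src/etc/motd"
cp -ax "$src/." "$dst/"
cat "$dst/etc/motd"             # → hello
stat -c %a "$dst/etc/motd"      # → 600 (mode preserved by -a)
rm -rf "$src" "$dst"
```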

linux:/mnt/newroot # ls /
.  ..  bin  boot  dev  etc  home  lib  media  mnt  opt  proc  root  sbin  srv  sys  tmp  usr  var

linux:/mnt/newroot # ls /mnt/newroot
.  ..  bin  boot  dev  etc  home  lib  media  mnt  opt  proc  root  sbin  srv  sys  tmp  usr  var

linux:/mnt/newroot # cd /mnt/newroot/etc

linux:~ # cat /mnt/newroot/etc/fstab
/dev/sda2            /                    reiserfs   defaults              1 1
/dev/sda1            swap                 swap       pri=42                0 0
devpts               /dev/pts             devpts     mode=0620,gid=5       0 0
proc                 /proc                proc       defaults              0 0
usbfs                /proc/bus/usb        usbfs      noauto                0 0
sysfs                /sys                 sysfs      noauto                0 0
/dev/dvd             /media/dvd           subfs      fs=cdfss,ro,procuid,nosuid,nodev,exec,iocharset=utf8 0 0
/dev/fd0             /media/floppy        subfs      fs=floppyfss,procuid,nodev,nosuid,sync 0 0

linux:/mnt/newroot # vi /mnt/newroot/etc/fstab

Modify the fstab file on the degraded RAID array so that the system can boot it.

linux:~ # cat /mnt/newroot/etc/fstab
/dev/md2             /                    reiserfs   defaults              1 1
/dev/md1             swap                 swap       pri=42                0 0
devpts               /dev/pts             devpts     mode=0620,gid=5       0 0
proc                 /proc                proc       defaults              0 0
usbfs                /proc/bus/usb        usbfs      noauto                0 0
sysfs                /sys                 sysfs      noauto                0 0
/dev/dvd             /media/dvd           subfs      fs=cdfss,ro,procuid,nosuid,nodev,exec,iocharset=utf8 0 0
/dev/fd0             /media/floppy        subfs      fs=floppyfss,procuid,nodev,nosuid,sync 0 0

linux:~ # reboot

Select the degraded RAID array boot option (LinuxRaid).

Add the non-RAID disk to the existing degraded RAID array

At this point you should be running your system from the degraded RAID array, and the non-raid disk is not even mounted.

linux:~ # mount
/dev/md2 on / type reiserfs (rw)
proc on /proc type proc (rw)
tmpfs on /dev/shm type tmpfs (rw)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/hdc on /media/dvd type subfs (ro,nosuid,nodev,fs=cdfss,procuid,iocharset=utf8)
/dev/fd0 on /media/floppy type subfs (rw,nosuid,nodev,sync,fs=floppyfss,procuid)
usbfs on /proc/bus/usb type usbfs (rw)

linux:~ # lsraid -a /dev/md1
[dev   9,   0] /dev/md1         0C70C3F5.28556506.9BD29F42.0486B2EA online
[dev   8,  17] /dev/sdb1        0C70C3F5.28556506.9BD29F42.0486B2EA good
[dev   ?,   ?] (unknown)        00000000.00000000.00000000.00000000 missing

linux:~ # lsraid -a /dev/md2
[dev   9,   1] /dev/md2         66E0C793.EBB91AF6.F1D5CDE8.81F9B986 online
[dev   8,  18] /dev/sdb2        66E0C793.EBB91AF6.F1D5CDE8.81F9B986 good
[dev   ?,   ?] (unknown)        00000000.00000000.00000000.00000000 missing

Update the raid configuration file to include both disks.

linux:~ # cat << EOF > /etc/mdadm.conf
> DEVICE /dev/sdb1 /dev/sdb2 /dev/sda1 /dev/sda2
> ARRAY /dev/md1 devices=/dev/sdb1,/dev/sda1
> ARRAY /dev/md2 devices=/dev/sdb2,/dev/sda2
> EOF

linux:~ # cat /etc/mdadm.conf
DEVICE /dev/sdb1 /dev/sdb2 /dev/sda1 /dev/sda2
ARRAY /dev/md1 devices=/dev/sdb1,/dev/sda1
ARRAY /dev/md2 devices=/dev/sdb2,/dev/sda2

Add the non-raid disk partitions into their respective raid arrays.

*WARNING*: This is the point of no return.

linux:~ # mdadm /dev/md1 -a /dev/sda1
mdadm: hot added /dev/sda1

linux:~ # mdadm /dev/md2 -a /dev/sda2
mdadm: hot added /dev/sda2

linux:~ # cat /proc/mdstat
Personalities : [raid1] 
md2 : active raid1 sda2[2] sdb2[0]
      1582336 blocks [2/1] [U_]
      [==================>..]  recovery = 92.8% (1469568/1582336) finish=0.3min speed=5058K/sec
md1 : active raid1 sda1[1] sdb1[0]
      513984 blocks [2/2] [UU]

unused devices: <none>
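You can watch the resync progress with `watch -n1 cat /proc/mdstat`. The degraded/healthy state can also be checked mechanically; `count_degraded` below is a hypothetical helper that reads mdstat-format text on stdin (on a live system, `count_degraded < /proc/mdstat`):

```shell
# Count arrays whose member map still shows a missing disk (an underscore,
# e.g. [U_]); healthy arrays show all-U maps like [UU].
count_degraded() { grep -cE '\[[U_]*_[U_]*\]' || true; }

# Sample mirroring the mid-recovery output above: md2 is still degraded.
count_degraded <<'EOF'
md2 : active raid1 sda2[2] sdb2[0]
      1582336 blocks [2/1] [U_]
md1 : active raid1 sda1[1] sdb1[0]
      513984 blocks [2/2] [UU]
EOF
```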

linux:~ # cat /proc/mdstat
Personalities : [raid1] 
md2 : active raid1 sda2[1] sdb2[0]
      1582336 blocks [2/2] [UU]

md1 : active raid1 sda1[1] sdb1[0]
      513984 blocks [2/2] [UU]

unused devices: <none>

You have wiped out GRUB from your boot sector! Install GRUB onto both disks in the RAID1 array. You will need to do this manually.

linux:~ # grub

     GNU GRUB  version 0.94  (640K lower / 3072K upper memory)
 [ Minimal BASH-like line editing is supported.  For the first word, TAB
   lists possible command completions.  Anywhere else TAB lists the possible
   completions of a device/filename. ]

grub> device (hd0) /dev/sda
grub> root (hd0,1)
 Filesystem type is reiserfs, partition type 0xfd
grub> setup (hd0)

 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/reiserfs_stage1_5" exists... yes
 Running "embed /boot/grub/reiserfs_stage1_5 (hd0)"...  19 sectors are embedded.
succeeded
 Running "install /boot/grub/stage1 (hd0) (hd0)1+19 p (hd0,1)/boot/grub/stage2 /boot/grub/menu.lst"
... succeeded
Done.

grub> device (hd1) /dev/sdb
grub> root (hd1,1)
 Filesystem type is reiserfs, partition type 0xfd
grub> setup (hd1)

 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/reiserfs_stage1_5" exists... yes
 Running "embed /boot/grub/reiserfs_stage1_5 (hd1)"...  19 sectors are embedded.
succeeded
 Running "install /boot/grub/stage1 (hd1) (hd1)1+19 p (hd1,1)/boot/grub/stage2 /boot/grub/menu.lst"
... succeeded
Done.

grub> quit

*WARNING*: If you do not reinstall GRUB, after rebooting you will see nothing but the word GRUB on the screen. If that happens, boot from your install CD1, select Installation, your language, and Boot installed system. Once the system is up, follow the steps above to install GRUB onto the drives.

Remove the original initrd, as it is useless at this point.

linux:/boot # ls -l initrd*
lrwxrwxrwx   1 root root      25 May  6 16:30 initrd -> initrd-2.6.5-7.97-default
-rw-r--r--   1 root root 1347989 May  6 16:12 initrd-2.6.5-7.97-default
-rw-r--r--   1 root root 1324138 May  6 09:31 initrd-2.6.5-7.97-default.orig

linux:/boot # rm /boot/initrd-2.6.5-7.97-default.orig 

linux:/boot # cd grub

linux:/boot/grub # cat /boot/grub/menu.lst
# Modified by YaST2. Last modification on Fri May  6 15:31:59 2005

color white/blue black/light-gray
default 0
timeout 8

title Linux
    root (hd0,1)
    kernel /boot/vmlinuz root=/dev/sda2 selinux=0 splash=0 resume=/dev/sda1 showopts elevator=cfq vga=0x314
    initrd /boot/initrd-2.6.5-7.97-default.orig

title LinuxRaid
    root (hd1,1)
    kernel /boot/vmlinuz root=/dev/md2 selinux=0 splash=0 resume=/dev/md1 showopts elevator=cfq vga=0x314
    initrd /boot/initrd

linux:/boot/grub # vi /boot/grub/menu.lst

Remove the non-RAID boot option; it is now useless. Change the boot disk to (hd0,1), the first disk.

linux:/boot/grub # cat /boot/grub/menu.lst
# Modified by YaST2. Last modification on Fri May  6 15:31:59 2005

color white/blue black/light-gray
default 0
timeout 8

title LinuxRaid
    root (hd0,1)
    kernel /boot/vmlinuz root=/dev/md2 selinux=0 splash=0 resume=/dev/md1 showopts elevator=cfq vga=0x314
    initrd /boot/initrd

linux:~ # cat /etc/inittab | grep default:
id:1:initdefault:

linux:~ # vi /etc/inittab

Change back to your default run level.

linux:~ # cat /etc/inittab | grep default:
id:3:initdefault:

linux:/boot/grub # reboot

LinuxRaid should now be the only boot option. Reboot and confirm.

Congratulations! You now have a mirrored SLES9 system booting from the software RAID1 array.

linux:~ # mount
/dev/md2 on / type reiserfs (rw)
proc on /proc type proc (rw)
tmpfs on /dev/shm type tmpfs (rw)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/hdc on /media/dvd type subfs (ro,nosuid,nodev,fs=cdfss,procuid,iocharset=utf8)
/dev/fd0 on /media/floppy type subfs (rw,nosuid,nodev,sync,fs=floppyfss,procuid)
usbfs on /proc/bus/usb type usbfs (rw)

linux:~ # df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2              1.6G  421M  1.1G  28% /
tmpfs                 126M  8.0K  126M   1% /dev/shm

linux:~ # mdadm --detail --scan
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=66e0c793:ebb91af6:f1d5cde8:81f9b986
   devices=/dev/sdb2,/dev/sda2
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=0c70c3f5:28556506:9bd29f42:0486b2ea
   devices=/dev/sdb1,/dev/sda1

linux:~ # lsraid -a /dev/md1
[dev   9,   0] /dev/md1         0C70C3F5.28556506.9BD29F42.0486B2EA online
[dev   8,  17] /dev/sdb1        0C70C3F5.28556506.9BD29F42.0486B2EA good
[dev   8,   1] /dev/sda1        0C70C3F5.28556506.9BD29F42.0486B2EA good

linux:~ # lsraid -a /dev/md2
[dev   9,   1] /dev/md2         66E0C793.EBB91AF6.F1D5CDE8.81F9B986 online
[dev   8,  18] /dev/sdb2        66E0C793.EBB91AF6.F1D5CDE8.81F9B986 good
[dev   8,   2] /dev/sda2        66E0C793.EBB91AF6.F1D5CDE8.81F9B986 good
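As a final optional check, you may want to rehearse a disk failure once the resync has completed. mdadm can mark a member faulty, remove it, and re-add it; a sketch, to be run as root (the data stays available on the surviving mirror while the array is degraded):

```shell
# Rehearse a failure on one member of /dev/md2 (run as root).
mdadm /dev/md2 --fail /dev/sda2      # mark the member faulty
mdadm /dev/md2 --remove /dev/sda2    # pull it out of the array
mdadm /dev/md2 --add /dev/sda2       # re-add it; a resync starts automatically
cat /proc/mdstat                     # watch the recovery progress
```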
