Gentoo Forums
Move mdadm Raid1 to single Disk
Gentoo Forums Forum Index » Installing Gentoo
chrisk2305
Tux's lil' helper


Joined: 05 Sep 2007
Posts: 88

Posted: Wed Dec 21, 2016 12:00 pm    Post subject: Move mdadm Raid1 to single Disk

Hi Guys,

hope you can help. I want to move my existing system raid1 (boot and root) to a single M.2 SSD. Can you give me advice on how to achieve that while getting completely rid of any raid leftovers on the new SSD?

Thanks,
Chris
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 43178
Location: 56N 3W

Posted: Wed Dec 21, 2016 3:31 pm    Post subject:

chrisk2305,

The easy way is to run the raid in degraded mode. You will still have raid leftovers though.

The short answer to getting rid of the raid leftovers is to copy the data off the raid, reformat the SSD and copy the data back.
There are longer answers too, but they depend on which version of the raid superblock you are using.
Code:
mdadm -E /dev/...
will show that.
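As a sketch of that copy-off / reformat / copy-back route (the device names, mount points, and the ext4 choice are assumed examples, not taken from this thread; with DRY_RUN=1, the default, the commands are only printed, not executed):

```shell
#!/bin/sh
# Sketch of the copy-off / reformat / copy-back approach.
# All devices and paths are example placeholders. With DRY_RUN=1
# (the default) each command is printed instead of executed.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

run rsync -aHAX /mnt/raid/ /mnt/backup/   # copy the data off the raid
run umount /mnt/raid
run mdadm --stop /dev/md125               # stop the array
run mkfs.ext4 /dev/nvme0n1p3              # reformat the target SSD partition
run mount /dev/nvme0n1p3 /mnt/ssd
run rsync -aHAX /mnt/backup/ /mnt/ssd/    # copy the data back
```

Run it for real only after double-checking the device names against your own system, then set DRY_RUN=0.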
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
chrisk2305
Tux's lil' helper


Joined: 05 Sep 2007
Posts: 88

Posted: Wed Dec 21, 2016 3:58 pm    Post subject:

very interesting:

Code:

fileserver christian # mdadm -E /dev/md125
mdadm: No md superblock detected on /dev/md125.
fileserver christian # mdadm -E /dev/md126
mdadm: No md superblock detected on /dev/md126.
fileserver christian # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md126 : active raid1 sdb1[1] sda1[0]
      131008 blocks [2/2] [UU]

md125 : active raid1 sdb3[1] sda3[0]
      56386368 blocks [2/2] [UU]

md0 : active raid1 sda4[1] sdb4[0]
      117144576 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>


But I know it is superblock 0.9, since that is required for raid autodetect to work.
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 43178
Location: 56N 3W

Posted: Wed Dec 21, 2016 8:45 pm    Post subject:

chrisk2305,

Point mdadm -E at one of the underlying block devices.
Code:
$ sudo mdadm -E /dev/sda1
Password:
/dev/sda1:
          Magic : a92b4efc
        Version : 0.90.00
...


With superblock 0.9, the filesystem starts in the normal place and the raid information is at the end of the block device.
You can mount the partition normally. You can use mdadm to erase the raid superblock; however, this won't add the recovered space to your filesystem.
It's not much anyway, so it's probably not worth the trouble.

With superblock 1.2, the raid information is at the start of the block device and the filesystem follows it.
You can almost, but not quite, mount the filesystem normally: the partition table does not give the filesystem's start on the disk.
You can remove the raid superblock, then either edit the partition table or pass mount the offset= option so it finds the filesystem's start.
The offset= option cannot be used to mount root, well, not with a standard initrd anyway.

The raid superblock version matters.
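For the 1.2 case, the mount offset can be derived from the "Data Offset" field that mdadm -E prints for a member device. A minimal sketch (the 2048-sector value is an assumed example; read the real figure from your own mdadm -E output, and note that offset= only works through a loop device):

```shell
#!/bin/sh
# Sketch: mounting a filesystem hidden behind a 1.2 superblock, read-only.
# DATA_OFFSET_SECTORS is an example value; take the real "Data Offset"
# from `mdadm -E /dev/sdXn`. Sectors here are 512 bytes.
DATA_OFFSET_SECTORS=2048
OFFSET_BYTES=$((DATA_OFFSET_SECTORS * 512))
# offset= implies a loop device, hence the loop option; only echoed here:
echo "mount -o ro,loop,offset=$OFFSET_BYTES /dev/sdXn /mnt/test"
```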
chrisk2305
Tux's lil' helper


Joined: 05 Sep 2007
Posts: 88

Posted: Wed Dec 21, 2016 9:05 pm    Post subject:

Alright, so here Superblock 0.90 is confirmed:

Code:

fileserver christian # mdadm -E /dev/sda1
/dev/sda1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 0dc7b4d0:9a7c2a97:c44c77eb:7ee19756
  Creation Time : Sun Mar 11 20:45:44 2012
     Raid Level : raid1
  Used Dev Size : 131008 (127.94 MiB 134.15 MB)
     Array Size : 131008 (127.94 MiB 134.15 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 126

    Update Time : Wed Dec 14 14:25:37 2016
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 3c4d8875 - correct
         Events : 90


      Number   Major   Minor   RaidDevice State
this     0       8        1        0      active sync   /dev/sda1

   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       17        1      active sync   /dev/sdb1
fileserver christian # mdadm -E /dev/sdb1
/dev/sdb1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 0dc7b4d0:9a7c2a97:c44c77eb:7ee19756
  Creation Time : Sun Mar 11 20:45:44 2012
     Raid Level : raid1
  Used Dev Size : 131008 (127.94 MiB 134.15 MB)
     Array Size : 131008 (127.94 MiB 134.15 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 126

    Update Time : Wed Dec 14 14:25:37 2016
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 3c4d8887 - correct
         Events : 90


      Number   Major   Minor   RaidDevice State
this     1       8       17        1      active sync   /dev/sdb1

   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       17        1      active sync   /dev/sdb1
fileserver christian # mdadm -E /dev/sda3
/dev/sda3:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 0be553c9:12ecef82:c44c77eb:7ee19756
  Creation Time : Sun Mar 11 20:46:01 2012
     Raid Level : raid1
  Used Dev Size : 56386368 (53.77 GiB 57.74 GB)
     Array Size : 56386368 (53.77 GiB 57.74 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 125

    Update Time : Wed Dec 21 22:05:03 2016
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : b6412f64 - correct
         Events : 40190


      Number   Major   Minor   RaidDevice State
this     0       8        3        0      active sync   /dev/sda3

   0     0       8        3        0      active sync   /dev/sda3
   1     1       8       19        1      active sync   /dev/sdb3
fileserver christian # mdadm -E /dev/sdb3
/dev/sdb3:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 0be553c9:12ecef82:c44c77eb:7ee19756
  Creation Time : Sun Mar 11 20:46:01 2012
     Raid Level : raid1
  Used Dev Size : 56386368 (53.77 GiB 57.74 GB)
     Array Size : 56386368 (53.77 GiB 57.74 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 125

    Update Time : Wed Dec 21 22:05:03 2016
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : b6412f76 - correct
         Events : 40190


      Number   Major   Minor   RaidDevice State
this     1       8       19        1      active sync   /dev/sdb3

   0     0       8        3        0      active sync   /dev/sda3
   1     1       8       19        1      active sync   /dev/sdb3
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 43178
Location: 56N 3W

Posted: Wed Dec 21, 2016 10:01 pm    Post subject:

chrisk2305,

One last potential complication: is the raid set itself partitioned?
It's possible to make a raid set out of partitions, then partition the resulting raid set.
If you have done this, you will have a single partition containing several filesystems, laid out sequentially in the partition.
The kernel will only find the first one. That's probably a very bad thing, as the partition table inside the raid will disappear with the raid data.
It will still be there, but nested partition tables are not allowed, so it can't be accessed.
chrisk2305
Tux's lil' helper


Joined: 05 Sep 2007
Posts: 88

Posted: Wed Dec 21, 2016 10:03 pm    Post subject:

no, all md devices were set up separately, so there are no extra partitions on the raid itself.
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 43178
Location: 56N 3W

Posted: Thu Dec 22, 2016 9:33 am    Post subject:

chrisk2305,

Here's the test. You should be able to mount one of the raid's underlying partitions quite safely with -o ro, without disturbing the raid data.
It would be good not to have the raid assembled, so if root is involved, do it from a liveCD.
Stop the raids first, if required.
Code:
mount -o ro /dev/..  /mnt/<someplace>
should work and allow you to read the filesystem.
For raid metadata 0.9, it's then safe to remove the raid metadata and use the partition as if it were a normal partition.
For raid metadata 1.2, the mount will fail, which is expected, as I've explained.
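Put together, the test and the clean-up for the 0.9 case might look like this (device names are assumptions; --zero-superblock is irreversible, so only run it once the read-only mount has shown the data is intact; with DRY_RUN=1, the default, the commands are only printed):

```shell
#!/bin/sh
# Sketch of the test-then-clean-up sequence for 0.9 metadata.
# Devices are examples. DRY_RUN=1 (the default) prints instead of executing.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

run mdadm --stop /dev/md125              # make sure the array is not assembled
run mount -o ro /dev/sda3 /mnt/test      # 0.9: filesystem starts at sector 0
run ls /mnt/test                         # eyeball the data
run umount /mnt/test
run mdadm --zero-superblock /dev/sda3    # irreversible: erases the raid metadata
```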