Gentoo Forums
Searching for HowTo Migrate single disk to raid

Gentoo Forums Forum Index » Kernel & Hardware
frostschutz
Advocate


Joined: 22 Feb 2005
Posts: 2971
Location: Germany

PostPosted: Wed Aug 12, 2015 12:03 pm    Post subject: Reply with quote

Did you update your initramfs after changing mdadm.conf?

When you type shell, do you get a shell and is mdadm available in this shell?

Which command are you using for that? I don't use genkernel myself, it needs some option to support RAID...
ebnerjoh
Tux's lil' helper


Joined: 27 Oct 2006
Posts: 83

PostPosted: Wed Aug 12, 2015 12:26 pm    Post subject: Reply with quote

frostschutz wrote:
Did you update your initramfs after changing mdadm.conf?

When you type shell, do you get a shell and is mdadm available in this shell?

Which command are you using for that? I don't use genkernel myself, it needs some option to support RAID...


Hi!

No, I was just running
Code:
grub2-install /dev/sdb
and
Code:
grub2-mkconfig -o /boot/grub/grub.cfg


So I guess I have to run genkernel again.
ebnerjoh
Tux's lil' helper


Joined: 27 Oct 2006
Posts: 83

PostPosted: Wed Aug 12, 2015 1:36 pm    Post subject: Reply with quote

Hi!

I was running
Code:
genkernel all
and I noticed, that the initramfs in /boot/ was modified.

Nevertheless I get the same error: "Could not find root block device" with the wrong UUID...

Br,
Johannes
frostschutz
Advocate


Joined: 22 Feb 2005
Posts: 2971
Location: Germany

PostPosted: Wed Aug 12, 2015 1:46 pm    Post subject: Reply with quote

this should help https://wiki.gentoo.org/wiki/Genkernel#Loading_LVM_or_software-RAID

genkernel --mdadm and the domdadm kernel parameter

LVM too if you use it, instead of a filesystem directly on the md.

ditto for encryption. genkernel needs to be told
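[Editor's note: a quick way to confirm, from the rescue shell, that the domdadm parameter actually reached the kernel. The cmdline string below is a stand-in assembled from values quoted later in this thread; on the real box you would read /proc/cmdline itself.]

```shell
# Stand-in for the contents of /proc/cmdline; on the real system:
#   cmdline=$(cat /proc/cmdline)
cmdline='BOOT_IMAGE=/kernel-genkernel-x86_64-4.0.5-gentoo root=UUID=... ro domdadm dolvm'

# The genkernel initramfs only runs mdadm when domdadm is present
# as a whole word on the kernel command line.
case " $cmdline " in
  *" domdadm "*) echo "domdadm present" ;;
  *)             echo "domdadm missing - initramfs will skip mdadm" ;;
esac
```

If "domdadm missing" comes back, the fix belongs in the grub configuration, not in mdadm.conf.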
ebnerjoh
Tux's lil' helper


Joined: 27 Oct 2006
Posts: 83

PostPosted: Wed Aug 12, 2015 1:57 pm    Post subject: Reply with quote

I think the issue is somewhere else...

I created the mdadm.conf with "mdadm --examine --scan >> /etc/mdadm.conf".

There I have the 3 MD devices, each with a UUID.

But If I do an
Code:
ls -la
in
Code:
/dev/disk/by-uuid
I can see a different UUID for each MD-Device...

Maybe I should take these UUIDs and paste them into mdadm.conf?
frostschutz
Advocate


Joined: 22 Feb 2005
Posts: 2971
Location: Germany

PostPosted: Wed Aug 12, 2015 3:08 pm    Post subject: Reply with quote

You are confusing MD RAID UUIDs (you see them with mdadm --examine) with filesystem UUIDs (what's actually stored on the MD).

The way I understood your issue, grub works? you select a kernel entry and it scrolls a few kernel messages on the screen? But then you're stuck in kernel/initramfs? Correct so far? Then it should be the genkernel, or missing kernel parameters, unless there is another error in one of your mdadm.conf, fstab, ... file(s).
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 44153
Location: 56N 3W

PostPosted: Wed Aug 12, 2015 5:26 pm    Post subject: Reply with quote

ebnerjoh,

The steps to mount a root on raid are as follows.

grub loads the kernel and the initrd (you must have an initrd)
grub makes its own arrangements to read the raid to do this.

Once the kernel is loaded, grub passes control to it. It mounts the initrd as its root filesystem.
With a genkernel kernel, all the modules for your hardware are loaded, so the kernel can now see your hard drives.
mdadm is called to assemble your raid sets.
At this point, the /dev/md* nodes are populated and the kernel can see your filesystems.
The initrd init script mounts root, tidies up and pivot_roots to the real root.
The real init script now gets started.

If any of this breaks, you should be dropped into a rescue shell.

Do
Code:
ls /dev/sd*
do you see your hard drive partitions?
If not, something went wrong very early in the process.

What about
Code:
cat /proc/mdstat
That will show the state of your raid sets if they are assembled.
If raid assembly failed, the kernel cannot see its root filesystem.

You should be able to assemble root by hand if you need to
Code:
mdadm --assemble ...
NOT --create

Code:
cat init
will show you the end of the init script.
Execute the remaining commands to coax the box to boot.

I don't use genkernel and I hand roll my own initrd files, so I'm not sure what a genkernel initrd looks like.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
ebnerjoh
Tux's lil' helper


Joined: 27 Oct 2006
Posts: 83

PostPosted: Wed Aug 12, 2015 6:59 pm    Post subject: Reply with quote

Hi!

Just so I don't confuse you: I have now removed my real disk and my cloned disk, so the (hopefully soon-to-be) RAID disk is now sda.

Code:
ls /dev/sd*
shows me sda1 through sda4 (bios_boot, boot, swap, root)

Code:
cat /proc/mdstat
shows NO RAID system

Code:
mdadm --assemble /dev/md3
Nothing happens

Code:
cat init
bad_msg " A fatal error has occured since ${REAL_INIT:-/sbin/init} did not boot correctly, Tryin to open a shell..."

I have no ideas anymore.

Btw.: I was also running
Code:
genkernel --mdadm all
no change

Br,
Johannes
ebnerjoh
Tux's lil' helper


Joined: 27 Oct 2006
Posts: 83

PostPosted: Wed Aug 12, 2015 7:02 pm    Post subject: Reply with quote

frostschutz wrote:
You are confusing MD RAID UUIDs (you see them with mdadm --examine) with filesystem UUIDs (what's actually stored on the MD).

Ok, clear now.

frostschutz wrote:

The way I understood your issue, grub works? you select a kernel entry and it scrolls a few kernel messages on the screen? But then you're stuck in kernel/initramfs? Correct so far? Then it should be the genkernel, or missing kernel parameters, unless there is another error in one of your mdadm.conf, fstab, ... file(s).


Exactly, I am getting the Grub-Rescue.
What I am wondering about: I installed this system years ago with genkernel, on an older kernel version with RAID support. There were only two big changes since I had the degraded RAID and switched from RAID to non-RAID, rebuilding the boot partition of the still-working hard disk: I changed with "genkernel" to kernel 4.x.x, and I changed from grub to grub2...
frostschutz
Advocate


Joined: 22 Feb 2005
Posts: 2971
Location: Germany

PostPosted: Wed Aug 12, 2015 7:03 pm    Post subject: Reply with quote

Output of:

cat /proc/cmdline /proc/mdstat /etc/mdadm.conf /etc/mdadm/mdadm.conf
mdadm --verbose --assemble --scan

?

My guess is domdadm or some other parameter missing but I'm not an expert for genkernel...
ebnerjoh
Tux's lil' helper


Joined: 27 Oct 2006
Posts: 83

PostPosted: Wed Aug 12, 2015 7:46 pm    Post subject: Reply with quote

Code:
cat /proc/cmdline

-->
Code:

BOOT_IMAGE=/kernel-genkernel-x86_64-4.0.5-gentoo root=UUID=UUID-of-/dev/md3 ro domdadm dolvm


Code:
cat /proc/mdstat

-->
Code:
no raid


Code:
cat /etc/mdadm.conf

-->
Code:

# mdadm configuration file
#
# mdadm will function properly without the use of a configuration file,
# but this file is useful for keeping track of arrays and member disks.
# In general, a mdadm.conf file is created, and updated, after arrays
# are created. This is the opposite behavior of /etc/raidtab which is
# created prior to array construction.
#
#
# the config file takes two types of lines:
#
#   DEVICE lines specify a list of devices of where to look for
#     potential member disks
#
#   ARRAY lines specify information about how to identify arrays so
#     so that they can be activated
#
# You can have more than one device line and use wild cards. The first
# example includes SCSI the first partition of SCSI disks /dev/sdb,
# /dev/sdc, /dev/sdd, /dev/sdj, /dev/sdk, and /dev/sdl. The second
# line looks for array slices on IDE disks.
#
#DEVICE /dev/sd[bcdjkl]1
#DEVICE /dev/hda1 /dev/hdb1
#
# If you mount devfs on /dev, then a suitable way to list all devices is:
#DEVICE /dev/discs/*/*
#
#
# The AUTO line can control which arrays get assembled by auto-assembly,
# meaing either "mdadm -As" when there are no 'ARRAY' lines in this file,
# or "mdadm --incremental" when the array found is not listed in this file.
# By default, all arrays that are found are assembled.
# If you want to ignore all DDF arrays (maybe they are managed by dmraid),
# and only assemble 1.x arrays if which are marked for 'this' homehost,
# but assemble all others, then use
#AUTO -ddf homehost -1.x +all
#
# ARRAY lines specify an array to assemble and a method of identification.
# Arrays can currently be identified by using a UUID, superblock minor number,
# or a listing of devices.
#
#   super-minor is usually the minor number of the metadevice
#   UUID is the Universally Unique Identifier for the array
# Each can be obtained using
#
#    mdadm -D <md>
#
#ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
#ARRAY /dev/md1 super-minor=1
#ARRAY /dev/md2 devices=/dev/hda1,/dev/hdb1
#
# ARRAY lines can also specify a "spare-group" for each array.  mdadm --monitor
# will then move a spare between arrays in a spare-group if one array has a failed
# drive but no spare
#ARRAY /dev/md4 uuid=b23f3c6d:aec43a9f:fd65db85:369432df spare-group=group1
#ARRAY /dev/md5 uuid=19464854:03f71b1b:e0df2edd:246cc977 spare-group=group1
#
# When used in --follow (aka --monitor) mode, mdadm needs a
# mail address and/or a program.  This can be given with "mailaddr"
# and "program" lines to that monitoring can be started using
#    mdadm --follow --scan & echo $! > /var/run/mdadm
# If the lines are not found, mdadm will exit quietly
#MAILADDR root@mydomain.tld
#PROGRAM /usr/sbin/handle-mdadm-events
MAILADDR ...

#ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=4d49317a:be667516:073e21cd:ed2abb54 devices=/dev/sda1,/dev/sdb1
#ARRAY /dev/md3 level=raid1 num-devices=2 metadata=0.90 UUID=9da9a034:276f2d3a:073e21cd:ed2abb54 devices=/dev/sda3,/dev/sdb3
#ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 devices=/dev/sda1,/dev/sdb1
#ARRAY /dev/md3 level=raid1 num-devices=2 metadata=0.90 devices=/dev/sda3,/dev/sdb3
#ARRAY /dev/md1 UUID=e153228e:bb28606e:e73ef6c3:51f2c869
#ARRAY /dev/md3  metadata=1.2 UUID=73450d44:fa110408:1359c12d:906df243 name=gandalf:3
ARRAY /dev/md1  UUID=543741fa:229fd201:652e20a9:5b86c2aa
ARRAY /dev/md2  UUID=798d9c97:60fc60ac:863fea75:204eecf5
ARRAY /dev/md3  UUID=4513201e:b67be639:1bd70490:b0072bc3


Code:
mdadm --verbose --assemble --scan

-->
This maybe has an interesting output, but unfortunately I am not able to see the full output on the "Remote Management" console of my MicroServer. I was also not able to capture a picture with the "Print" key...

NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 44153
Location: 56N 3W

PostPosted: Wed Aug 12, 2015 8:02 pm    Post subject: Reply with quote

ebnerjoh,

Code:
mdadm --verbose --assemble --scan

will assemble and start all the raid sets it can find.
If it worked,
Code:
cat /proc/mdstat
will show them.

I'm still not convinced you don't have your UUIDs mixed up. /sbin/blkid will show both the UUID of the raid and the UUID of the filesystem on the raid set.

Code:
$ /sbin/blkid
/dev/sda1: UUID="9392926d-6408-6e7a-8663-82834138a597" TYPE="linux_raid_member" PARTUUID="0553caf4-01"
/dev/sda2: UUID="b6633d8e-41ef-4485-9bbe-c4c2d69f4e8c" TYPE="swap" PARTUUID="0553caf4-02"
/dev/sda5: UUID="5e3cadd4-cfd2-665d-9690-1ac76d8f5a5d" TYPE="linux_raid_member" PARTUUID="0553caf4-05"
/dev/sda6: UUID="9657e667-5b60-f6a3-0391-65e6dcf662fa" TYPE="linux_raid_member" PARTUUID="0553caf4-06"
/dev/sdb1: UUID="9392926d-6408-6e7a-8663-82834138a597" TYPE="linux_raid_member" PARTUUID="0553caf4-01"
/dev/sdb2: UUID="a5d62e51-ef8c-4b9d-a4cf-faf56dcaa999" TYPE="swap" PARTUUID="0553caf4-02"
/dev/sdb5: UUID="5e3cadd4-cfd2-665d-9690-1ac76d8f5a5d" TYPE="linux_raid_member" PARTUUID="0553caf4-05"
/dev/sdb6: UUID="9657e667-5b60-f6a3-0391-65e6dcf662fa" TYPE="linux_raid_member" PARTUUID="0553caf4-06"
/dev/md125: UUID="741183c2-1392-4022-a1d3-d0af8ba4a2a8" TYPE="ext2"
/dev/md126: UUID="ff5730d5-c28d-4276-b300-5b0b0fc60300" TYPE="ext4"
/dev/md127: UUID="7b2KgY-NHef-kuNk-WBAp-VnLa-h03A-b4ehGy" TYPE="LVM2_member"


The UUID reported for /dev/sd[ab]1 is identical. This is the UUID of the raid set which mdadm.conf needs.
In my case this is /boot.
These UUIDs are fed to mdadm to assemble the raid set(s).
Once the raid set is assembled, /dev/md125 exists and the kernel can see the filesystem with UUID="741183c2-1392-4022-a1d3-d0af8ba4a2a8" that is stored there.
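[Editor's note: the distinction can be checked mechanically. The sketch below filters NeddySeagoon's sample blkid output for the linux_raid_member UUIDs - the ones mdadm.conf wants - and deduplicates them, since mirrored members share one UUID. On a live system you would pipe blkid itself instead of the sample string.]

```shell
# Trimmed sample in blkid's output format; on a real box: blkid
blkid_output='/dev/sda1: UUID="9392926d-6408-6e7a-8663-82834138a597" TYPE="linux_raid_member"
/dev/sdb1: UUID="9392926d-6408-6e7a-8663-82834138a597" TYPE="linux_raid_member"
/dev/md125: UUID="741183c2-1392-4022-a1d3-d0af8ba4a2a8" TYPE="ext2"'

# Keep only raid-member lines, extract the UUID="..." field, drop duplicates.
raid_uuids=$(printf '%s\n' "$blkid_output" \
  | grep 'linux_raid_member' \
  | sed 's/.*UUID="\([^"]*\)".*/\1/' \
  | sort -u)

echo "$raid_uuids"   # one UUID per raid set, not per member disk
```

The filesystem UUID on /dev/md125 never appears in this list - that one belongs in fstab and on the kernel's root= parameter, not in mdadm.conf.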
frostschutz
Advocate


Joined: 22 Feb 2005
Posts: 2971
Location: Germany

PostPosted: Wed Aug 12, 2015 8:19 pm    Post subject: Reply with quote

Even if no raids were assembled, /proc/mdstat should say something like

Code:

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]


to show which RAID levels are supported.

If this is not the case, the appropriate modules might not be loaded, or the kernel might not support RAID at all.

Code:

$ gunzip < /proc/config.gz | grep MD_RAID # or zgrep MD_RAID /proc/config.gz if available
CONFIG_MD_RAID0=y
CONFIG_MD_RAID1=y
CONFIG_MD_RAID10=y
CONFIG_MD_RAID456=y


As for catching output in the initramfs shell: have a USB stick, mount it, and somecommand > /mnt/usb/file.txt should work - if USB and the filesystem are supported. Oh well.
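[Editor's note: a small sketch of the module-versus-builtin check that matters here. The config string is a hypothetical stand-in; on the real box you would feed it gunzip < /proc/config.gz as shown above.]

```shell
# Stand-in for the running kernel's config; the real check reads /proc/config.gz.
config='CONFIG_MD_RAID0=y
CONFIG_MD_RAID1=m
CONFIG_MD_RAID456=y'

# Any =m line means that RAID level only works if the initramfs
# actually loads the module before trying to assemble the array.
modules=$(printf '%s\n' "$config" | grep '^CONFIG_MD_RAID.*=m$' || true)
if [ -n "$modules" ]; then
  echo "built as modules: $modules"
else
  echo "all RAID levels built in"
fi
```

A root array on a =m driver is exactly the case where "Personalities :" in /proc/mdstat comes up empty in the rescue shell.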

I don't use modules :lol: stupid little buggers never loaded when you need em
ebnerjoh
Tux's lil' helper


Joined: 27 Oct 2006
Posts: 83

PostPosted: Thu Aug 13, 2015 4:11 am    Post subject: Reply with quote

frostschutz wrote:
Even if no raids were assembled, /proc/mdstat should say something like

Code:

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]


to show which RAID levels are supported.

If this is not the case the appropriate modules might not be loaded, or the kernel might not support RAID.


Yes, this is shown, and on the next line
Code:
unused devices: <none>


Your second question was also interesting. The command showed me that the RAID settings were built as modules. But when running
Code:
genkernel --menuconfig --mdadm all
I saw for all RAID Options
Code:
*
...

Will play around with that now

Br,
Johannes
ebnerjoh
Tux's lil' helper


Joined: 27 Oct 2006
Posts: 83

PostPosted: Thu Aug 13, 2015 6:24 am    Post subject: Reply with quote

So, it is working!!!

What I did:

1) Copied and extracted /proc/config.gz to /usr/src/linux
2) Changed RAID Support from "m" to "*"
3) running
Code:
genkernel oldconfig --mdadm all
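[Editor's note: step 2 is the part that is easy to get wrong by hand. A sketch of that edit - the sed pattern and the demo file are illustrative; on the real system the file is /usr/src/linux/.config and step 3's genkernel run still has to follow.]

```shell
# Stand-in for the config extracted from /proc/config.gz in step 1.
printf 'CONFIG_MD_RAID0=m\nCONFIG_MD_RAID1=m\nCONFIG_BLK_DEV_MD=y\n' > /tmp/demo.config

# Flip every CONFIG_MD_RAID*=m to =y so the RAID drivers are built in
# and the kernel can assemble the root array without loading modules.
sed -i 's/^\(CONFIG_MD_RAID[^=]*\)=m$/\1=y/' /tmp/demo.config

cat /tmp/demo.config
```

After the flip, no CONFIG_MD_RAID line should end in =m anymore.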


Thank you all for your perfect support. I didn't expect a solution anymore...

Br,
Johannes
ebnerjoh
Tux's lil' helper


Joined: 27 Oct 2006
Posts: 83

PostPosted: Sat Aug 15, 2015 8:25 am    Post subject: Reply with quote

Hi!

I have now another question:

In the past I had two swap partitions (sda + sdb) with each 4GB and striped them via "PRIO" in fstab.

Now I changed to softwar-raid again with mirroring, but I forgot to enlarge the SWAP partition. So I have now only 4GB

Is it possible that I resize the root partition with parted so that I can then resize the SWAP partition?

Br,
Johannes
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 44153
Location: 56N 3W

PostPosted: Sat Aug 15, 2015 11:00 am    Post subject: Reply with quote

ebnerjoh,

swap needs to be raided. Otherwise, programs that have data in swap will get a lobotomy when the drive carrying their swapped data fails.
These programs will crash when they need to use the swapped data.

If you use LVM add a swap volume, if not, use a swap file.
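[Editor's note: the swap-file route sidesteps the partition-resizing question entirely. A sketch - the size and path are arbitrary demo values, and mkswap/swapon need root, so they are left as comments here.]

```shell
# Reserve the space up front so the swap file is not sparse
# (a 16 MiB demo file; a real one would be sized in GB).
dd if=/dev/zero of=/tmp/swapfile bs=1M count=16 status=none

# Swap files must not be readable by anyone but root.
chmod 600 /tmp/swapfile

# As root, on the real system:
#   mkswap /swapfile && swapon /swapfile
#   echo '/swapfile none swap sw 0 0' >> /etc/fstab

ls -l /tmp/swapfile
```

Because the file lives on the mirrored md root, it inherits the RAID protection NeddySeagoon describes, unlike the old unmirrored swap partitions.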