Gentoo Forums
Yet another Hybrid Raid 0 / 1 Howto for 2.6 with dmsetup

m@cCo
n00b


Joined: 01 Mar 2005
Posts: 9

PostPosted: Thu Mar 03, 2005 12:30 pm    Post subject: Reply with quote

So do I have to create my own livecd?
Or can I load device-mapper and use dmsetup in some other way?

Thanks again
m@cCo
n00b


Joined: 01 Mar 2005
Posts: 9

PostPosted: Sun Mar 06, 2005 10:32 am    Post subject: Reply with quote

Nobody?
Phk
Guru


Joined: 02 Feb 2004
Posts: 428
Location: [undef], Lisbon, Portugal, Europe, Earth, SolarSystem, MilkyWay, 23Q Radius, Forward Time

PostPosted: Sun Mar 06, 2005 7:46 pm    Post subject: Reply with quote

Errrrm.... I'm having problems booting from the RAID0 partition,
and I don't have a clue how to fix it... :cry:

Please take a look: here

Thanks right away...
_________________
"# cat /dev/urandom >> /tmp/life"
garlicbread
Apprentice


Joined: 06 Mar 2004
Posts: 182

PostPosted: Wed Mar 09, 2005 1:34 pm    Post subject: Reply with quote

m@cCo wrote:
Nobody?


Method 1: use dmsetup on the live CD to access the raid array.
This is difficult as it's the "manual" method; you'll need to figure out what maps to pass to dmsetup before using it.

Method 2: create your own live CD with dmraid on it using catalyst (assuming dmraid recognises your array).
No idea how to do this at this point.

For myself, I have a few spare IDE drives knocking about, so I've installed an initial system onto one of these (not Raid'd) and I'm planning on moving everything across from this spare disk to the final Raid array once it's sorted.

I've nearly finished another Howto for evms / udev / dmraid on an initramfs image (using genkernel as an initial step). So I'll probably be looking into how to use catalyst next to create an emergency boot / Live CD.
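
In case it helps with Method 1, here's a rough idea of the sort of table dmsetup expects for a two-disk stripe. This is only a sketch - the length (total sectors) and chunk size are placeholders and have to match what your BIOS raid actually uses (dmraid -s or the BIOS setup screen should tell you), and "myraid0" is just a name I made up:

Code:

# load device-mapper first if it's not built into the kernel
modprobe dm-mod

# table format: <start> <length in sectors> striped <#disks> <chunk size in sectors> <disk> <offset> <disk> <offset>
echo "0 312602976 striped 2 128 /dev/sda 0 /dev/sdb 0" | dmsetup create myraid0

# the array then appears as /dev/mapper/myraid0


Don't take the numbers literally; they're only there to show the shape of the line.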
Gruffi
Apprentice


Joined: 15 Aug 2003
Posts: 209
Location: Antwerpen - Flanders - Belgium

PostPosted: Sun Mar 20, 2005 2:59 pm    Post subject: Reply with quote

Hello garlicbread,

I have been trying to get raid0 to work on my Asus A8V Deluxe.
When I boot the gen2raid CD I can see my Windows striped partition with no problem; however, the CD does not support chrooting from an amd64 CPU.
When I boot from an IDE hard disk and load the same modules the gen2raid CD loads, mount says it is not a valid partition.
I think I misconfigured something in the kernel or I'm loading the wrong modules.

Would you post your .config please?

Thanks :D

Gruffi Gummi
_________________
... and we will show Microsoft, that they cannot take whatever they want. And that Free Software is our software!
garlicbread
Apprentice


Joined: 06 Mar 2004
Posts: 182

PostPosted: Tue Mar 22, 2005 4:52 pm    Post subject: Reply with quote

Gruffi wrote:
Hello garlicbread,

I have been trying to get raid0 to work on my Asus A8V Deluxe.
When I boot the gen2raid CD I can see my Windows striped partition with no problem; however, the CD does not support chrooting from an amd64 CPU.
When I boot from an IDE hard disk and load the same modules the gen2raid CD loads, mount says it is not a valid partition.
I think I misconfigured something in the kernel or I'm loading the wrong modules.

Would you post your .config please?

Thanks :D

Gruffi Gummi


I've not used gen2raid yet,
but I think the relevant section of the kernel .config is:

Code:
#
# Multi-device support (RAID and LVM)
#
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_LINEAR=y
CONFIG_MD_RAID0=y
CONFIG_MD_RAID1=y
CONFIG_MD_RAID10=m
CONFIG_MD_RAID5=y
# CONFIG_MD_RAID6 is not set
CONFIG_MD_MULTIPATH=y
CONFIG_MD_FAULTY=m
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
CONFIG_DM_SNAPSHOT=y
CONFIG_DM_MIRROR=y
CONFIG_DM_ZERO=y
# CONFIG_DM_MULTIPATH is not set
CONFIG_BLK_DEV_DM_BBR=m
# CONFIG_DM_FLAKEY is not set


or within "make menuconfig"
Device Drivers -> Multi-device support (RAID and LVM) -> <options here>
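
Once the new kernel is booted, a quick sanity check (assuming the device-mapper userspace tools are installed) would be something like:

Code:

# device-mapper: the control node and the registered targets should show up
modprobe dm-mod 2>/dev/null
dmsetup targets

# md software raid: /proc/mdstat only exists if the md drivers are in the kernel
cat /proc/mdstat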
Erlend
Guru


Joined: 26 Dec 2004
Posts: 493

PostPosted: Wed Mar 30, 2005 11:25 am    Post subject: Reply with quote

I think the partition-mapper.sh script might be slightly broken.

This line is giving the problem:
Code:

 print 0, size, "linear", base, start | ("dmsetup create " dev);


Not sure exactly why though, as I'm not great with sh or awk.

Erlend
garlicbread
Apprentice


Joined: 06 Mar 2004
Posts: 182

PostPosted: Wed Mar 30, 2005 10:32 pm    Post subject: Reply with quote

Erlend wrote:
I think the partition-mapper.sh script might be slightly broken.

This line is giving the problem:
Code:

 print 0, size, "linear", base, start | ("dmsetup create " dev);


Not sure exactly why though, as I'm not great with sh or awk.

Erlend


One option is to grab kpartx from the multipath-tools ebuild on bugzilla:
kpartx -l /dev/mapper/<drive> will list the partitions it will create
kpartx -a /dev/mapper/<drive> will add them
kpartx -d /dev/mapper/<drive> will remove them

I've recently done a re-write of partition-mapper.sh so that it behaves just like kpartx.
Bear in mind it uses sfdisk / bash / awk / dmsetup, so make sure these are installed.

this is the new version

EDIT
this should now work without having to specify the full path to the /dev/mapper/<node>
e.g. cd /dev/mapper/
partition-mapper.sh -a pdc_rd1_gbon
should now work as well

EDIT2
New busybox-friendly version

Code:

#!/bin/sh
# This script emulates the behavior of kpartx using sfdisk / dmsetup
# Richard Westwell <garlicbread@ntlworld.com>

SFDISK_CMD="/sbin/sfdisk"
DMSETUP_CMD="/sbin/dmsetup"

ID='$Id: partition_mapper,v 1.0 2005/20/01 00:00:00 genone Exp $'
VERSION=0.`echo ${ID} | cut -d\  -f3`
PROG=`basename ${0}`
verb="0"
mode=""
delimiter=""

map_sfdisk() {
   process_list "" | while read line; do
      eval "local ${line}"
      sfdev_num=${sfdev#${sfdev_base}}
      fulldevnode="${sfdev_base}${delimiter}${sfdev_num}"
      if [ "${mode}" = "list" ];then
         echo "${fulldevnode} : 0 ${sfsize} ${device_node} ${sfstart}"
      elif [ "${mode}" = "delete" ];then
         [ "${verb}" -gt "0" ] && echo "del devmap : ${fulldevnode}"
         ${DMSETUP_CMD} remove "${fulldevnode}"
      elif [ "${mode}" = "add" ];then
         [ "${verb}" -gt "0" ] && echo "add map ${fulldevnode} : 0 ${sfsize} linear ${device_node} ${sfstart}"
         echo "0 ${sfsize} linear ${device_node} ${sfstart}" | "${DMSETUP_CMD}" create ${fulldevnode}
      fi
   done
}

process_list() {
# put together a list of variables to export for each partition
base_dev_name=`basename ${device_node}`
${SFDISK_CMD} -l -uS ${device_node} 2>/dev/null | awk '/^\// {   
   if ( $2 == "*" ) {start = $3;size = $5;}
   else {start = $2;size = $4;}
   if ( size == 0 ) next;
   ("basename "  $1) | getline dev;
   print \
   "sfstart=\""start"\";", \
   "sfsize=\""size"\";", \
   "sfdev=\""dev"\";" \
   "sfdev_base=\""base_dev_name"\";" \
   }' base_dev_name=${base_dev_name}
}

usage() {
   echo "${PROG} v. ${VERSION}
usage : ${PROG} [-a|-d|-l] [-v] wholedisk
        -a add partition devmappings
        -d del partition devmappings
        -l list partitions devmappings that would be added by -a
        -p set device name-partition number delimiter
        -v verbose"
   exit 1
}


###########
#Parse Args
###########

params=${#}
while [ ${#} -gt 0 ]
do
   a=${1}
   shift
   case "${a}" in

   -a)
      mode="add"
      device_node=${1}
      shift
      ;;
   -d)
      mode="delete"
      device_node=${1}
      shift
      ;;
   -l)
      mode="list"
      device_node=${1}
      shift
      ;;
   -p)
      delimiter=${1}
      shift
      ;;

   -v)
      let $((verb++))
      ;;
   -*)
      echo "${PROG}: Invalid option ${a}" 1>&2
      usage=y
      break
      ;;
   *)
      # Anything else just ignore
      ;;
   esac
done

[ ! -n "${mode}" ] && usage=y
[ ! -n "${device_node}" ] && usage=y
[ "${usage}" ] && usage

# make sure this is the full Absolute path
bas_nm=`basename ${device_node}`
dir_nm=`dirname ${device_node}`
[ "${dir_nm}" = "." ] && dir_nm=`pwd`
device_node="${dir_nm}/${bas_nm}"

if [ ! -b "${device_node}" ];then
   echo "${PROG}: unable to access device: ${device_node}" 1>&2
   exit 1
fi
map_sfdisk
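
A quick usage example - the device name is the one from above, and the sizes / offsets are made-up numbers, just to show the -l output layout (name : 0 size device start):

Code:

# cd /dev/mapper
# partition-mapper.sh -l pdc_rd1_gbon
pdc_rd1_gbon1 : 0 20482812 /dev/mapper/pdc_rd1_gbon 63
pdc_rd1_gbon2 : 0 377810055 /dev/mapper/pdc_rd1_gbon 20482875
# partition-mapper.sh -a pdc_rd1_gbon    (creates the partition nodes)
# partition-mapper.sh -d pdc_rd1_gbon    (removes them again)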


I'm also working on re-writing the startup scripts so that settings for dmsetup / dmraid can be pulled from /etc/dmtab, which is included with some of the newer device-mapper ebuilds
(although this has involved altering the layout of /etc/dmtab a little, it's not in full use yet anyway)


Last edited by garlicbread on Thu Apr 07, 2005 11:00 pm; edited 2 times in total
Erlend
Guru


Joined: 26 Dec 2004
Posts: 493

PostPosted: Wed Mar 30, 2005 11:46 pm    Post subject: Reply with quote

That script is great, thanks. I ran it and it works (I've been wondering for some time now: why does dmraid do a lot of fancy metadata stuff when it is possible to just use a script like this? Is dmraid supposed to be safer?).

The only thing with your script is that it didn't run without editing...
Code:
map_sfdisk() {
   while read line; do
      eval "local ${line}"
      sfdev_num=${sfdev#${sfdev_base}}
      fulldevnode="${sfdev_base}${delimiter}${sfdev_num}"
      if [ ${mode} = "list" ];then
         echo "${fulldevnode} : 0 ${sfsize} ${device_node} ${sfstart}"
      elif [ ${mode} = "delete" ];then
         [ "${verb}" -gt "0" ] && echo "del devmap : ${fulldevnode}"
         ${DMSETUP_CMD} remove "${fulldevnode}"
      elif [ ${mode} = "add" ];then
         [ "${verb}" -gt "0" ] && echo "add map ${fulldevnode} : 0 ${sfsize} linear ${device_node} ${sfstart}"
         echo "0 ${sfsize} linear ${device_node} ${sfstart}" | ${DMSETUP_CMD} create "${fulldevnode}"
      fi
   done <<<"$(process_list)"
}


As you can see I've changed:
Code:
echo "0 ${sfsize} linear ${sfdev_base} ${sfstart}" | ${DMSETUP_CMD} create "${fulldevnode}"

to
Code:
echo "0 ${sfsize} linear ${device_node} ${sfstart}" | ${DMSETUP_CMD} create "${fulldevnode}"


As I think you need to access the device as /dev/mapper/name rather than just name.

Cheers,

Erlend
garlicbread
Apprentice


Joined: 06 Mar 2004
Posts: 182

PostPosted: Thu Mar 31, 2005 11:35 am    Post subject: Reply with quote

Thanks for the amendment.

As far as I know, dmraid does do some checking of the metadata.
E.g. if the BIOS has marked a Raid 1 array as being out of sync, then I think dmraid will refuse to activate it.
For Raid 0 there is no checking as such (since Raid 0 has no redundancy anyway).

Also, I think it can work out which drives belong to which arrays.
E.g. I have 4 drives that make up 2 arrays:
/dev/sda /dev/sdb (Via Raid 0) - mapped via dmraid
/dev/sdc /dev/sdd (PDC Raid 1) - mapped via dmsetup (as dmraid thinks the array is half the size it should be)
If one of the drives in the middle, e.g. sdb, went missing, then this would mess up the whole mapping for both arrays if dmsetup was used, but dmraid should be able to distinguish which drives belong to which arrays by looking at the metadata.

One thing I plan on doing is using udev to create symbolic links for the individual drives based on the drive serial number, and then to pass the symbolic links to dmsetup for the mapping.
That way you could completely re-arrange the drives (the BIOS or Windows might not like that, but Linux would just auto-map the correct drives to the correct links based on the serial number).
The problem is that the serial number for SATA drives is not currently visible under sysfs, so I may have to find a utility that can display the serial number based on the major / minor device number (dmraid can show this info based on the block device name, so I may be able to write a wrapper script to sit in between udev and dmraid or some other util).
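
Just to illustrate the idea (untested, and the helper script is made up): udev can call out to a small program that prints the serial number and use the result for a symlink. Something along these lines, although the exact rule syntax depends on the udev version:

Code:

# /etc/udev/rules.d/10-raid-serial.rules  (match / assignment syntax varies between udev versions)
KERNEL="sd[a-z]", PROGRAM="/usr/local/bin/disk-serial %k", SYMLINK="raiddisk/%c"


and the hypothetical helper it calls:

Code:

#!/bin/sh
# /usr/local/bin/disk-serial - print the serial number of /dev/$1
# uses smartctl from smartmontools; SATA drives behind libata may need the "-d ata" switch
/usr/sbin/smartctl -i "/dev/$1" 2>/dev/null | \
   awk -F: '/Serial Number/ { gsub(/ /, "", $2); print $2 }'


The dmsetup (or /etc/dmrtab) lines could then refer to /dev/raiddisk/<serial> instead of /dev/sda etc.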
zpet731
Tux's lil' helper


Joined: 24 Mar 2004
Posts: 133
Location: Sydney Australia / Belgrade Serbia

PostPosted: Thu Mar 31, 2005 1:39 pm    Post subject: Reply with quote

Hi, I posted to another thread earlier - actually two, but they say third time's lucky. Well, actually I just want to see what works best before I start installing.

I currently built a:

AMD64 system 3200+
GA-N8NF-9 motherboard
6600 GT graphics card
1GB RAM
2 SATA 160GB drives

Now, I'm only planning to run Gentoo on this system, so no dual boot or anything.

I've read quite a bit of the SATA raid threads; most of them are excellent, but I still need a few things answered before I start installing Gentoo on it. I'm using a minimal 2005.0 image that I downloaded off the net.

Now, if I am to use a Raid 0 setup, what is my best option: do I use the motherboard raid or not? I'm not sure which way is better, so hopefully someone can enlighten me on this issue.

Also, if I am to use software raid and control it completely from Linux, do I need to disable the raid in the BIOS? My motherboard asks me to set up the array each time I boot up, and the SATA raid is enabled by default. Can someone explain what needs to be done? Thanks!!!

I would also like to know which one is faster (BIOS or kernel) and puts less strain on the CPU, or is it the same?
_________________
" Invention is the most important product of man's creative brain. The ultimate purpose is the complete mastery of mind over the material world, the harnessing of human nature to human needs."
Nikola Tesla
garlicbread
Apprentice


Joined: 06 Mar 2004
Posts: 182

PostPosted: Thu Mar 31, 2005 3:02 pm    Post subject: Reply with quote

If you're planning to dual-boot Windows / Linux on the same array, then setting up the array via the BIOS is the way to go.
If this is just for Linux (no Windows), then software Raid is easier to set up (especially if you use evms).

Something to bear in mind is that Linux cannot access the drives through the BIOS.
For Linux-only software Raid, you simply turn off the BIOS raid support and set it up within Linux.
For Linux Raid support that co-exists with the BIOS setup (which is needed to dual boot), you can either use device-mapper to read / write the data to the disks the same way the BIOS would do,
or use a Linux software Raid setup that has the metadata section turned off, which essentially does the same thing (although this is actually more difficult to set up, I think).

In terms of speed,
I've found that a BIOS raid array accessed via device-mapper (dmraid / dmsetup) tends to be a bit faster than software raid.
This is the same graph that's linked at the bottom of the first message;
it has some results for a pair of Maxtor 200Gb SATA drives (I haven't got around to benchmarking my 10K Raptor drives yet, however).
For Raid 1 it looks as if the results are not much different between software / device-mapper.
For Raid 0 it looks as if the results are around 89K flat line for software Raid (via) and the max speed of the disk for device-mapper raid (the stepping on the graph represents the actual zones of the disk, which means the full capacity is being used, ranging from 120K at the start down to 89K).
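
For the Linux-only case, a minimal sketch with mdadm (the partition names and chunk size are just examples - substitute whatever your two raid partitions actually are):

Code:

# plain linux software raid 0 across two partitions, then a filesystem on top
mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/sda1 /dev/sdb1
mke2fs -j /dev/md0

# mdadm --build does roughly the same without writing a superblock to the disks,
# which is the "metadata turned off" variant mentioned above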
Erlend
Guru


Joined: 26 Dec 2004
Posts: 493

PostPosted: Thu Mar 31, 2005 4:39 pm    Post subject: Reply with quote

garlicbread, does your script work on extended partitions?

Is that graph showing read/write speed from beginning to end of the drive? If so, why does Promise Linear Raid 0 64KS 400GB Software not get slower towards the end of the drive? Oh, and out of curiosity, since you've done benchmarks, what would you expect to get out of 2 Seagate Barracudas with Promise FastTrak raid 0? I'm getting about 70 MB/s with that, but that is after the first 120GB (or 240GB of the array).

Thanks,

Erlend
garlicbread
Apprentice


Joined: 06 Mar 2004
Posts: 182

PostPosted: Fri Apr 01, 2005 7:59 am    Post subject: Reply with quote

Both kpartx and the above script should map primary and extended partitions; dmraid seems to do this okay as well for the moment.
I think the only difference is that while kpartx skips the 5th one (the one that normally represents the entire extended region), the script will map this in as well. Both appear to map any partitions within the extended region without a problem.

I've noticed that the Promise controller appears to be slower than the VIA chipset on my own mobo; I think this is due to the Promise connecting through the PCI bus (so I think it depends on how the chipset is connected).
Also, dmraid has a tendency to map the main drive node for the Promise to half the size it should be (which in turn stops the partitions from being mapped properly), so I still have to use dmsetup to map the main drive node for the Promise controller.

For Raid 0 on the Promise controller:
through device-mapper this can be seen as the turquoise line just above the red one, which starts off at approx 95Mb/s and then trails off at the end (although it's masked by the red line in front) down to about 82Mb/s.
I think this means it's bottlenecking at the beginning of the disk (as it's a flat line) due to the PCI bus, and then near the end the combined drive speed is less than the available bus speed, which is why it zones down.

Through software Raid it seems to flat-line at about 69MB/s; I suspect this doesn't zone down at the end as it's already beneath the total drive speed capability all the way through (which is approx 82Mb/s minimum from looking at the previous graph).
zpet731
Tux's lil' helper


Joined: 24 Mar 2004
Posts: 133
Location: Sydney Australia / Belgrade Serbia

PostPosted: Sun Apr 03, 2005 3:12 am    Post subject: Reply with quote

Thanks Garlicbread,

I decided to go for software raid as I am only using Linux, even though your benchmarks show that BIOS raid 0 gets slightly better performance. After I completed everything I was very satisfied with the results: I achieved 54-58MB/s for the individual hard disks and 106-112MB/s for raid 0. Therefore I'm a very happy Gentoo user... :P
_________________
" Invention is the most important product of man's creative brain. The ultimate purpose is the complete mastery of mind over the material world, the harnessing of human nature to human needs."
Nikola Tesla
movrev
Tux's lil' helper


Joined: 07 Mar 2004
Posts: 114
Location: Berkeley, CA - USA

PostPosted: Sun Apr 03, 2005 4:13 am    Post subject: nForce4 RAID 1 with 2 Maxtor 200GB SATA Drives... Reply with quote

I have read almost the whole thread, but cannot make up my mind regarding RAID.

I am dual-booting WinXP and gentoo64, so I set up RAID 1 through the BIOS and used the provided drivers for the Windows install without a problem (I am booting from this array). I am now ready to install gentoo64, but I don't know exactly what I should use. My array has 1 primary and 1 extended partition with 6 logical drives. Windows is installed on the last logical drive, and the primary partition is going to be /boot.

At this point in time, is dmraid good enough for what I need to do, or should I go with dmsetup? Also, is there any quick way of setting up an initrd or something that would let me use the array for booting? Also, from what I understand, the method that uses device-mapper for RAID 1 reads and writes at the same speed as one hard drive, right? So, technically, the only good thing about this setup is data resiliency (permanent backup) and nothing more. I would love to have it read a part from each disk so as to double read speed as well.

Lamentably, I am starting to think of not using this fake RAID at all :(... it seems too hard to maintain, plus it may be unstable.
flipy
Apprentice


Joined: 15 Jul 2004
Posts: 229

PostPosted: Sun Apr 03, 2005 7:39 am    Post subject: Re: nForce4 RAID 1 with 2 Maxtor 200GB SATA Drives... Reply with quote

movrev wrote:
I have read almost the whole thread, but cannot make up my mind regarding RAID.

I am dual-booting WinXP and gentoo64, so I set up RAID 1 through the BIOS and used the provided drivers for the Windows install without a problem (I am booting from this array). I am now ready to install gentoo64, but I don't know exactly what I should use. My array has 1 primary and 1 extended partition with 6 logical drives. Windows is installed on the last logical drive, and the primary partition is going to be /boot.

At this point in time, is dmraid good enough for what I need to do, or should I go with dmsetup? Also, is there any quick way of setting up an initrd or something that would let me use the array for booting? Also, from what I understand, the method that uses device-mapper for RAID 1 reads and writes at the same speed as one hard drive, right? So, technically, the only good thing about this setup is data resiliency (permanent backup) and nothing more. I would love to have it read a part from each disk so as to double read speed as well.

Lamentably, I am starting to think of not using this fake RAID at all :(... it seems too hard to maintain, plus it may be unstable.

Well, first, you can use dmraid: try downloading gen2dmraid 0.99a (which uses pure udev), see if it autodetects your raid, and check the size of the partitions (AFAIK dmraid had a bug with more than 4 partitions... but try it anyway).
Moreover, RAID 1 is for data consistency, so if you ever have any problems with the primary disk, the raid should detect and correct that.
RAID 0 is for speed, and will increase your read/write to almost 180%.
Setting up RAID 0 following garlicbread's steps is quite easy, but I downloaded the dmraid initrd and hacked it (I think someone posted how to do that on the 1st page).
_________________
If you don't understand something, read the Handbook carefully.
movrev
Tux's lil' helper


Joined: 07 Mar 2004
Posts: 114
Location: Berkeley, CA - USA

PostPosted: Sun Apr 03, 2005 6:07 pm    Post subject: Re: nForce4 RAID 1 with 2 Maxtor 200GB SATA Drives... Reply with quote

flipy wrote:

Well, first, you can use dmraid: try downloading gen2dmraid 0.99a (which uses pure udev), see if it autodetects your raid, and check the size of the partitions (AFAIK dmraid had a bug with more than 4 partitions... but try it anyway).
Moreover, RAID 1 is for data consistency, so if you ever have any problems with the primary disk, the raid should detect and correct that.


Booting with gen2dmraid 0.99a gave me all my partitions as far as I can tell:


    Raid 1 Array (2 x 200GB Maxtor SATA)

    nvidia_dagcbaeb (mapper device)
    nvidia_dagcbaeb1
    nvidia_dagcbaeb5
    nvidia_dagcbaeb6
    nvidia_dagcbaeb7
    nvidia_dagcbaeb8
    nvidia_dagcbaeb9
    nvidia_dagcbaeb10


Correct me if I am wrong, but these seem to be the devices that map onto my two hard drives, right? I don't know where dmraid got the names from, but as far as I can tell it correctly recognized my primary partition and the 6 logical partitions inside the extended one.

I used fdisk on the nvidia_dagcbaeb device and the sizes are exactly what I had formatted the different partitions to be. So, I booted with the gen2dmraid + vga option and it autodetected my network and gave me a simple framebuffer, which I could technically use to install.

However, you guys are talking about hacking the initrd to make it recognize and set up the RAID array at boot. I usually use splashutils to make this initrd because I like to have a framebuffer + splash. Is it possible to hack this initrd to make it usable for this purpose? I would also have to hack it every time I make a new initrd, which thankfully is not that often. Would I have to mess with anything other than the initrd and, well, the usual grub.conf?

Coming back to your last points about RAID: I understand that RAID0 gives you speed, but I want data resiliency, which confines me to RAID1. You are saying that RAID1 will not only keep a permanent backup but also correct errors, which I understand. However, will it correct errors in Linux? Because it seems to me that, since we are not identifying it as RAID, it will only save the same thing to two hard drives at the same time (which is what we want, but a power failure could break that). Would I have to use the BIOS to rebuild it then? Or Windows?

Also, for those of you running it so far: how stable is this? I want RAID 1 to keep a permanent backup without data corruption, and maybe give an increase in read speed, and I would die if this setup under Linux ruined my data. It would just defeat the purpose of making all this effort. Thanks for your help.
garlicbread
Apprentice


Joined: 06 Mar 2004
Posts: 182

PostPosted: Sun Apr 03, 2005 8:52 pm    Post subject: Reply with quote

I've been writing some scripts recently to allow easy editing / modification of initrd / initramfs files.
As far as I know, splashutils creates an initramfs file and just puts some config / image data into it, which the splash driver in the kernel picks up on.

But so far I've not seen widespread use of initramfs for booting a script at runtime to make boot devices visible (e.g. for evms or dmraid),
as there are some differences that need to be present within the script (i.e. run-init from klibc instead of pivot_root).
My system at the moment is using an initramfs image with evms and pure udev,
but I still need to try device-mapper raid out with this as well.
I should be releasing another howto on this pretty soon, now that I've finished the scripts below.
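
For anyone wondering, the /init inside such an initramfs ends up looking roughly like the sketch below. This is only the general shape (assuming a busybox sh, klibc's run-init and dmraid copied into the image, and device nodes created by udev or mknod) - it's not the actual script from the howto, and the root device name is a placeholder:

Code:

#!/bin/sh
# rough sketch of an initramfs /init
mount -t proc proc /proc
mount -t sysfs sysfs /sys

# bring up the bios raid arrays so the root device node appears under /dev/mapper
/sbin/dmraid -ay

# mount the real root (placeholder name) and hand control over to it
# run-init (from klibc) takes the place of pivot_root when booting from initramfs
mount -o ro /dev/mapper/via_xxxxxxxx3 /newroot
umount /sys /proc
exec run-init /newroot /sbin/init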


Note: for resilience it may be better to use Linux software raid 1 at the moment, as that's something that's been tried / tested.
In terms of speed it's not much different from device-mapper raid 1; the only down side is the lack of compatibility with Windows.

Raid 1 was primarily designed for servers, for maximum up time:
the idea is that if one drive fails it can just be swapped out for another and it'll keep on ticking.
However, it's still no substitute for proper backups (if both drives fail at the same time you're still screwed).
By using device-mapper raid you're just trying to write to the disks under Linux the same way the BIOS or the official Windows drivers would,
but this is still fairly beta at the moment; from what I've seen of dmraid it's not entirely finished yet, and mapping things manually can be a bit tricky.


Here are some new scripts that may come in handy - let me know what you think.

First, a modified version of /etc/dmtab.
I've seen this included with the latest version of device-mapper, but so far it only appears to be used by /lib/rcscripts/addons/dm-start.sh,
which I think is something still being worked on.
I've modified the rule set by adding another field, and called the config file something else to avoid a conflict.

/etc/dmrtab
Code:

# Format: <volume name> : <options> : <table>
# Example: isw:p: 0 312602976 striped 2 128 /dev/sda 0 /dev/sdb 0

# Modified for own use with partition-mapper and dmraid

# 1. The first field indicates the volume name for standard dmsetup type rules
#    or the program name when 'e' is present in the second option field
# 2. Second field indicates certain options for the script to process
#    valid values currently are 'e', 'p' or 'x'
#    'p' indicates that partition-mapper or kpartx should be used to map out the partitions after creation
#    'e' indicates that the rule is not for dmsetup (i.e. for dmraid)
#    'x' indicates that the rule should not be mapped with the 'all' target, it can only be mapped by its given name
# 3. Third field indicates the table to be passed to dmsetup, or the command line options for dmraid

# Example dmsetup type mapping rules
#pdc_raid1_dev:xp: 0 398296960 mirror core 2 128 nosync 2 /dev/sda 0 /dev/sdb 0
#pdc_raid1_dev:: 0 398296960 mirror core 2 128 nosync 2 /dev/sda 0 /dev/sdb 0

# non-dmsetup rule indicated by "e" in the second field
# The first field should indicate the program name, so far dmraid is the only one supported
# The third field represents any additional command line options to be passed to dmraid
# e.g. such as -f pdc to specify only the "pdc" type arrays
# command line options for -an and -ay (activate/deactivate) are added in by the script automatically
#dmraid:e: -fvia via_bjedgggbc

# -p in the third field tells dmraid not to map the partitions itself
# p in the second field will use kpartx or partition-mapper to map out the partitions instead
#dmraid:ep: -p -fvia via_bjedgggbc


EDIT
New busybox-friendly version.
Changed the preference to partition-mapper.sh instead of kpartx
due to a timing issue between udev and kpartx.

Next a script to actually use it
/usr/local/bin/dmmap
Code:

#!/bin/sh

DMTAB="/etc/dmrtab"
PARTMAPPER_DIR="/dev/mapper"
DMSETUP_CMD="/sbin/dmsetup"
DMRAID_CMD="/usr/sbin/dmraid"
KPARTX_CMD="/sbin/kpartx"
PARTMAPPER_CMD="/usr/local/bin/partition-mapper.sh"
PM_CMD="${PARTMAPPER_CMD}"

ID='$Id: dmmap,v 1.0 2005/20/01 00:00:00 genone Exp $'
VERSION=0.`echo ${ID} | cut -d\  -f3`
PROG=`basename ${0}`
verb="0"

# used to return string values from functions
retvalue=""

map_list_target() {   
   # Filter comments and blank lines
   #each loop equals one valid entry in ${DMTAB}
   grep -n -v -e '^[[:space:]]*\(#\|$\)' "${DMTAB}" | \
   while read line_entry; do
      local test1=""
      auto_partition="false"
      dm_raidrule="false"
      all_exclude="false"
      
      # grab line number from first field (added by grep)
      get_first_field "${line_entry}"; line_number="${retvalue}"
      line_entry="${line_entry#*:}"
      
      # now grab the volume name from the next field
      get_first_field "${line_entry}"; volume_name="${retvalue}"
      line_entry="${line_entry#*:}"
      
      # grab any volume options such as if to partition the drive as well
      get_first_field "${line_entry}"; volume_option="${retvalue}"
      line_entry="${line_entry#*:}"
      
      # grab any parms to be passed to dmsetup or dmraid (remainder of the line)
      volume_parms="${line_entry}"
      
      # check to make sure that volume_name / volume_parms fields are not empty
      if [ ! -n "${volume_name}" ] || [ ! -n "${volume_parms}" ];then
         echo "${PROG}: error fields empty or incorrect number of fields, Line ${line_number}" && continue
      fi
      
      test1=`echo "${volume_option}" | grep "p" 2>/dev/null`
      [ -n "${test1}" ] && {
         auto_partition="true"
         [ -x "${KPARTX_CMD}" ] && PM_CMD="${KPARTX_CMD}"
         [ -x "${PARTMAPPER_CMD}" ] && PM_CMD="${PARTMAPPER_CMD}"
         [ ! -n "${PM_CMD}" ] && echo "${PROG}: Error unable to locate kpartx or partition-mapper script" && exit 1
         }
      
      test1=`echo "${volume_option}" | grep "e" 2>/dev/null`
      [ -n "${test1}" ] && {
         dm_raidrule="true"
         [ ! -x "${DMRAID_CMD}" ] && echo "${PROG}: Error unable to locate ${DMRAID_CMD}" && exit 1
         }
      [ "${dm_raidrule}" = "false" ] && {
         [ ! -x "${DMSETUP_CMD}" ] && echo "${PROG}: Error unable to locate ${DMSETUP_CMD}" && exit 1
         [ ! -c "/dev/mapper/control" ] && echo "${PROG}: Error unable to locate /dev/mapper/control" && exit 1
         }
      test1=`echo "${volume_option}" | grep "x" 2>/dev/null`
      [ -n "${test1}" ] && all_exclude="true"
      
      if [ ! "${list_target}" = "all" ] && [ ! "${list_target}" = "${volume_name}" ];then
         continue
      fi
      if [ "${list_target}" = "all" ] && [ "${all_exclude}" = "true" ];then
         continue
      fi
      
      # use dmraid to process the rule
      [ "${dm_raidrule}" = "true" ] && map_dmraid
      # use dmsetup to process the rule
      [ "${dm_raidrule}" = "false" ] && map_dmsetup   
   done
   return 0
}


map_dmraid() {   
   # Strip off the -p
   local list_parms=`echo "${volume_parms}" | sed "s/-p//"`
   # List of volume names that would be created
   local dmraid_volnames=`"${DMRAID_CMD}" -s "${list_parms}" | grep name | sed "s/name[ \t]*://;s/^[ \t]*//;s/[ \t]*$//"`
   
   if [ "${volume_name}" = "dmraid" ];then
      if [ "${mode}" = "add" ];then
         # Map the main drive nodes
         [ "${verb}" -gt "0" ] && echo "${PROG}: activating dmraid with parameters -ay ${volume_parms}"
         if ! ("${DMRAID_CMD}" -ay "${volume_parms}"); then
            #"${DMRAID_CMD}" -ay "${volume_parms}" || {
            echo "${PROG} there was a problem with ${DMRAID_CMD}"
            return 1
         fi
      
         # If partition-mapper / kpartx is to be used
         if [ "${auto_partition}" = "true" ];then
            echo "${dmraid_volnames}" | while read x; do
               map_part_mapper -a "${PARTMAPPER_DIR}/${x}" || return 1
            done
         fi
      
      elif [ "${mode}" = "delete" ];then
         # Remove the partition nodes if partition-mapper / kpartx is to be used
         if [ "${auto_partition}" = "true" ];then
            echo "${dmraid_volnames}" | while read x; do
               map_part_mapper -d "${PARTMAPPER_DIR}/${x}" || return 1
            done
         fi
      
         # Remove the main Drive nodes
         [ "${verb}" -gt "0" ] && echo "${PROG}: deactivating dmraid with parameters -an ${volume_parms}"
         if ! ("${DMRAID_CMD}" -an "${volume_parms}"); then
            echo "${PROG} there was a problem with ${DMRAID_CMD} during the removal of ${volume_parms}"
            return 1
         fi
      
      elif [ "${mode}" = "list" ];then
         local x=""
         [ "${auto_partition}" = "true" ] && x="p:"
         echo "${volume_name}:${x} Command Line Opts: ${volume_parms}"
         "${DMRAID_CMD}" -s -ccc "${list_parms}" 2>/dev/null | while read x; do
            echo "   $x"
         done
         echo ""
      fi
   fi   
}


map_dmsetup() {
   if [ "${mode}" = "add" ]; then
      # Skip if already mapped
      dmvolume_exists "${volume_name}" && return 0
      [ "${verb}" -gt "0" ] && echo "${PROG}: creating volume ${volume_name}:${volume_parms}"
      if ! (echo "${volume_parms}" | "${DMSETUP_CMD}" create "${volume_name}"); then
         echo "Error creating volume: ${volume_name}"
         return 1
      fi
      [ "${auto_partition}" = "true" ] && map_part_mapper -a "${PARTMAPPER_DIR}/${volume_name}"
   elif [ "${mode}" = "delete" ] && [ "${dm_raidrule}" = "false" ];then
      # Skip if not already mapped
      dmvolume_exists "${volume_name}" || return 0
      [ "${verb}" -gt "0" ] && echo "${PROG}: removing volume ${volume_name}:${volume_parms}"
      [ "${auto_partition}" = "true" ] && map_part_mapper -d "${PARTMAPPER_DIR}/${volume_name}"
      "${DMSETUP_CMD}" remove "${volume_name}"
   elif [ "${mode}" = "list" ]; then
      local x=""
      [ "${auto_partition}" = "true" ] && x="p:"
      echo "dmsetup:${x} ${volume_name}: ${volume_parms}"
   fi
}

usage() {
   echo "${PROG} v. ${VERSION}
   ${PROG} activate a device mapper entry within the table ${DMTAB}
usage : ${PROG} [-a|-d|-l] [-v] wholedisk
        -a add device devmappings
        -d del device devmappings
        -l list device devmappings that would be added by -a
        -v verbose"
   exit 1
}

map_part_mapper() {
   [ "${verb}" -gt "0" ] && echo "${PROG}: calling ${PM_CMD} ${@}"
   "${PM_CMD}" "${@}" || {
   echo "${PROG} there was a problem with ${PM_CMD}"
   return 1
   }
}

get_first_field() {
   local temp
   # Use sed to extract the first ":" separated field from the line
   temp=`echo ${@} | sed "s/:\(.*\)//"`
   # remove any trailing / leading spaces
   temp=`echo ${temp} | sed "s/^[ \t]*//;s/[ \t]*$//"`
   retvalue="${temp}"
   return 0
}

#   Return true if volume already exists in DM table
dmvolume_exists() {
   local test1 x line volume=$1
   [ -z "${volume}" ] && return 1
   
   test1=`${DMSETUP_CMD} ls 2>/dev/null | grep "${volume}" 2>/dev/null`
   [ -n "${test1}" ] && return 0
   return 1
}


[ ! -f "${DMTAB}" ] && echo "${PROG}: Error unable to locate ${DMTAB}" && exit

###########
#Parse Args
###########

params=${#}
while [ ${#} -gt 0 ]
do
   a=${1}
   shift
   case "${a}" in

   -a)
      mode="add"
      list_target=${1}
      shift
      ;;
   -d)
      mode="delete"
      list_target=${1}
      shift
      ;;
   -l)
      mode="list"
      list_target=${1}
      shift
      ;;
   -v)
      let $((verb++))
      ;;
   -*)
      echo "${PROG}: Invalid option ${a}" 1>&2
      usage=y
      break
      ;;
   *)
      # Anything else just ignore
      ;;
   esac
done

[ ! -n "${mode}" ] && usage=y
[ ! -n "${list_target}" ] && usage=y

[ "${usage}" ] && usage
[ "${verb}" -gt "0" ] && echo "${PROG}: specified targets are ${list_target}"
map_list_target


Finally, an init script (this won't be enough if you're planning on having your root device on the array).

/etc/init.d/dmraidmapper
Code:

#!/sbin/runscript

depend() {
        before checkfs evms
        need checkroot modules
}

start() {
        ebegin "Initializing software mapped RAID devices"
        /usr/local/bin/dmmap -a all
        eend $? "Error initializing software mapped RAID devices"
}

stop() {
        ebegin "Removing software mapped RAID devices"
        /usr/local/bin/dmmap -d all
        eend $? "Failed to remove software mapped RAID devices."
}



the dmmap script uses the config file to create or remove device nodes and partition nodes
the config file can contain settings for manual mappings from dmsetup or for settings for dmraid
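
To try it out by hand, and then have it run at boot (the "all" target just means every rule in /etc/dmrtab that isn't marked with 'x'):

Code:

# dry run - show what would be mapped
/usr/local/bin/dmmap -l all

# map everything / remove it again
/usr/local/bin/dmmap -a all
/usr/local/bin/dmmap -d all

# add the init script to the boot runlevel
rc-update add dmraidmapper boot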

I'm not a gentoo developer or anything so I'm not saying this is the best or standard way of doing things
but this works for me (until something better comes along at least)

EDIT
now seems to work okay under busybox


Last edited by garlicbread on Sun Apr 10, 2005 1:49 pm; edited 2 times in total
flipy
Apprentice


Joined: 15 Jul 2004
Posts: 229

PostPosted: Sun Apr 03, 2005 9:50 pm    Post subject: Reply with quote

garlicbread, if you need some testing, let me know. I've got an amd64 with VIA raid set up (fake-raid dual boot).
_________________
If you don't understand something, read the Handbook carefully.
movrev
Tux's lil' helper


Joined: 07 Mar 2004
Posts: 114
Location: Berkeley, CA - USA

PostPosted: Sun Apr 03, 2005 10:14 pm    Post subject: Reply with quote

Thanks for your extensive reply garlicbread, but I think I am going to desist from using RAID 1 for now until all this is at least tested. I really want to start using my new computer, which I have now been working on for about a week, and if I start testing all these scripts I will have to wait several more weeks and still not be as stable as I would want to be.
flipy
Apprentice


Joined: 15 Jul 2004
Posts: 229

PostPosted: Sun Apr 03, 2005 11:02 pm    Post subject: Reply with quote

movrev wrote:
Thanks for your extensive reply garlicbread, but I think I am going to desist from using RAID 1 for now until all this is at least tested. I really want to start using my new computer, which I have now been working on for about a week, and if I start testing all these scripts I will have to wait several more weeks and still not be as stable as I would want to be.

Using RAID with Gentoo is easy: just follow any how-to you like and you'll have it. Try dmraid; it should detect everything (and genkernel has it).
_________________
If you don't understand something, read the Handbook carefully.
movrev
Tux's lil' helper


Joined: 07 Mar 2004
Posts: 114
Location: Berkeley, CA - USA

PostPosted: Sun Apr 03, 2005 11:13 pm    Post subject: Reply with quote

flipy wrote:
Using RAID with Gentoo is easy: just follow any how-to you like and you'll have it. Try dmraid; it should detect everything (and genkernel has it).


Yeah, as I said before, dmraid detected everything, but my issue is using this array as my boot partition. What do you mean when you say that genkernel has it? I don't usually go with genkernel, as I configure the kernel myself. Thanks.
garlicbread
Apprentice


Joined: 06 Mar 2004
Posts: 182

PostPosted: Sun Apr 03, 2005 11:15 pm    Post subject: Reply with quote

Arrgh, just when I thought I had this sorted...
Everything works fine when the system is already booted up with a full version of bash,
but getting the scripts to work under a busybox version of sh is just a pain.
Each time I fix one thing, it throws up something else :(
flipy
Apprentice


Joined: 15 Jul 2004
Posts: 229

PostPosted: Mon Apr 04, 2005 7:28 am    Post subject: Reply with quote

movrev wrote:
flipy wrote:
Using RAID with Gentoo is easy: just follow any how-to you like and you'll have it. Try dmraid; it should detect everything (and genkernel has it).


Yeah, as I said before, dmraid detected everything, but my issue is using this array as my boot partition. What do you mean when you say that genkernel has it? I don't usually go with genkernel, as I configure the kernel myself. Thanks.

The genkernel devs have included dmraid (along with udev and some other stuff), so you just need to run
Code:
genkernel --udev --dmraid all
and everything should be done for you.
For the boot partition, I ran into many problems until I found my issue. I guess you know how to install grub onto the boot partition, and after that the machine should boot from it...
Reading your posts again...
From what I've seen, dmraid works out of the box; the only issue is that in the past it only detected 4 partitions, and sometimes got the sizes messed up (which doesn't seem to be a problem for you).
As garlicbread said, RAID 1 is just for data consistency, but we're talking about hardware issues, so if the power goes down or your sister throws a cup of coffee into your tower, your data will be lost...
In terms of speed, AFAIK, with RAID 0 I get almost the same speed under Linux as under Windows (I have to do some more benchmarks), so I'll guess it will be the same with RAID 1.
If you ever have any trouble and need to rebuild the raid, I think it has to be done under Windows or with the BIOS utility.
For the initrd thing, I've been unable to find out how to have *splash and the initrd at the same time, but you can try gensplash built into the bzImage plus the initrd; that should work (mind putting everything into modules).
I hope you don't give up on gentoo64, as it's the best distro for getting involved with your system.
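
For reference, the grub.conf entry for a genkernel kernel + initrd ends up looking something like the example below. The file names and the real_root device are only placeholders (use whatever genkernel actually produced and whichever /dev/mapper node holds your root), and check the genkernel docs for whether the initrd needs an extra boot option to activate dmraid:

Code:

title Gentoo (genkernel + dmraid)
root (hd0,0)
kernel /kernel-genkernel-x86_64-2.6.11 root=/dev/ram0 real_root=/dev/mapper/nvidia_dagcbaeb7
initrd /initramfs-genkernel-x86_64-2.6.11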
_________________
If you don't understand something, read the Handbook carefully.


Last edited by flipy on Mon Apr 04, 2005 7:41 am; edited 1 time in total
Page 2 of 4