Gentoo Forums
Yet another Hybrid Raid 0 / 1 Howto for 2.6 with dmsetup
garlicbread
Apprentice


Joined: 06 Mar 2004
Posts: 182

PostPosted: Sat Oct 30, 2004 11:26 pm    Post subject: Yet another Hybrid Raid 0 / 1 Howto for 2.6 with dmsetup

Yet another Hybrid Raid 0 / 1 Howto for 2.6 with dmsetup

I've been experimenting a lot with setting up a Hybrid Raid array using a couple of 200Gb Maxtor SATA disks via device mapper, and I've found a couple of things that I couldn't find on Google, the Gentoo forums, or the dm-devel list. This relates to dmsetup / dmraid and Hybrid Raid arrays (especially with regard to Raid 1 and dmsetup), so I decided to write this little HowTo.
(It's also for when I forget all about this and have to remember how I set up my Raid arrays in the first place :D)

In my own case I've been booting off a 3rd disk, whose contents I'll eventually copy over to the Raid array. It may also be possible to use the LiveCD, but I've not actually tried that myself yet.
A large amount of the info here has been pieced together from other messages on the Gentoo forums and from experimentation, so please feel free to copy it, or to point out if any part of it is incorrect.
(i.e. I take no responsibility for any loss of data if your PC blows up etc.)
This is also my first HowTo and my first use of BBCode, so apologies if I haven't got the formatting right.
(I'm also starting to think that I've written far too much here for one document :))

Table of Contents
1.0 A bit about Raid in general
1.1 Hybrid Raid
1.2 dmsetup
1.3 dmraid
2.0 Determining the size of an individual disk
2.1 Setting up Raid 1
2.2 Setting up Raid 1 – Disk synchronization
2.3 Setting up Raid 1 – Without synchronization
2.4 Setting up Raid 1 – Specific options for the mirror target
3.0 Setting up Raid 0
4.0 Hiding the Raid / Bios metadata
5.0 Mapping out the partitions
6.0 Automating via scripts
7.0 Performance


1.0 A bit about Raid in general

At the moment there are 3 different types of Raid implementation available under Linux:

  1. Software Raid - this uses the kernel's md driver and user space tools to present a typical /dev/md0 device. This is the Raid support native to Linux, but it's not usually compatible with Windows.
  2. True Hardware Raid - typically only seen on servers, or on machines with a separate hardware PCI Raid add-on card.
  3. The type of raid seen on some of the newer motherboards. It's not true hardware raid, as a lot of the work is still carried out by the CPU and the operating system. For the rest of this document I'll refer to this type as Hybrid Raid; it's the type this HowTo is concerned with.

In my case the motherboard is an Asus A8V Deluxe, which means I have 2 different Hybrid Raid controllers to experiment with: a VIA VT8237 and a Promise FastTrak 20378.

My end goal was to get a dual boot system up and running with Win XP and Gentoo Linux: something that could use the hybrid raid setup so that Win XP and Linux would both be raided and could co-exist. I also wanted to use the VIA controller in preference, as initial indications from Windows benchmarking tools appear to show it is a little faster than the FastTrak controller when using both disks in combination.

It is possible to use Linux software raid for Linux and the Hybrid Raid for Win XP, and the two will be compatible (at least with Raid 1), but only if the superblock option is switched off for software raid, which makes it a pain to set up. Also, since device mapper is closer to the kernel, I was hoping it might perform better, or at the very least be easier to implement.
For Raid 1 purposes I also used the Windows VIA utility after trying different setups with Linux, to confirm that the disks were still considered in-sync by the Hybrid Raid setup.


1.1 Hybrid Raid

Typically with hybrid raid, the setup / initialization is controlled from the bios when you first boot up.
With the controllers I was using (at least with the VIA VT8237), typically 1 or 2 bytes are written near the partition table as an indicator that a hybrid raid array is present.
The metadata that describes how the raid array is set up (size of the array, number of disks, type etc.) appears to be located right at the end of the disk on each of the raid members. I believe this is what the bios reads / writes when initializing the array at bootup.
When you view the disk via DOS / Windows / the bios etc., you see the end result of the raid array, which is a disk slightly shorter than normal (I'd guess this is to prevent the metadata at the end of the disk from being overwritten).

Linux on the other hand doesn't see the disks via the bios: it sees them as non-raided entities, including the metadata at the end.
As an example, with the latest kernel (2.6.9-rc2-love4) my SATA drives show up as /dev/sda and /dev/sdb, as the newer SATA drivers are now part of the SCSI driver set.


1.2 dmsetup

The new 2.6 kernel doesn't currently have support for Hybrid Raid arrays like 2.4 used to have.
For 2.4 there were some drivers that could be used, including a proprietary driver for the VIA system, but for 2.6 everything is now moving towards device mapper.
Device mapper is a kernel feature that is controlled by a user space program called dmsetup.

The way to imagine device mapper / dmsetup: a block device is fed in, it is manipulated, and a resultant block device comes out. You specify the output block device name and a map. The map states which input block devices there are and the type of target (linear / striped / mirror etc.), along with a couple of other parameters. Block devices created by dmsetup usually end up within /dev/mapper/.

As an example, imagine /dev/hda is a single hard disk with 10000 sectors (0 - 9999), and /dev/hda1 is the first partition, starting at sector 63 and running for 5000 sectors (so ending at sector 5062). Accessing sector 0 on hda1 actually accesses sector 63 on hda, and accessing sector 4999 on hda1 actually accesses sector 5062 on hda.

hda and hda1 are both set up automatically when the kernel first boots, but hda1 responds in the same way a linear device map would. Using the above disk, if we ran
Code:
echo "0 5000 linear /dev/hda 63" | dmsetup create testpart

we'd end up with a block device /dev/mapper/testpart that responds exactly the same way as hda1.


1.3 dmraid

One tool I've been experimenting with is dmraid. This calls device mapper with a map to set up Raid arrays automatically, by reading the metadata off the disks in use. However it's still in beta testing at the moment, and dmraid doesn't currently recognize the VIA VT8237.
For the FastTrak 20378 (dmraid's pdc driver) it was able to identify the Raid 0 array I'd set up correctly. However, the Raid 1 array appeared to be only half the size it should be (100Gb instead of 200Gb). This is probably just a bug, but it's something to be aware of (the version used was dmraid-1.0.0-rc4).

homepage is here
http://people.redhat.com/~heinzm/sw/dmraid/

EDIT
I've just found that someone else has made a much better ebuild here
https://bugs.gentoo.org/show_bug.cgi?id=63041

To use it with a Gentoo portage overlay, create the directory /usr/local/portage/sys-fs/dmraid/ and place the ebuild within it.

Make sure that PORTDIR_OVERLAY="/usr/local/portage" is set within /etc/make.conf, then:

Code:
cd /usr/local/portage/sys-fs/dmraid/
ebuild dmraid-1.0.0_rc4.ebuild fetch
ebuild dmraid-1.0.0_rc4.ebuild digest
emerge dmraid


2.0 Determining the size of an individual disk

One of the values we may need in order to create the map is the size of one of the individual raid members (a single disk). This can be viewed in a couple of different ways:

  1. by looking at /sys/block/<block device>/size, assuming that /dev/sda is one of the raid members
    Code:
    cat /sys/block/sda/size

  2. by using the blockdev command
    Code:
    blockdev --getsize /dev/sda


This will show the length of the disk in sectors. For the length of the disk in bytes, multiply this value by 512 (1 sector is typically 512 bytes).
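The conversion can be sketched in the shell; the sector count below is the full-disk figure from my own 200Gb disks, used again later in this HowTo:

```shell
# Sizes from /sys/block/<dev>/size or "blockdev --getsize" are in 512-byte sectors.
SECTORS=398297088
BYTES=$((SECTORS * 512))
GB=$((BYTES / 1000000000))    # decimal Gb, the way disk vendors count
echo "$SECTORS sectors = $BYTES bytes (~${GB}Gb)"
```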


2.1 Setting up Raid 1

Raid 1 is primarily designed for resilience: if one disk fails, there is still another disk with an identical copy of the data.
It is possible to set up Raid 1 using just device mapper, without software raid. What we need to do is create a single block device to represent the Raid 1 array of the 2 disks: we feed 2 disks in and get 1 block device out. Whatever is written to the output block device is written to both disks at the same time, and anything read could be read from either disk.
The sections below explain how to put the map together. The map can be stored in a file and then loaded with
Code:
dmsetup create <device name> <map file>

or you can specify it on the command line, e.g.
Code:
echo "0 398283480 mirror core 2 128 nosync 2 /dev/sda 0 /dev/sdb 0" | dmsetup create testdevice


2.2 Setting up Raid 1 – Disk synchronization

So far the only map I've seen documented for Raid 1 with device mapper is the following:
Code:
0 398283480 mirror core 1 128 2 /dev/sda 0 /dev/sdb 0

The only problem with this is that dmsetup will start trying to synchronize the disks, copying sda to sdb. Sometimes this is something you want, to ensure that both disks have identical data, but for a normal bootup it's not suitable. On my own system, while Windows takes around 1Hr to synchronize the disks, I've worked out it would take around 5Hrs for a pair of 200Gb disks using dmsetup in this way.

Also, from what I've observed, it would appear that data is not currently written to both disks at the same time while the synchronization is taking place. This is noticeable if the synchronization is stopped half way through with dmsetup remove_all.

The only parameters you'll probably need to alter (assuming you want your disks to synchronize):

  1. The length of the array. The figure used above (398283480) is the full length of one of the individual disks in my case; see the section above on how to get this value.
  2. The device nodes /dev/sda and /dev/sdb may be different on your system, but represent the block device for each individual disk.


Also, one other thing to remember is that the first disk specified will be the source and the second disk the destination of the synchronization.
The full parameter list for a Raid 1 map is given in section 2.4 below.

One other consideration is that this presents the full disk as a Raid array, which means the metadata that the bios uses at bootup will also be visible (if this were corrupted, the system could potentially become unbootable).
See the section on hiding the metadata to get around this.


2.3 Setting up Raid 1 – Without synchronization

I spent a long time trying to figure this one out: a way for dmsetup to set up a Raid 1 array without the disks synchronizing, which ideally is what's required for normal use / bootup. I finally figured it out by looking at the map that dmraid had created for the Raid 1 array on my FastTrak controller.

We can up the number of options to 2 and specify "nosync" as the second option:
Code:
0 398283480 mirror core 2 128 nosync 2 /dev/sda 0 /dev/sdb 0

Again, you'll need to adjust the disk length and device nodes for your own setup. This sets up the output device node but doesn't sync the whole disk as above, which is what we want for normal operation.


2.4 Setting up Raid 1 – Specific options for the mirror target

The mirror target uses the following syntax for the table:
<output start> <output length> <target> <log type> <number of options> <... option values> <number of devices> <device name> <offset> ...
The numbers used here are measured in sectors.

The first 2 parameters affect the output block device; the rest affect the devices going into the map.

  1. The first is the offset for the output; this should always be 0.
  2. The second parameter is the length of the output device. For Raid 1 this should typically be the size of a single Raid member.
  3. The target parameter, in this case mirror for Raid 1.
  4. The log type. The only one supported at the moment is "core" (judging from the kernel sources there is also another one called "disk", but I wouldn't try to use it at the moment).
  5. The number of options to feed into the mirror target; this can be 1 or 2 (unless someone is aware of a 3rd option).
  6. The next 1 or 2 parameters are the options themselves. First is the region size (see below for more info on this); then, if the number of options is 2, you can specify nosync as well.
  7. The number of disks going into the map; for Raid 1 this will always be 2.
  8. Finally we specify each block device going in, followed by its offset (for Raid 1 the offset should typically always be 0).


For the region size I've found the optimum value appears to be 128, which is what dmraid uses by default. I tested this by timing a synchronizing map, using
Code:
dmsetup status /dev/mapper/<block device>

to read off the number of blocks completed within a minute.
Anything smaller than 128 appears to have no effect, while anything larger appears to slow things down.
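Reading the sync progress back out of dmsetup status can be scripted. A minimal sketch: the exact status format varies between kernel / device-mapper versions, so both the sample line and the assumption that the progress appears as a "done/total" field are mine, not from the original tools' documentation:

```shell
# Hypothetical "dmsetup status" output for a mirror target (format assumed).
STATUS='0 398283480 mirror 2 8:0 8:16 199141740/398283480'
# Pull out the done/total pair and turn it into a percentage.
DONE=$(echo "$STATUS" | grep -o '[0-9]*/[0-9]*' | cut -d/ -f1)
TOTAL=$(echo "$STATUS" | grep -o '[0-9]*/[0-9]*' | cut -d/ -f2)
PCT=$((DONE * 100 / TOTAL))
echo "resync: $DONE of $TOTAL regions (${PCT}%)"
```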


3.0 Setting up Raid 0

Raid 0 has the advantage of using both disks for maximum capacity (2 x 200Gb = 400Gb). Also, depending on the size of the file (for large files), an increase in read performance may be obtained.
The disadvantage is that if one disk fails, the whole array is lost. This means the resilience is half that of a single disk (in other words, make sure you have lots of backups).

The data is striped across both disks. E.g. with a 64K stripe size, the first 64K is written to the first disk, the second 64K to the second, the third 64K back to the first disk just after the first stripe, and so on in an alternating fashion.

We can use device mapper in the same way to set up access to a Raid 0 array. What we need to do is create a single block device to represent the Raid 0 array of the 2 disks: we feed 2 disks in and get 1 block device out.

To keep things simple, we take the full size of a single disk, e.g. 398283480 sectors (see the sections above on how to obtain this), and multiply it by 2 (398283480 * 2 = 796566960). We'll use this figure as the full size of the Raid 0 array:
Code:
0 796566960 striped 2 128 /dev/sda 0 /dev/sdb 0


  1. In this example the 1st parameter should always be 0, as this is the start offset for the resultant output block device.
  2. The next parameter is the full size of the raid array as created; this is one you will need to set based on the size of your own disks.
  3. This parameter specifies a striped target, which is always required for Raid 0.
  4. This parameter specifies the number of disks involved; in most cases it is 2.
  5. This parameter is linked to the stripe size: if, for example, you used a stripe size of 64K when creating the array in the bios menu, this value will be (64 * 2 = 128), or for a 32K stripe, (32 * 2 = 64).
  6. Finally we specify each disk followed by its offset; the offset is the number of sectors to skip before reading / writing the first stripe on that disk.

The order of the source disks is important for Raid 0. Usually the first disk picked up by Linux will be the one with the first stripe on it: e.g. if the raid members are sda and sdb then sda usually comes first, or for hde and hdg, hde would usually have the first stripe. However, this might not always be the case and will depend on the raid controller you are using.

In my own case I've used an offset of 0 for both source disks, but depending on your controller it is sometimes necessary to set the offset for the 2nd source disk to a value other than 0. E.g. in one thread on the Gentoo forums I've seen someone mention that a sector offset of 10 is required for the second disk on the HPT374 controller.

To test that dmsetup has mapped the Raid 0 array the same way as the bios:

  1. Create a partition with a filesystem on it within DOS / Windows.
  2. Use dmsetup to access the array within Linux.
  3. Set up the linear maps for the partitions (see the sections below).
  4. Attempt to mount the filesystem.


This is about the only way I know of to test that the array is working correctly via dmsetup; dmraid may be a better option if it works with your system. Also, if you want to shorten the overall size of the array to hide the Raid bios metadata, see the sections below.


4.0 Hiding the Raid / Bios metadata

In the above examples for setting up Raid 1 or Raid 0 arrays, the full span of the disk has been used to present the raid array as a block device. One thing to note, however, is that in some cases the bios stores its information about the raid array at the end of the disk. If a partition created on the array within Linux crosses over into this area, you risk overwriting this data, which could potentially make the system unbootable (Grub reads its information via the bios, which would in turn be unable to read the array if the metadata had been corrupted).

If you've used a Windows or DOS utility such as PQMagic to set up your partitions, you probably don't need to worry about this: these utilities see the array via the bios, which displays the Raid array as slightly shorter than the physical disk, so the last partition won't cross over into this area.

If you want to make sure that the resultant device nodes for the array under Linux cannot see the metadata, then we also need to make the Raid array seem slightly shorter than the full span. This way, any partitioning tools used under Linux (such as sfdisk or fdisk) won't be able to see or allocate the space used by the metadata, and any backup / restore utility that affects the entire disk won't interfere with it either.

There are 2 ways to do this:

  1. If your raid controller is supported, use dmraid, as it is able to read the specific values off the metadata and set the correct lengths.
  2. Do it manually.


The manual method (unfortunately this relies on using DOS or Windows):

  1. Use a Windows or DOS utility (such as PQMagic) to create a partition located right at the end of the Raid array.
  2. Boot into Linux and set up the Raid array using the full span of the disk to begin with.
  3. Run sfdisk -l -uS to find the end sector (last sector used) of the partition created at the end of the array. Since the partition was created under DOS / Windows, it won't go right to the end of the disk.


For Raid 1:
In my case, as an example, the full size of the disk was 398297088 sectors, but the last partition created under PQMagic ended at sector 398283479 on a Raid 1 array. Now add 1 to this value (as the partition needs to sit within the disk): 398283479 + 1 = 398283480.
398283480 is now the value I use for the length of the Raid 1 array,
e.g.
Code:
0 398283480 mirror core 2 128 nosync 2 /dev/sda 0 /dev/sdb 0


For Raid 0:
You could just add 1 the same as above, but to be safe when I tried this myself, I wanted to make sure that the length of the array was an even number of whole stripes. This may be over-complicating things a bit, but as an example:
size of individual disk - 398297088
Full size of Raid 0 Array - (398297088 x 2) = 796594176
end sector of the last partition on the disk 796583024
for 64K stripe size 65536 / 512 = 128 sectors for each stripe on the disk
796583024 / 128 = 6223304.875 stripes
rounding this up to an even whole number = 6223306 stripes
working backwards 6223306 * 128 = 796583168 sectors

which is the value I've used in the raid map for Raid 0
Code:
0 796583168 striped 2 128 /dev/sda 0 /dev/sdb 0

I'm not sure if this is strictly necessary, but it's one way of looking at it.
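The rounding above can be done in the shell; this sketch reproduces the arithmetic from my worked example:

```shell
# Figures from this section: end sector of the last partition, and a
# 64K stripe = 128 sectors per stripe.
END_SECTOR=796583024
STRIPE=128
# Round up to the next whole stripe...
STRIPES=$(( (END_SECTOR + STRIPE - 1) / STRIPE ))
# ...then up to an even number of stripes, so each disk gets the same share.
[ $((STRIPES % 2)) -ne 0 ] && STRIPES=$((STRIPES + 1))
LENGTH=$((STRIPES * STRIPE))
echo "array length: $LENGTH sectors"
```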

Realistically, I've found that the Win / DOS tools won't go right to the end of the array when creating the partition, which means the array is probably slightly longer than this value. But since we're only talking about a couple of Mb or so, and we want to be sure to hide the bios Raid metadata, this appears to be a safe value to use. (dmraid is more accurate in this regard, assuming it recognizes your setup.)

Also, something to note: some hybrid raid controllers have an option for a "Gigabyte Boundary" in the bios setup for Raid 1. All this means is that the bios will shorten the length of the array to the nearest Gb, so that a replacement disk which is not exactly the same size as the old one will still function in the array, as long as it is the same length in Gb. This can have the effect of making a Raid 1 array appear shorter than it might otherwise be, and will also affect the end sector of the last partition on the disk.


5.0 Mapping out the Partitions

While the above maps for Raid 1 and Raid 0 will create device nodes for the entire array within /dev/mapper, we still need to create device nodes for the individual partitions, as this isn't done automatically. These are the equivalents of sda1, sda2 for sda, or hda1, hda2 for hda etc.

The easy way to do this is to just use the partition-mapper script mentioned at the end of this HowTo.

Assuming the raid array has been set up as /dev/mapper/raidarray, and that you've already used a partitioning tool to set up the partitions on it, we need a map with a linear target.

first we run
Code:
sfdisk -l -uS /dev/mapper/raidarray


in my case with a test setup I end up with

Code:

   Device Boot    Start       End   #sectors  Id  System
/dev/mapper/raidarray1            63 102414374  102414312   c  W95 FAT32 (LBA)
/dev/mapper/raidarray2     102414375 204828749  102414375   c  W95 FAT32 (LBA)
/dev/mapper/raidarray3     204828750 307243124  102414375   c  W95 FAT32 (LBA)
/dev/mapper/raidarray4     307243125 398283479   91040355   c  W95 FAT32 (LBA)                                         


As an example, to create the device node for the first partition:
Code:
echo "0 102414312 linear /dev/mapper/raidarray 63" | dmsetup create raidarray1


We've used 2 values here: the first (102414312) is the length, or #sectors, of the partition; the second (63) is the offset from the beginning of the raidarray device node, taken from the output of sfdisk.
Assuming the partition has a filesystem on it, we can now mount /dev/mapper/raidarray1.

If you want to be really clever, you could feed the output of the linear map into cryptsetup to encrypt the partition as well (but there are already other HowTos for how to do that).
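Those per-partition map lines can also be generated straight from the sfdisk listing with a little awk. This sketch (a simplified version of what the partition-mapper script below does; it ignores the boot-flag column) uses the sample table from above:

```shell
# Two rows of the "sfdisk -l -uS /dev/mapper/raidarray" output shown earlier.
TABLE='/dev/mapper/raidarray1            63 102414374  102414312   c  W95 FAT32 (LBA)
/dev/mapper/raidarray2     102414375 204828749  102414375   c  W95 FAT32 (LBA)'
# Emit one linear map line per partition: "0 <#sectors> linear <array> <start>"
MAPS=$(echo "$TABLE" | awk '{ print 0, $4, "linear /dev/mapper/raidarray", $2 }')
echo "$MAPS"
```

Each output line could then be piped into dmsetup create as in the example above.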


6.0 Automating via Scripts

There are a couple of scripts I've spotted on another thread which can be useful for setting up raid maps and partition maps. I'm not taking credit for either of these, but I did modify one slightly to be more compatible with sfdisk (sometimes sfdisk will list partitions for a device node by appending p1 to the name, but for device nodes with long names it will sometimes just append a number without the p).

First I created a directory called /etc/dmmaps and placed the first 2 scripts within it, while the last one goes in /etc/init.d.

dm-mapper.sh script

Code:
#!/bin/sh

SELF=`basename $0`
BASEDIR=`dirname $0`

if [[ $# < 1 || $1 == "--help" ]]
then
        echo usage: $SELF mapping-file
        exit 1;
fi

# setup vars for mapping-file, device-name and device-path
FNAME=$1
NAME=`basename $FNAME .devmap`
DEV=/dev/mapper/$NAME

# create device using device-mapper
dmsetup create $NAME $FNAME

if [[ ! -b $DEV ]]
then
        echo $SELF: could not map device: $DEV
        exit 1;
fi

# create a linear mapping for each partition
$BASEDIR/partition-mapper.sh $DEV


partition-mapper.sh script

Code:
#!/bin/sh

SELF=`basename $0`

if [[ $# < 1 || $1 == "--help" ]]
then
        echo usage: $SELF map-device
        exit 1;
fi

NAME=$1
if [[ ! -b $NAME ]]
then
        echo $SELF: unable to access device: $NAME
        exit 1;
fi

# create a linear mapping for each partition
sfdisk -l -uS $NAME | awk '/^\// {
        if ( $2 == "*" ) { start = $3; size = $5; }
        else { start = $2; size = $4; }
        if ( size == 0 ) next;
        ("basename " $1) | getline dev;
        print 0, size, "linear", base, start | ("dmsetup create " dev); }' base=$NAME


This script I placed within /etc/init.d/
dmraidmapper script

Code:
#!/sbin/runscript

depend() {
        need modules
}

start() {
        ebegin "Initializing software mapped RAID devices"
        # dm-mapper.sh handles one mapping-file at a time, so loop over them
        for MAP in /etc/dmmaps/*.devmap ; do
                /etc/dmmaps/dm-mapper.sh $MAP
        done
        eend $? "Error initializing software mapped RAID devices"
}

stop() {
        ebegin "Removing software mapped RAID devices"
        dmsetup remove_all
        eend $? "Failed to remove software mapped RAID devices."
}


Also, you can place a text file called device.devmap (or whatever you want to call it) within the /etc/dmmaps directory, containing a raid map.
E.g. I have one called via_rd1.devmap that contains
Code:
0 398283480 mirror core 2 128 nosync 2 /dev/sda 0 /dev/sdb 0


by calling
Code:
cd /etc/dmmaps
./dm-mapper.sh via_rd1.devmap

this will set up the array and the partitions within /dev/mapper.

dm-mapper.sh will automatically call partition-mapper.sh by default. partition-mapper.sh takes one parameter as input, the block device of the raid array, e.g.
e.g.
Code:

partition-mapper.sh /dev/mapper/via_rd1

This will create the device nodes for the individual partitions automatically.

You can start dmraidmapper as a service manually with
Code:
/etc/init.d/dmraidmapper start

or add it to your default or boot runlevel; it will then hunt around for any raid maps called *.devmap within the /etc/dmmaps directory and set them up automatically.
Please note that if your root filesystem is on the array, you'll probably need to set up a manual initrd that contains these scripts, the devmaps and sfdisk, to make the root filesystem available for boot.


7.0 Performance

One interesting thing I've also been looking into is the performance of the different methods of accessing the disks, to see which is fastest.
zcav is part of the bonnie++ toolset; it reads 100Mb at a time from the block device and outputs the K/s for each 100Mb of data.
Note / disclaimer: these are not precise benchmarks. For Raid 0 I've always used a stripe size of 64K; better results may be obtained in certain circumstances with different stripe sizes. Also, measuring the performance in this way is at the disk level, not at any particular software level.

Using the form
Code:
zcav -c3 /dev/<input device> >output_result.dat

I then plotted the data onto a graph using gnuplot.

The steps in the graph appear to represent the different zones on the disk. The way I interpret this (I could be wrong): steps on the graph indicate that there shouldn't be a bottleneck between the disk and zcav, while a flat line may be an indication of a bottleneck in the Raid implementation or the SATA raid controller on the motherboard.

This gave some very interesting results:

  1. Accessing a single disk from the Promise or VIA chipset gives the same result - the full speed of the disk as it zones down.
  2. When accessing both disks in combination for Raid 0 via device mapper, the Promise controller appears to bottleneck at around 95Mb/s as a straight line, while the VIA controller appears to use the full capacity of the disks, starting at 120Mb/s and slowly zoning down.
  3. Raid 1 via device mapper follows the performance of a single disk almost identically. I'm wondering if this may improve in the future, if an option appears for data to be written to both disks at the same time but read from different disks in a striped fashion to improve read performance.
  4. Software Raid 1 appears to follow the zoning of the disk as a fuzzy line, just under the performance of a single disk (in other words, it's probably around 3Kb/s slower than using device mapper, which isn't much of a difference).
  5. For software Raid 0 compared to device mapper Raid 0 there appears to be a large difference (at least for a 64K stripe): device mapper appears to be around 30Mb/s better off at the beginning of the disk, while software Raid 0 flat-lines further down the graph.


For the Win XP results I used diskspeed32 to get the raw data, although it reads 10Mb at a time instead of 100Mb. The X axis (disk position) also appeared to be to a different scale for diskspeed32, so I had to write a small C program to multiply the X axis by a certain factor to get the graphs to match up. Considering that the performance of a single disk under XP and Linux appears to match, I believe I've got this right.

I've included a picture of the graph and the raw data if anyone's interested:
Graph
Raw Data

For gnuplot I just edited the gp.txt file to include / exclude different results, and used
Code:
load "gp.txt"

within gnuplot to load up / display the graph.
Next I'm going to see if I can get grub to work properly, along with setting up an initrd, and to compare the bootup times for Raid 0 / Raid 1 as I set the array up for final use :D
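For anyone wanting to reproduce the plots: a gp.txt along these lines should work. The filenames here are hypothetical examples (not from my actual setup), and I'm assuming zcav's usual two-column output of disk position and throughput:

```gnuplot
# Hypothetical gp.txt - edit the file list to include / exclude results
set xlabel "disk position"
set ylabel "throughput (K/s)"
plot "via_raid0.dat" using 1:2 with lines title "VIA Raid 0", \
     "promise_raid0.dat" using 1:2 with lines title "Promise Raid 0"
```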


Last edited by garlicbread on Sun Jan 23, 2005 8:56 pm; edited 1 time in total
anderlin
Tux's lil' helper


Joined: 29 Nov 2003
Posts: 149
Location: Trondheim, Norway

PostPosted: Fri Jan 14, 2005 1:42 pm    Post subject:

Thank you very much for this extensive post!

I have the VIA VT8237 and Promise FastTrak 20378 myself, and would very much like to dual-boot with Raid 0.

I have some difficulties drawing conclusions, however. Maybe your post was too theoretical for me. So I would like to ask some questions:

1. Is it working? At some point you wrote about a third disk - did you have to use this in the end? Is it necessary for the setup?

2. I didn't understand all of your plots. Is it best to use the VIA or the Promise controller? I understood I have to do more manually with VIA, but it gives greater performance, right?

3. How about grub? Have you got that working?

Again thank you for this great post!

regards, Anders Båtstrand
anderlin

PostPosted: Mon Jan 17, 2005 12:09 am    Post subject:

I have now figured out everything except grub. Any progress there?

I use 64-bit, so I cannot use lilo.
garlicbread

PostPosted: Mon Jan 17, 2005 12:52 pm    Post subject:

I wrote the howto for users whereby dmraid wouldn't work and would have to use dmsetup to manually setup the array
VIA is faster than promise at the moment on the Asus board

with earlier versions of dmraid it would work with promise but not with Via
but with the newer version of dmraid this appears to now support the Via chipset as well (5f)
http://people.redhat.com/~heinzm/sw/dmraid/src/
ebuild here
https://bugs.gentoo.org/show_bug.cgi?id=63041

I've got the drives working, in that I can copy data to and from them in either Raid 0 or Raid 1, and they're still valid / recognisable under Windows Raid as well
Also I was able to get grub to recognise the Raid array in the boot menu
But I'm still booting off a temporary 3rd IDE disk at the moment

I need to set up a custom initrd to get the thing to boot, and document how I did it as well (still haven't got around to it yet)
as the initrd will need to call either dmsetup or dmraid to set up the array
prior to accessing any of the partitions on the disk

Also I've heard rumours that the 5f version of dmraid has problems mapping out the 6th partition node and beyond, so we may still need to map the partitions out manually during the initrd phase
but at least if it maps the main drive node correctly that solves a whole heap of messing about

For grub a couple of things to note
1. While within Linux, grub sees the disks via the Linux OS, so it sees them independently, i.e. as separate disks
2. When at the boot menu (before the OS has loaded), grub sees the entire Raid array as a single disk, as it's looking through the BIOS instead (something to bear in mind when setting up grub.conf)

The idea is that since grub can read the array via the BIOS at bootup, it should be able to read the kernel and initrd image files off the disk and into memory

Once the kernel is loaded into memory it calls an initial script within the initrd
At this point the kernel cannot see the array, only the separate disks (the Linux kernel doesn't use the BIOS to access hard disks), so it only has access to the data within the initrd and cannot see the array yet
so the next step for the initrd is to run dmraid or dmsetup (which needs to be included in the initrd archive) to create the device nodes for the array before booting to the root partition
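To make the sequence concrete, a minimal linuxrc along these lines might look as follows. This is only a sketch, not a tested script: the table lines are example values in the style of anderlin's figures, the sector counts and device names must come from your own array, and a real initrd script (such as the modified Gerte Hoogewerf ones linked later in the thread) handles real_root and error cases properly.

```shell
#!/bin/sh
# /linuxrc sketch -- all sector counts and device names are examples
mount -t proc proc /proc

# The kernel only sees the individual disks at this point, so rebuild
# the array node first, then map the root partition out of it
echo "0 468883296 striped 2 128 /dev/sda 0 /dev/sdb 0" | dmsetup create hdd
echo "0 30732282 linear /dev/mapper/hdd 53464383" | dmsetup create root

# Hand control over to the real root filesystem
umount /proc
mount -o ro /dev/mapper/root /newroot
cd /newroot
pivot_root . initrd
exec chroot . /sbin/init
```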
benoitc
n00b


Joined: 02 Aug 2003
Posts: 43
Location: Paris (France)

PostPosted: Tue Jan 18, 2005 8:07 am    Post subject: more help with dmsetup Reply with quote

I read the tutorial but can't figure out how to set up raid 1 exactly.

I have two hard disks, /dev/sda and /dev/sdb; each has the same size: 241254720.

The partition table is:

Code:

/dev/sda1 /boot ext2 defaults 0 1
/dev/sda2 swap swap defaults 0 0
/dev/sda3 / ext3 defaults 0 1
/dev/sda4 /home ext3 defaults 0 1


and raid 1 has been created by the BIOS.

So how do I use dmsetup to create the raid 1 device? And how do I have this device loaded each time I boot Linux, and boot from it? Any more help would be appreciated, thanks in advance :)
garlicbread
Apprentice


Joined: 06 Mar 2004
Posts: 182

PostPosted: Tue Jan 18, 2005 12:19 pm    Post subject: Reply with quote

If you don't have dual booting with windows to worry about
then it may be easier just to use software Raid 1 using the md driver
as the performance for device mapper compared to software raid 1 is fairly close (although there is a fair difference for Raid 0)

Are you using the Raid support builtin to the motherboard via the Bios?
if so what chipset is it?
have you tried dmraid? (as that's a lot easier to setup)
the above is meant for those situations where dmraid doesn't work

although even if you use dmraid or dmsetup, more than likely you'll need to set up a custom initrd to boot off the array, and that's something I'm still working on
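For the dmsetup route on Raid 1 itself (the subject of the howto's Raid 1 sections), the mirror target table has roughly the following shape. A sketch only: 241254720 is the disk size benoitc quoted and sda/sdb are his devices, a BIOS raid set normally reserves a few sectors at the end for metadata so the real length will be slightly shorter, and the nosync flag (skip the initial synchronisation) may or may not be what you want:

```shell
# Raid 1 mirror table for dmsetup (sketch; numbers are placeholders).
# "core 2 64 nosync" = in-memory dirty log, 2 log arguments, a region
# size of 64 sectors, and skip the initial resync.
TABLE="0 241254720 mirror core 2 64 nosync 2 /dev/sda 0 /dev/sdb 0"
echo "$TABLE"
# as root, the device node would then be created with:
#   echo "$TABLE" | dmsetup create raid1
```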
dalamar
Tux's lil' helper


Joined: 13 Mar 2004
Posts: 110

PostPosted: Tue Jan 18, 2005 2:46 pm    Post subject: Reply with quote

Maybe only a stupid question ...

Quote:
for 64K stripe size 64000 / 512 = 125 sectors for each stripe on the disk


Why not:
for 64K stripe size 65536 / 512 = 128 sectors for each stripe on the disk

Dalamar
anderlin
Tux's lil' helper


Joined: 29 Nov 2003
Posts: 149
Location: Trondheim, Norway

PostPosted: Tue Jan 18, 2005 7:47 pm    Post subject: Reply with quote

Are any of you trying this with 64-bit? I'm starting to think my problems are related to that...
anderlin
Tux's lil' helper


Joined: 29 Nov 2003
Posts: 149
Location: Trondheim, Norway

PostPosted: Tue Jan 18, 2005 9:04 pm    Post subject: Reply with quote

I now have a working initrd, and I boot windows x64 xp and gentoo amd64 from the same raid0 array!

Previously I made the initrd on a 32-bit machine, but after some changes I could make it on the 64-bit one, and now it works. I will post back my settings as soon as I get time (later today or tomorrow).

Regards, Anders Båtstrand
garlicbread
Apprentice


Joined: 06 Mar 2004
Posts: 182

PostPosted: Wed Jan 19, 2005 12:07 pm    Post subject: Reply with quote

dalamar wrote:
Maybe only a stupid question ...

Quote:
for 64K stripe size 64000 / 512 = 125 sectors for each stripe on the disk


Why not:
for 64K stripe size 65536 / 512 = 128 sectors for each stripe on the disk

Dalamar


That's a good question
I think I originally got the figure from another thread on the forum
but looking at it now I'm starting to wonder
I think I'll try writing a bit of code to compare the chunks to see how many sectors are being used per stripe chunk
anderlin
Tux's lil' helper


Joined: 29 Nov 2003
Posts: 149
Location: Trondheim, Norway

PostPosted: Wed Jan 19, 2005 7:07 pm    Post subject: Reply with quote

This is what I did:

I installed Windows on the second partition, and made the other partitions from within Windows. I left some free space at the end, to be sure the raid metadata didn't get overwritten.

Then I booted from a livecd (2004.1 is the only one that works for me), and got the size of the disks with the following:

Code:

# blockdev --getsize /dev/hde
234441648


This is the number of sectors on my disk. Change /dev/hde to your device. Both my disks are the same size. Be aware that the livecd and the installed kernel often give the same disks different names. For me it was hde and hdg with the livecd, and sda and sdb with the installed kernel.

Then I used dmsetup to map the disk to /dev/mapper/hdd:
Code:

# echo "0 468883296 striped 2 128 /dev/hde 0 /dev/hdg 0" | dmsetup create hdd


Change 468883296 to 2 times the number you got from blockdev. Change 128 according to your array's stripe size in sectors (a 64 KB stripe is 64 x 2 = 128 sectors).
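The two numbers above can be derived mechanically; a small sketch (234441648 is the blockdev figure from this post and 64 the stripe size in KB; substitute your own values):

```shell
DISK_SECTORS=234441648        # from: blockdev --getsize /dev/hde
STRIPE_KB=64                  # stripe size chosen in the raid BIOS

ARRAY_SECTORS=$(( DISK_SECTORS * 2 ))        # Raid 0 over two equal disks
CHUNK_SECTORS=$(( STRIPE_KB * 1024 / 512 ))  # 512-byte sectors per chunk

echo "0 $ARRAY_SECTORS striped 2 $CHUNK_SECTORS /dev/hde 0 /dev/hdg 0"
```

which prints exactly the table line fed to dmsetup above.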

Then I read the partition table from /dev/mapper/hdd:

Code:

# sfdisk -l -uS /dev/mapper/hdd
[...]

   Device Boot    Start       End   #sectors  Id  System
/dev/mapper/hdd1   *        63    208844     208782  83  Linux
/dev/mapper/hdd2        208845  51407999   51199155   7  HPFS/NTFS
/dev/mapper/hdd3      51408000 468873089  417465090   f  W95 Ext'd (LBA)
/dev/mapper/hdd4             0         -          0   0  Empty
/dev/mapper/hdd5      51408063  53464319    2056257  82  Linux swap / Solaris
/dev/mapper/hdd6      53464383  84196664   30732282  83  Linux
/dev/mapper/hdd7      84196728 468824894  384628167  83  Linux


Change the following commands to match your table:

Code:

echo "0 208782 linear /dev/mapper/hdd 63" | dmsetup create boot
echo "0 2056257 linear /dev/mapper/hdd 51408063" | dmsetup create swap
echo "0 30732282 linear /dev/mapper/hdd 53464383" | dmsetup create root
echo "0 384628167 linear /dev/mapper/hdd 84196728" | dmsetup create media
echo "0 51199155 linear /dev/mapper/hdd 208845" | dmsetup create windows

Then you can install Gentoo the usual way:
Code:

# mkreiserfs /dev/mapper/root
# mkfs.ext3 /dev/mapper/boot
# mkswap /dev/mapper/swap
# swapon /dev/mapper/swap

# mount /dev/mapper/root /mnt/gentoo
# mkdir /mnt/gentoo/boot
# mount /dev/mapper/boot /mnt/gentoo/boot
[ ... continue as normal ... ]


Remember to compile into the kernel support for your sata controller, device-mapper, ramdisk, initrd and ext2. Here is my .config

Then install grub:
Code:

# grub --device-map=/dev/null
grub> device (hd0,0) /dev/mapper/boot
grub> device (hd0) /dev/mapper/hdd
grub> root (hd0,0)
grub> setup (hd0,0)
grub> quit

If this doesn't work, try with a more recent version of grub. I had to use sys-boot/grub-0.95.20040823.

Then download the following files, which are modified versions of the ones made by Gerte Hoogewerf:

http://anderlin.dyndns.org/filer/mkinitrd
http://anderlin.dyndns.org/filer/linuxrc

Change the following lines in linuxrc to suit your needs:
Code:

echo "0 468883296 striped 2 128 /dev/sda 0 /dev/sdb 0" | dmsetup create hdd
echo "0 208782 linear /dev/mapper/hdd 63" | dmsetup create boot
echo "0 2056257 linear /dev/mapper/hdd 51408063" | dmsetup create swap
echo "0 30732282 linear /dev/mapper/hdd 53464383" | dmsetup create root
echo "0 384628167 linear /dev/mapper/hdd 84196728" | dmsetup create media
echo "0 51199155 linear /dev/mapper/hdd 208845" | dmsetup create windows

Then install busybox (I used busybox-1.00-r1) and make the initrd:
Code:

# USE="static" emerge busybox
# chmod +x mkinitrd
# ./mkinitrd linuxrc initrd
# cp -v linuxrc initrd /boot/

This is my grub.conf:
Code:

timeout 10
default 0

title GNU/Linux
root (hd0,0)
kernel /kernel-2.6.9-gentoo-r14 root=/dev/ram0 real_root=/dev/mapper/root init=/linuxrc
initrd /initrd

title Windows
rootnoverify (hd0,1)
chainloader +1

Then it worked for me.

(sorry for any bad grammar, and bad layout)
garlicbread
Apprentice


Joined: 06 Mar 2004
Posts: 182

PostPosted: Sun Jan 23, 2005 8:31 pm    Post subject: Reply with quote

I've spotted a couple of things recently

1. I've checked and the number of sectors within a 64K chunk is 128 (not 125)
i.e. a chunk is 65536 bytes long
thinking about it, the option we pass to dmsetup is the number of sectors
and there are 2 sectors for each KB

2. using dmraid (version 5f) in combination with the Via chipset for Raid 0
dmraid appears to always use a chunk size of 32K
so if the chunk size is set to 64K or 16K when creating the array within the BIOS, it won't be mapped properly using dmraid, but should be okay using dmsetup (as you're manually specifying the chunk size)
But at least we can use it to gain the sector length of the whole array

3. dmraid is okay at mapping the 4 primary partitions
and/or an extended partition
but not at mapping partitions within an extended partition (they just don't show up)
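In the meantime, the linear tables for the missing partitions can be generated from the sfdisk output rather than typed by hand. A sketch (assumptions: the array node is /dev/mapper/hdd, and the two sample lines in the heredoc stand in for real `sfdisk -l -uS /dev/mapper/hdd` output):

```shell
# Turn "sfdisk -l -uS" partition lines into dmsetup linear tables.
# The boot-flag column ("*") shifts the Start / #sectors fields by one.
awk '$1 ~ /\/dev\// {
    start = ($2 == "*") ? $3 : $2
    size  = ($2 == "*") ? $5 : $4
    if (size > 0)
        printf "0 %s linear /dev/mapper/hdd %s\n", size, start
}' <<'EOF'
/dev/mapper/hdd1   *        63    208844     208782  83  Linux
/dev/mapper/hdd5      51408063  53464319    2056257  82  Linux swap
EOF
```

Each printed line can then be piped into `dmsetup create <name>`, exactly as in anderlin's post.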
Deep-VI
n00b


Joined: 09 Jan 2005
Posts: 18

PostPosted: Mon Jan 31, 2005 7:57 pm    Post subject: Reply with quote

Wonderful information - I had been trying to get this to work on my own using other resources, but then I found this and similar threads and really made some progress.

I have the Asus IC-7 which uses the ICH5 chipset. I configured the onboard RAID to create RAID-0 across 2 80G drives.

My partition scheme is:
40G WinXP
64M BOOT
2G SWAP
(extended partition begins here)
10G ROOT
40G HOME
XXX unpartitioned for future use

Except for the partition scheme, I followed the steps outlined by anderlin closely. I, like a lot of others, cannot use dmraid because it does not properly map some extended partitions. Using dmsetup manually, I can properly map all partitions on the array.

One thing happened to me that I've seen others complain about. When specifying the root partition inside GRUB, I had to use the geometry command to tell it how big my array was. I used (total array sectors / 255 / 63). After that, it had no complaints and could see all the mapped partitions.
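The cylinder figure for grub's geometry command works out like this (a sketch; 468883296 is anderlin's whole-array sector count from earlier in the thread, so substitute your own):

```shell
TOTAL_SECTORS=468883296    # whole-array size in sectors (example value)
# cylinders = total sectors / heads / sectors-per-track,
# with the conventional 255 heads and 63 sectors per track
echo $(( TOTAL_SECTORS / 255 / 63 ))
```

The result would then be passed at the grub prompt as, for example, `geometry (hd0) <cylinders> 255 63`.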

I can successfully boot XP. However, I am fighting an init script issue with Gentoo. The initrd runs through successfully and creates all the needed /dev/mapper devices (verified by modifying the linuxrc script to 'ls' them after creation), but they poof sometime after real_root is mounted and Gentoo starts up. I'm running pure udev, with no /dev tarball.

I saw 1 or 2 other people having the same problem in other threads, but no solutions posted. I got to this point late last night and had to quit to sleep. It's been very very educational - I knew nothing about initrd scripts, dev-mapper, or RAID metadata before this. Now I guess it's time to probe the Gentoo boot scripts!
rone69
n00b


Joined: 01 Feb 2005
Posts: 1
Location: Faenza (RA) Italy

PostPosted: Tue Feb 01, 2005 11:06 am    Post subject: Reply with quote

Ciao,

Is it possible to share the same disks in raid0 between 64-bit Linux and XP (I need WinXP for my job) without data loss?

Excuse my bad English

thx
garlicbread
Apprentice


Joined: 06 Mar 2004
Posts: 182

PostPosted: Tue Feb 01, 2005 1:08 pm    Post subject: Reply with quote

Quote:
One thing happened to me that I've seen others complain about. When specifying the root partition inside GRUB, I had to use the geometry command to tell it how big my array was. I used (total array sectors / 255 / 63). After that, it had no complaints and could see all the mapped partitions.


That's interesting
I'm guessing I've not seen that before as I usually just edit the grub.conf file directly

Quote:
I can successfully boot XP. However, I am fighting an init script issue with Gentoo. The initrd runs through successfully and creates all the needed /dev/mapper devices (verified by modifying the linuxrc script to 'ls' them after creation), but they poof sometime after real_root is mounted and Gentoo starts up. I'm running pure udev, with no /dev tarball.

I saw 1 or 2 other people having the same problem in other threads, but no solutions posted. I got to this point late last night and had to quit to sleep. It's been very very educational - I knew nothing about initrd scripts, dev-mapper, or RAID metadata before this. Now I guess it's time to probe the Gentoo boot scripts!


I think the default udev configuration under Gentoo doesn't map out the device-mapper nodes within /dev/mapper properly, as there are similar problems at bootup getting the EVMS or LVM device nodes (which also use device mapper) to appear in the correct place with pure udev
I'm not sure how your initrd was created, but I think some initrds get around the problem by just having the device nodes created manually using mknod when the initrd was built initially

there's a howto here that relates to EVMS / LVM that might be relevant
https://forums.gentoo.org/viewtopic.php?t=263996&highlight=

in your case, if you're not using LVM or EVMS, then
you may need to use a rule such as
KERNEL="dm-[0-9]*", PROGRAM="/etc/udev/scripts/dmmapper.sh %M %m", NAME="mapper/%c{1}"
within /etc/udev/rules.d/10-local.rules

although you'll need to setup the dmmapper.sh script (see howto)
and install multipath tools for the devmap_name binary

a simpler way without multipath tools or the script is to use
KERNEL="dm-[0-9]*", NAME="mapper/%k"
But all the device names under mapper will be called dm-0 through dm-9 (instead of their actual names)
Deep-VI
n00b


Joined: 09 Jan 2005
Posts: 18

PostPosted: Wed Feb 02, 2005 2:56 am    Post subject: Reply with quote

Very nice, garlic - I had suspected udev in the back of my mind, and thank you for nudging me in the right direction. It allowed me to save time and narrow down my search for answers.

Since this might help others:
In /etc/udev/rules.d/50-udev.rules, I commented out the KERNEL="dm-[0-9]*", NAME="" line and uncommented the similar line below it that calls /sbin/devmap_name...

Next, I downloaded and installed the latest multipath tools from http://christophe.varoqui.free.fr/multipath-tools/ (this package wouldn't compile without first emerging device-mapper for the libdevmapper library).

The next reboot went without a problem. In /dev, the device-mapper devices are showing up as dm-0 through 5, and in /dev/mapper are the custom devices that my initrd creates (boot, home, swap, etc). Both have the same majors and minors.

It was a fun few days of troubleshooting and learning and the boost in speed is definitely worth it!
garlicbread
Apprentice


Joined: 06 Mar 2004
Posts: 182

PostPosted: Wed Feb 02, 2005 9:12 am    Post subject: Reply with quote

There's also an ebuild for multipath-tools located here, for info
https://bugs.gentoo.org/show_bug.cgi?id=71703
I think this has the correct depends to get it to compile correctly, but you'll have to use a portage overlay to use it

I'm eager to get this working with Raid 0 now that I've got a couple of those 74Gb 10K super duper Raptor drives (180Mbps according to HDTach in Win O_o)
Although I'm still getting some freaky results trying to use them with dmraid (although I think I know why). Also I've nearly finished a conversion bash script that should make it easier to create initrd / initramfs archives from an existing directory containing files
anestis
n00b


Joined: 25 Feb 2004
Posts: 6

PostPosted: Sun Feb 06, 2005 10:23 pm    Post subject: Reply with quote

I downloaded the livecd with the dmraid support from http://tienstra4.flatnet.tudelft.nl/~gerte/gen2dmraid/ but I'm having some problems:

Hello,

I'm new to Linux and I'm trying to set up Gentoo on my 2x120GB Intel ICH5-R system (RAID-0)

I wanted to install kernel 2.6 so I wanted to go with the dmraid instead of the other guides that I found in the gentoo forum for the kernel 2.4

My two RAID-0 drives have several partitions. The first one is my Windows partition (NTFS). I have made a second unformatted partition which I want to install Gentoo onto...

So here's what I tried

Code:
livecd root # dmraid -ay
(dmraid worked straight away; I didn't have to modprobe any modules?)


Code:
livecd root # mount /dev/mapper/isw_bfbgjhhedb_RAID_Volume1

isw_bfbgjhhedb_RAID_Volume1   isw_bfbgjhhedb_RAID_Volume11
(Here I can see only two entries. However I don't understand the numbering system... Volume1 seems to be the whole raid volume and Volume11 the first partition? If that's the case, why don't I see the rest?)


Code:
livecd root # mount /dev/mapper/isw_bfbgjhhedb_RAID_Volume1 /mnt/data

mount: /dev/mapper/isw_bfbgjhhedb_RAID_Volume1 already mounted or /mnt/data busy

mount: according to mtab, /dev/mapper/isw_bfbgjhhedb_RAID_Volume11 is already mounted on /mnt/data


(tried to mount Volume1 but it didn't work)

Code:
livecd root # mount /dev/mapper/isw_bfbgjhhedb_RAID_Volume11 /mnt/data

 

livecd root # dir /mnt/data

1             CONFIG.SYS                FlexLM     NTDETECT.COM    System\ Volume\ Information  boot.ini     

hiberfil.sys

AUTOEXEC.BAT  Config.Msi                Games      NtfsUdel.log    UsageTrack.txt               boot.lgb      log.txt

BJPrinter     Default.wallpaper         IO.SYS     Program\ Files  WINDOWS                      bootbak.bat   ntldr

BOOT.BKK      Documents\ and\ Settings  MSDOS.SYS  RECYCLER        boot.bak                     gendel32.exe 

pagefile.sys


(Volume11 mounted ok. It's my Windows partition)


Code:
livecd root # fdisk -l /dev/mapper/isw_bfbgjhhedb_RAID_Volume1

omitting empty partition (5)

 

Disk /dev/mapper/isw_bfbgjhhedb_RAID_Volume1: 247.0 GB, 247044243456 bytes

255 heads, 63 sectors/track, 30034 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

                                   Device Boot      Start         End      Blocks   Id  System

/dev/mapper/isw_bfbgjhhedb_RAID_Volume1p1   *           1        4716    37881238+   7  HPFS/NTFS

/dev/mapper/isw_bfbgjhhedb_RAID_Volume1p4            4717       30034   203366835    f  W95 Ext'd (LBA)

/dev/mapper/isw_bfbgjhhedb_RAID_Volume1p5            6065       12049    48074481    7  HPFS/NTFS

/dev/mapper/isw_bfbgjhhedb_RAID_Volume1p6           12050       30034   144464481    7  HPFS/NTFS

 

livecd root # mount /dev/mapper/isw_bfbgjhhedb_RAID_Volume1p6 /mnt/test1/

mount: special device /dev/mapper/isw_bfbgjhhedb_RAID_Volume1p6 does not exist
(how on earth do you access, for example, partition 5?)





Any help is greatly appreciated! I just need to understand how to format the partition I want to install Gentoo onto with reiser4. From that point on I can keep up with the Gentoo docs.



Thanks,

Anestis
garlicbread
Apprentice


Joined: 06 Mar 2004
Posts: 182

PostPosted: Tue Feb 08, 2005 10:37 pm    Post subject: Reply with quote

dmraid can setup the main device node that represents the entire disk
and the first 4 primary partitions
but at the moment it has trouble with any partitions located inside an extended partition

so you'll need to map some of the partition nodes manually
the partition-mapper.sh script does this automatically, but relies on sfdisk (I'm not sure if sfdisk is included on the livecd)
it's probably best to add the -p option to dmraid if using partition-mapper.sh, to prevent dmraid from creating any partition nodes at the beginning and to let partition-mapper.sh create them all

to do it manually instead
fdisk -lu /dev/mapper/isw_bfbgjhhedb_RAID_Volume1
or
sfdisk -l -uS /dev/mapper/isw_bfbgjhhedb_RAID_Volume1

should give a list of partitions for the device using sector boundaries (the -u option), which is more accurate
using the start and end sector numbers for each partition
this can be fed into dmsetup using a linear map to create the partition node (eg /dev/mapper/isw_bfbgjhhedb_RAID_Volume1p5)
see anderlin's post above on how to do this
flipy
Apprentice


Joined: 15 Jul 2004
Posts: 229

PostPosted: Mon Feb 14, 2005 8:11 am    Post subject: Reply with quote

Quote:
Then download the following files, which are modified versions of the ones made by Gerte Hoogewerf:

http://anderlin.dyndns.org/filer/mkinitrd
http://anderlin.dyndns.org/filer/linuxrc


I cannot find them... so I'm trying to figure out what to change...
can you post another URL or comment what to modify?
Thanks

edit
thanks again :D
_________________
Si no entiendes algo leete detenidamente el Handbook.
flipy
Apprentice


Joined: 15 Jul 2004
Posts: 229

PostPosted: Tue Feb 22, 2005 3:51 pm    Post subject: Reply with quote

Deep-VI wrote:
Very nice, garlic - I had suspected udev in the back of my mind and thank you for nudging me in the right direction. I allowed me to save time and narrow down my search for answers.

Since this might help others:
In /etc/udev/rules.d/50-udev.rules, I commented out the KERNEL="dm-[0-9]*", NAME="" line and uncommented the similar line below it that calls /sbin/devmap_name...

Next, I downloaded and installed the latest multipath tools from http://christophe.varoqui.free.fr/multipath-tools/ (this package wouldn't compile without first emerging device-mapper for the libdevmapper library).

The next reboot went without a problem. In /dev, the device-mapper devices are showing up as dm-0 through 5, and in /dev/mapper are the custom devices that my initrd creates (boot, home, swap, etc). Both have the same majors and minors.

It was a fun few days of troubleshooting and learning and the boost in speed is definitely worth it!

did you do anything more in particular? (scripts,devmaps...)

edit
I've installed almost everything, but it's still not working:
something is wrong with dmraidmapper in the depend() clause
Whenever I boot, I get something like "cannot check the fs", and it drops me into a shell. If I run the mount command I can see that everything is mounted, but if I do an ls /dev/mapper only control shows up, while the /dev/dm-X nodes are mapped (and already mounted, though)
I've followed this how-to and the steps that anderlin posted, installed udev, configured /etc/conf.d/rc...
So my system boots ok until it has to check the root fs, then it fails with a check error
so I'm a little bit lost, trying to figure out what's going wrong...
edit2
it seems that everything is mapped in the /dev directory... but I still don't see how to solve it
_________________
Si no entiendes algo leete detenidamente el Handbook.
Ravilj
Apprentice


Joined: 29 Jul 2004
Posts: 164
Location: ziig / #

PostPosted: Thu Feb 24, 2005 4:12 pm    Post subject: Reply with quote

* Erm * made a mistake...
garlicbread
Apprentice


Joined: 06 Mar 2004
Posts: 182

PostPosted: Thu Feb 24, 2005 10:09 pm    Post subject: Reply with quote

flipy wrote:


It was a fun few days of troubleshooting and learning and the boost in speed is definitely worth it!
did you do anything more in particular? (scripts,devmaps...)

edit
I've installed almost everything, but it's still not working:
something is wrong with dmraidmapper in the depend() clause
Whenever I boot, I get something like "cannot check the fs", and it drops me into a shell. If I run the mount command I can see that everything is mounted, but if I do an ls /dev/mapper only control shows up, while the /dev/dm-X nodes are mapped (and already mounted, though)
I've followed this how-to and the steps that anderlin posted, installed udev, configured /etc/conf.d/rc...
So my system boots ok until it has to check the root fs, then it fails with a check error
so I'm a little bit lost, trying to figure out what's going wrong...
edit2
it seems that everything is mapped in the /dev directory... but I still don't see how to solve it


Not sure of your exact setup
By default the udev setup will not map device-mapper entries at all
by enabling the rule within /etc/udev/rules.d/50-udev.rules you'll get basic support for /dev/dm-0, /dev/dm-1 etc
but this makes it impossible to tell which dm entry relates to what (evms / dmraid / dmsetup etc)

to get the entries to show up the same way as they do with devfs, e.g. /dev/mapper/ or /dev/evms/
there's a little more work to do:
https://forums.gentoo.org/viewtopic-t-263996-highlight-.html

Also it's important to remember, when using an initrd or initramfs to boot the system, that the changes or additions made to udev relating to the boot device need to be made in 2 places
first on the root filesystem and second within the initrd / initramfs image

I've recently been messing with initramfs images and evms / udev / device mapper raid etc
Some initrd scripts get around the problem by using mknod to create the device nodes, but I've found a better way using udevstart, though this means that udev has to be configured properly in the initramfs image as well
I've recently got a bootable initramfs image working with pure udev / evms / run-init (from a klibc ebuild I'm working on)
I've nearly finished writing another howto for all of this, but I need to:
a. finish off a script for creating initrd / initramfs images.
b. finish off the ebuild for klibc
c. I need to test that I can now boot to a device-mapper based raid set
m@cCo
n00b


Joined: 01 Mar 2005
Posts: 9

PostPosted: Wed Mar 02, 2005 2:25 pm    Post subject: Reply with quote

Good, maybe I should get my raid0 to be seen by Linux with this method, very very good :D
But i have a question, well two in fact...

First, is dmsetup available on the live installation CD?

If the answer is yes i could try the totally manual method to set up my disks.
Otherwise here's the second question for you :D

I have the dmraid sources on the livecd, but I have to compile them according to my kernel version (I have nothing installed yet).
Unfortunately, when I try to compile it on the livecd I get a "cannot create executables" error from gcc.
I tried to edit /etc/make.conf, adding the proper (I think) variables for an Athlon64 processor.

CHOST="what the manual says to be here" :D
CFLAGS="-march=athlon64 -pipe -o2"
CXXGFLAGS={$CFLAGS} (I don't remember the exact syntax, sorry).

Anyway, I'm referring to the installation guide, so I hope I was right.
I've read that somebody has had the same problem, but after having the system installed.
I tried to run gcc-config but it doesn't exist.

Could you help me in some way?

Thanks a lot.
garlicbread
Apprentice


Joined: 06 Mar 2004
Posts: 182

PostPosted: Thu Mar 03, 2005 12:57 am    Post subject: Reply with quote

I think (or suspect) that dmsetup does probably exist on the Live CD although I'm not entirely certain
(I know the LVM tools exist and they need device-mapper so I'm guessing that it should be on there)

it sounds like what you're trying to do is to compile dmraid while in the Live CD
but I'm not sure that's possible
usually adding stuff to the Live CD environment involves manually creating a custom LiveCD with the tools on
one way is to use catalyst
and another way is to do it manually
https://forums.gentoo.org/viewtopic-t-244837-highlight-livecd+creation.html seems to have more info on this

This is one of those things I've not looked into properly myself yet (but will need to eventually, to set up a rescue CD in the event the system becomes unbootable for any reason)


on a side note for anyone with a VIA chipset
I've recently found that the rc5f version of dmraid did not in fact support the Via chipset
what was happening was it was picking up the pdc metadata written from a different controller on the same motherboard (I was swapping the drives around experimenting)
and the Via metadata was being overlaid on top of the pdc metadata, which sort of made it work on the Via chipset with dmraid (weird)
this is why my newer 74Gb drives were not being picked up (since I'd never set them up on the pdc controller) :oops:

saying that, there's a new version of dmraid out rc6 which does appear to now support via chipsets :D
https://bugs.gentoo.org/show_bug.cgi?id=63041

Edit
it looks like for Via / Raid 0 dmraid will only use a cluster size of 16K (32 sectors)
but on the plus side it does now appear to map out the extended partitions properly

Another Edit
it looks as if there is still a problem with the Promise Raid 1 array being reported half length


Last edited by garlicbread on Thu Mar 10, 2005 12:09 am; edited 2 times in total
Goto page 1, 2, 3, 4  Next
Page 1 of 4

 