Gentoo Forums
RAID1 on Intel (AHCI, ESRT2, RST)

 
lutel
Tux's lil' helper

Joined: 19 Oct 2003
Posts: 93
Location: Pomroczna

PostPosted: Sat Jun 14, 2014 10:02 am    Post subject: RAID1 on Intel (AHCI, ESRT2, RST) Reply with quote

Hi,

Please advise: which software RAID should I choose in the BIOS? I've got an Intel server platform SP1200RPO where I can choose between AHCI, ESRT2 (LSI) and RST for disk management. AFAIK ESRT2 is software RAID, so with all of these options I'll have to use mdadm, but which one is best in terms of reliability and ease of management? (I guess performance-wise they are all the same.)

Update:
To give it a try, I've chosen ESRT2. After configuring RAID1 + 1 spare in the RAID BIOS, I see it is recognized by Gentoo as:

Code:

[b]livecd ~ # mdadm --detail /dev/md126[/b]
/dev/md126:
      Container : /dev/md/ddf0, member 0
     Raid Level : raid1
     Array Size : 1952147456 (1861.71 GiB 1999.00 GB)
  Used Dev Size : 1952147456 (1861.71 GiB 1999.00 GB)
   Raid Devices : 2
  Total Devices : 2

          State : active, resyncing
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

  Resync Status : 14% complete

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb

[b]livecd ~ # cat /proc/mdstat[/b]

Personalities : [raid10] [raid1] [raid6] [raid5] [raid4] [raid0] [linear] [multipath]
md126 : active raid1 sdb[1] sda[0]
      1952147456 blocks super external:/md127/0 [2/2] [UU]
      [==>..................]  resync = 13.6% (265705856/1952147456) finish=182.2min speed=154260K/sec

md127 : inactive sdb[2](S) sda[1](S) sdc[0](S)
      4101384 blocks super external:ddf

unused devices: <none>


No idea what md127 is, and why is it not recognizing the spare drive? I tried:

Code:
# mdadm --add /dev/md/ddf0 /dev/sdc
mdadm: Cannot open /dev/sdc: Device or resource busy

No joy...
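For what it's worth, with external (DDF/IMSM) metadata the second device, md127, is normally the container that owns all the member disks, and spares are tracked at the container level rather than on the RAID volume itself, which would explain why sdc shows as busy. A sketch of how one might inspect this (device names taken from the output above):

```shell
# With external DDF metadata, md127 is the container owning all member disks;
# sdc shows up there as (S) because the spare belongs to the container.
mdadm --detail /dev/md127      # list the container members: sda, sdb, sdc
mdadm --examine /dev/sdc       # dump the on-disk DDF metadata for sdc
```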

Cheers
szatox
Veteran

Joined: 27 Aug 2013
Posts: 1746

PostPosted: Sat Jun 14, 2014 2:30 pm    Post subject: Reply with quote

Don't use RAID in the BIOS at all; go with either full hardware RAID or full software RAID. mdadm assembles RAID without any helpers, and the fakeraid offered by the BIOS is a compilation of the disadvantages of both.
md126 and md127 are interface devices created by the md driver. (You refer to them as your block devices, and md does whatever you configured it to do with the data you provide, e.g. mangles it according to your RAID setup and places it on the real disks.)

So, in your setup:
md126 is a raid1 made of sda and sdb,
md127 is an inactive something made of sda, sdb and sdc.

Since you have the same devices (sda and sdb) in BOTH raids, you have a conflict there. At least one raid is misconfigured and should be removed.

Tip on using mdadm RAID: if you want your RAID in several chunks, partition your drives and make one RAID of sda1 and sdb1, another RAID of sda2 and sdb2, and perhaps yet another of sda3 and sdb3, and so on. Well, actually, better to make only 2 partitions: a tiny one you can boot from, and split the rest with LVM. You can also consider skipping mdadm RAID for "the rest" and having LVM manage the software RAID. (AFAIK you can't boot from LVM, so you need either a separate boot partition or a separate boot device. You can boot from mdadm RAID, though.)
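A sketch of that layout (assuming two disks already partitioned into a small sda1/sdb1 and a large sda2/sdb2; the array names md0/md1 and volume group name vg0 are made up for illustration):

```shell
# Tiny RAID1 for /boot:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# RAID1 for everything else, with LVM on top to split it up:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
pvcreate /dev/md1
vgcreate vg0 /dev/md1
lvcreate -L 20G -n root vg0
lvcreate -L 4G  -n swap vg0
```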
krinn
Watchman

Joined: 02 May 2003
Posts: 7071

PostPosted: Sat Jun 14, 2014 3:20 pm    Post subject: Reply with quote

Use software raid and don't use fakeraid.

Here's a comparison of the solutions, so I can send anyone who asks again here :)

Software RAID:
Good: cross-controller handling.
The disks of the array can be attached to several controllers, spreading the controller dependency and the I/O load. This also pushes the limit on the number of disks very high.
Good: RAID implementation.
The RAID implementation depends on software capability, so it is easy to update or upgrade. Migrating such an array to another host is the most versatile solution.
Good: zero cost.
Bad: CPU, memory, bus and controller dependent.
Performance depends mostly on the CPU, but also on memory, which is used heavily in handling each drive, and on the bus, since the controller attached to the bus has to handle every drive for a single request. Most controllers are designed to handle 4 drives but not really at the same time, so when used with more than one drive many controllers start to produce high I/O wait.
Bad: ease of use.
Configuration complexity depends on the software's ease of use. For Linux, the solution isn't friendly like a menu-driven one.

Fakeraid:
Good: ease of use.
The only good thing this solution offers is a menu to manage RAID arrays. Depending on the hardware, it might offer some security or check functionality for the array; in reality they offer nothing but a format function or a bad-sector search (some even just search for bad sectors without marking them).
Bad: RAID implementation.
The RAID implementation is fixed and depends on the hardware. Some upgrades are possible via a RAID BIOS update; in practice that rarely happens.
Bad: hardware dependency.
If the hardware fails, the RAID is lost unless you have other compatible hardware, generally the same hardware as before, sometimes merely the same hardware maker. This can be highly problematic, especially when the RAID is on the motherboard and you change the motherboard. With a card, it can still be problematic if the new motherboard doesn't support the interface the card uses.
Bad: CPU, memory, bus and controller dependent.
Same as software RAID.
Bad: controller dependent.
You cannot use an array built across several controllers, and arrays only work with the controller they were designed for. This time there is no way to balance controller usage, and it puts a hard limit on the number of disks in the array.

Hardware RAID:
Good: ease of use.
A friendly menu and extensive array management, with battery backup, caching, integrity checks for the array and disks... (depending on the hardware).
The array is seen as one disk, so there are no special conditions to use it (no initramfs needed to mount root, easy to boot from and to use in a bootloader...).
Good: dedicated CPU, controller and memory (optional; some can be upgraded, and some can even use an SSD) that handle each disk and the array. The health of the disks and of the array itself is handled by the hardware (hiding the health state from the OS).
Bad: hardware dependency.
Same as fakeraid.
Bad: controller dependent.
Same as fakeraid, plus you might add the price here, as hardware RAID is costly. The limit on the number of disks in the array depends on the hardware's capability and cannot be raised beyond that limit.
Bad: RAID implementation.
Same as fakeraid.

As you can see, fakeraid shares all the problems of both the hardware and the software RAID solutions.
lutel
Tux's lil' helper

Joined: 19 Oct 2003
Posts: 93
Location: Pomroczna

PostPosted: Sat Jun 14, 2014 4:18 pm    Post subject: Reply with quote

There is one feature of "fakeraid" that I like better compared to software RAID: I can see the array as one device (/dev/md126 in my case) and don't have to make "arrays" from partitions. LVM might be a solution to this, but it's another layer of logic that isn't really necessary for me. I might be forced to go software RAID anyway; although I spent lots of time setting up my Gentoo, in the end it appears that I can't install a kernel into my boot partition, with either lilo or grub2...

Could you help me with this?

Code:
livecd ~ # lilo
Using BIOS device code 0x80 for RAID boot blocks
Reading boot sector from /dev/md126
WARNING: SATA partition in the high region (>15):
LILO needs the kernel in one of the first 15 SATA partitions. If
you need support for kernel in SATA partitions of the high region
than try grub2 for this purpose!
Fatal: Sorry, cannot handle device 0x10300

boot = /dev/md126
menu-scheme = Wb
prompt
timeout = 50
delay = 0
append = "panic=10"
default = 3.14.0
image = /boot/kernel-genkernel-x86_64-3.14.5-hardened-r2
  root = /dev/md126p2
  label = 3.14.0
  alias = 1
  append = "real_root=/dev/md126p2 panic=10"
  initrd=/boot/initramfs-genkernel-x86_64-3.14.5-hardened-r2

Grub...
Code:
livecd ~ # grub2-install /dev/md126 --force
Path `/boot/grub' is not readable by GRUB on boot. Installation is impossible. Aborting.

livecd ~ # /usr/sbin/grub2-probe -t fs /boot/grub
/usr/sbin/grub2-probe: error: disk `md126,1' not found.
NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 43178
Location: 56N 3W

PostPosted: Sat Jun 14, 2014 4:32 pm    Post subject: Reply with quote

lutel,

Both fakeraid and software raid show you the underlying block devices as well as the raid set as a single block device.
In both cases you must remember not to operate on the underlying block devices, unless something breaks.

grub2 can boot from any raid set, but I've never used grub2.
lilo and grub-legacy both need /boot to be raid1 with version 0.90 superblocks.
They both ignore the raid during kernel loading.
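A sketch of creating such a /boot array (assuming /dev/sda1 and /dev/sdb1 are small partitions set aside for it; the 0.90 superblock sits at the end of the device, which is why lilo and grub-legacy can treat the array like a plain partition):

```shell
# RAID1 /boot with legacy 0.90 superblocks, readable by lilo/grub-legacy:
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=0.90 \
      /dev/sda1 /dev/sdb1
mkfs.ext2 /dev/md0     # bootloader sees a plain ext2 filesystem
mount /dev/md0 /boot
```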
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
lutel
Tux's lil' helper

Joined: 19 Oct 2003
Posts: 93
Location: Pomroczna

PostPosted: Sat Jun 14, 2014 4:49 pm    Post subject: Reply with quote

NeddySeagoon - in my case I'm not operating on the underlying block devices (/dev/sdX) but on /dev/md126, which was created by mdadm automagically.
It uses external metadata, so 0.90 is a no-go :/ Does it mean neither lilo nor grub2 can boot from Intel fakeraid (ESRT2)?
NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 43178
Location: 56N 3W

PostPosted: Sat Jun 14, 2014 5:08 pm    Post subject: Reply with quote

lutel,

grub2 may work, but I think it needs access to the metadata to read the raid.
As you have external metadata, the raid cannot be read until the metadata is known to whatever is trying to do the reading.

Bootloaders all make their own arrangements for reading the kernel and so on.
I have never used external metadata, nor grub2.

Put /boot on a USB thumb drive as a get-you-going measure.
Jaglover
Watchman

Joined: 29 May 2005
Posts: 7089
Location: Saint Amant, Acadiana

PostPosted: Sat Jun 14, 2014 5:14 pm    Post subject: Reply with quote

krinn made a good write-up and you still want to go fakeraid? Here is another one: http://skrypuch.com/raid/
_________________
Please learn how to denote units correctly!