Gentoo Forums
ext4 stride and stripe-width on top of multiple RAID arrays
eponymous
Tux's lil' helper
Joined: 02 Feb 2005
Posts: 141

Posted: Sat Apr 04, 2015 11:41 am    Post subject: ext4 stride and stripe-width on top of multiple RAID arrays

I have the following setup:

3x4TB RAID-5 (hardware RAID; two data disks, URE 1 in 10^15)
5x2TB RAID-5 (hardware RAID; four data disks, URE 1 in 10^15)

= six data disks in total?

These will be encrypted using LUKS and then collected together into a single Volume Group in which I'll create some logical volumes.
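
In other words, roughly this (just a sketch; device names and sizes are hypothetical):

Code:
# encrypt each hardware RAID device
cryptsetup luksFormat /dev/sda
cryptsetup luksOpen /dev/sda crypt_a
cryptsetup luksFormat /dev/sdb
cryptsetup luksOpen /dev/sdb crypt_b

# pool both mappings into one Volume Group
pvcreate /dev/mapper/crypt_a /dev/mapper/crypt_b
vgcreate vg0 /dev/mapper/crypt_a /dev/mapper/crypt_b

# then carve out logical volumes
lvcreate -L 500G -n data vg0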

So when creating the Volume Group, how does this setup affect the --physicalextentsize I need to specify (the default is 4MiB)? 4MiB might not be a multiple of the stripe widths of both arrays, in which case it would need to change.
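
To illustrate the check I mean, assuming (purely hypothetically) a 64KiB chunk size on both controllers:

Code:
array 1 full stripe: 2 data disks * 64KiB = 128KiB
array 2 full stripe: 4 data disks * 64KiB = 256KiB
default PE size:     4MiB = 4096KiB = 32 * 128KiB = 16 * 256KiB

With power-of-two chunk sizes the default would already be a multiple of both full stripes, but other chunk sizes might not divide evenly.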

Also, when creating an ext4 filesystem that spans these two arrays, will the stride and stripe-width options still follow these formulae?

Code:
stride = chunk_size (in bytes) / 4096

stripe_width = (number of data disks) * chunk_size (in bytes) / 4096
stripe_width = 6 * chunk_size (in bytes) / 4096


I'm assuming there are six data disks, but I'm not sure that's correct since the disks are of different sizes in different arrays: four are 2TB and two are 4TB.
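
Taking six data disks at face value and again assuming a hypothetical 64KiB chunk, the numbers would come out as:

Code:
stride       = 65536 / 4096 = 16
stripe_width = 6 * 16       = 96

mkfs.ext4 -E stride=16,stripe-width=96 /dev/vg0/data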

Does anyone know if I've missed something here?

Note: I used this page to aid me with calculations.

Thanks.


Last edited by eponymous on Sat Apr 04, 2015 12:09 pm; edited 1 time in total
frostschutz
Advocate
Joined: 22 Feb 2005
Posts: 2971
Location: Germany

Posted: Sat Apr 04, 2015 12:02 pm    Post subject: Re: ext4 stride and stripe-width on top of multiple RAID arrays

eponymous wrote:
URE 1 in 10^15


Doesn't really matter...

eponymous wrote:
So when creating the Volume Group, how does this setup affect the --physicalextentsize I need to specify (the default is 4MiB)?


I like using a larger PE size, since I don't ever need 4MiB resolution, and the LVM tools tend to be faster with larger PE sizes. Set it to the size of the smallest LV you're ever going to have; in my case that's closer to 128MiB. It wastes a bit of space (the last partial PE is unusable), but most people can afford to waste 100MiB on a dozens-of-TiB rig. Also, according to the manpage, "The default of 4 MiB leads to a maximum logical volume size of around 256GiB", which is obviously not okay anymore.
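
For example (reusing the hypothetical device names from above):

Code:
# 128MiB extents instead of the default 4MiB
vgcreate -s 128M vg0 /dev/mapper/crypt_a /dev/mapper/crypt_b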

As for alignment, there is no issue either way. If you believe there is an issue, you should also change the first PE data offset (and the LUKS offset), because otherwise everything will be off by 1MiB, you know? (Not that it's a problem; I find that MiB alignment has worked well for everything thus far.)
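
If you really wanted to chase end-to-end alignment, the knobs would be along these lines (offsets are examples only, not recommendations):

Code:
# LUKS payload offset, in 512-byte sectors (8192 sectors = 4MiB)
cryptsetup luksFormat --align-payload=8192 /dev/sda

# align the PV data area to a full stripe (here 256KiB)
pvcreate --dataalignment 256k /dev/mapper/crypt_a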

eponymous wrote:
Code:
stride = chunk_size (in bytes) / 4096
stripe_width = (number of data disks) * chunk_size (in bytes) / 4096


Do you have benchmarks that show a significant difference due to these optimizations? In my experience they are more or less useless. And in the long term, it's not unusual for a RAID to grow or shrink (if you find you've run out of space and add another disk to the bunch), and at that point these values would be off anyway.
eponymous
Tux's lil' helper
Joined: 02 Feb 2005
Posts: 141

Posted: Sat Apr 04, 2015 12:13 pm    Post subject: Re: ext4 stride and stripe-width on top of multiple RAID arrays

Thanks for your quick response.

frostschutz wrote:

eponymous wrote:
Code:
stride = chunk_size (in bytes) / 4096
stripe_width = (number of data disks) * chunk_size (in bytes) / 4096


Do you have benchmarks that show a significant difference due to these optimizations? In my experience they are more or less useless. And in the long term, it's not unusual for a RAID to grow or shrink (if you find you've run out of space and add another disk to the bunch), and at that point these values would be off anyway.


I don't, but I'd always assumed they were necessary. I can't help feeling they'd be more useful if I didn't have the LVM layer in between, which surely obscures things at the filesystem layer; it could be argued that ext4 can't "see" the physical disks and won't care, which backs up your point. You're right that I'm planning to grow and shrink the RAID arrays, but you can alter the stride and stripe-width on the fly using tune2fs.
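
For example, after growing an array by one data disk (numbers hypothetical, stride unchanged):

Code:
# recompute for the new geometry and apply without reformatting
tune2fs -E stride=16,stripe_width=112 /dev/vg0/data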


Oh, I should've mentioned that I'm using LVM2, not LVM1, so according to the manpage the 4MiB PE size restriction (LVs no larger than 256GiB) doesn't actually apply in this case.
frostschutz
Advocate
Joined: 22 Feb 2005
Posts: 2971
Location: Germany

Posted: Sat Apr 04, 2015 12:24 pm

Whoops :lol: (they should move the LVM1 stuff to a different section, no one's used that for ages)

It's still better to pick a larger PE size, though.

Quote:
If the volume group metadata uses lvm2 format those restrictions do not apply, but having a large number of extents will slow down the tools but have no impact on I/O performance to the logical volume.


If you later find you picked too large a PE size, you can always make it smaller...
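
e.g. something like this; it only works if everything in the VG is aligned to the new size, which shrinking from one power-of-two size to another takes care of:

Code:
# shrink extents from 128MiB back to the 4MiB default
vgchange -s 4M vg0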
eponymous
Tux's lil' helper
Joined: 02 Feb 2005
Posts: 141

Posted: Sat Apr 04, 2015 2:24 pm

frostschutz wrote:
Whoops :lol: (they should move the LVM1 stuff to a different section, no one's used that for ages)

It's still better to pick a larger PE size, though.

Quote:
If the volume group metadata uses lvm2 format those restrictions do not apply, but having a large number of extents will slow down the tools but have no impact on I/O performance to the logical volume.


If you later find you picked too large a PE size, you can always make it smaller...


Great, thanks :)

I should also link other readers to this excellent post from frostschutz, which I also came across just now :)

Software Raid 10 LVM on Mixed Drives Sector Alignment

I think the points made there are applicable here too.