Gentoo Forums
Two disk btrfs RAID10 an option?
msst
Apprentice


Joined: 07 Jun 2011
Posts: 216

PostPosted: Thu Jul 27, 2017 7:52 pm    Post subject: Two disk btrfs RAID10 an option? Reply with quote

For a new ProLiant Gen8 I am about to set up a two-disk btrfs RAID and am wondering whether, besides the obvious RAID1, a RAID10 built from four partitions (two on each drive) might be an even better idea.

The idea behind it is twofold:

1) If I ever want to add two more disks, a RAID1 has to be fully converted. A RAID10 setup could simply be expanded and that's it.

2) The btrfs RAID1 implementation seems to do mostly linear reads. According to benchmarks it is only moderately faster at reading files and far off RAID0. In theory, though, a RAID1 should read almost as fast as RAID0, since the data is present on both disks and can be read from them in parallel. With RAID10 this behaviour should be forced, and the RAID10 benchmarks are indeed faster.

Now I wonder if that is a good idea or whether the RAID10 implementation of btrfs is too unintelligent for it. For example, btrfs would have to ensure that it mirrors across two partitions that are not on the same drive, and the opposite for the striping. Will this work as I hope, or is RAID10 on two disks / four partitions a bad idea?
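Concretely, the layout I have in mind would be created with something along these lines (partition names only as an example, I have not tried this yet):
Code:
mkfs.btrfs -d raid10 -m raid10 /dev/sda1 /dev/sda2 /dev/sdb1 /dev/sdb2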

It would be slightly more complicated to set up, so if it does not bring the hoped-for benefits, a RAID1 is the better choice.

Has anyone looked into this?
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 43192
Location: 56N 3W

PostPosted: Thu Jul 27, 2017 9:08 pm    Post subject: Reply with quote

msst,

It sounds like a mess. It would be easy to arrange the data layout so that you had no redundancy at all.

Converting a raid1 to raid10 is easy.
Add the two new drives and make a raid10 with two missing drives.
Copy the data from the raid1 to the degraded raid10.
Take one drive out of the raid1 and add it to the raid10.
Rinse and repeat with the other drive.
While the last drive is being added to the raid10, your redundancy is compromised, but that's what your current validated backups are for.

Do it right ... right from the start.

The write speed will be horrible on rotating rust, all those head movements.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
msst
Apprentice


Joined: 07 Jun 2011
Posts: 216

PostPosted: Sat Jul 29, 2017 7:36 am    Post subject: Reply with quote

Hi Neddy,

I think you're right. It seems to be documented nowhere how exactly btrfs would treat such a two-disk RAID10, so it would be a bit risky on top of a messy setup.

And looking into the upgrade options from RAID1 to RAID10, it seems even easier than what you wrote:

Quote:
Copy the data from the raid1 to the degraded raid10.
Take 1 drive out of the raid1 and add it to the raid10
Rinse and repeat with the other drive.
While the last drive is being added to the raid10, your redundancy is compromised but that's what your current validated backups are for.


If I understood it right, I can simply chuck two new drives in and do a btrfs device add. This would add them to the RAID1 instantly, without any copying, as it simply adds new space.

Then a btrfs balance run with -dconvert=raid10 -mconvert=raid10, and btrfs will convert it in one pass without compromising the redundancy.
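If I got the wiki right, the whole upgrade would then be roughly (mount point only an example):
Code:
btrfs device add /dev/sdc /dev/sdd /mnt/data
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/data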

So in the end it's just a somewhat less-than-optimal-performance RAID1 implementation vs. a messy and risky setup. I'll set the disks up as plain RAID1 and possibly convert later when I have four drives.
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 43192
Location: 56N 3W

PostPosted: Sat Jul 29, 2017 9:25 am    Post subject: Reply with quote

msst,

I don't know the detail of how btrfs manages the reshape.
It may not compromise the redundancy ... it all depends on the implementation details.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
krinn
Watchman


Joined: 02 May 2003
Posts: 7071

PostPosted: Sat Jul 29, 2017 2:24 pm    Post subject: Reply with quote

When I see a user asking about using raid1/10 with btrfs, this is what I read :)
"hey guys, how can I put a parachute on my cat that is walking a thin line over water full of crocodiles?"
msst
Apprentice


Joined: 07 Jun 2011
Posts: 216

PostPosted: Sat Jul 29, 2017 4:36 pm    Post subject: Reply with quote

Quote:
hey guys, how can I put a parachute on my cat that is walking a thin line over water full of crocodiles?


Hehe, so I guess you consider btrfs still very buggy. I know its reputation is not so good.

I also hesitated quite a while, but I have been using btrfs for about 4 years now without any problems, including RAID1. I guess the reputation was well deserved in the beginning; I am not sure it still is. Ext2 wasn't flawless either, the ext family only became reliable once journalling was added.
And RAID is not a substitute for a backup. That I do know.

And BTW, if there is one animal able to walk a thin line over an abyss, it is likely a cat. 8-;
Cyker
Veteran


Joined: 15 Jun 2006
Posts: 1746

PostPosted: Wed Aug 02, 2017 7:13 pm    Post subject: Reply with quote

IMHO, instead of setting up a complicated partitioning scheme, just use the two disks raw and go RAID1. (If you need to boot off them then you will need to partition them, tho', so this might not be an option!)
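Creating it on the raw disks is then just something like this (device names are examples, double-check against your own system):
Code:
mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc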

One of the few features of btrfs that is stable and works really well is that it makes 're-shaping' the array beautifully trivial in ways that would make mdadm and ZFS green with envy.

If you later expand to 4 drives, it is fantastically easy to move from RAID1 to RAID10.

The conversion process is something like (Assuming you are using raw drives and not partitions for btrfs):
1) Plug new drives into system
2) Add drives to existing btrfs array: btrfs device add /dev/newdrive1 /dev/newdrive2 /mountpoint-of-target-btrfs-array

Then you have two options: 3a) keep it as RAID1, or 3b) convert to RAID10:
3a) btrfs balance start -d -m /mountpoint-of-target-btrfs-array
3b) btrfs balance start -dconvert=raid10 -mconvert=raid10 /mountpoint-of-target-btrfs-array
NOTE: This can take a really long time if you're using large disks.

Both will net you the same space but raid10 may be a bit faster for large linear files. For random sub-1GB files the performance probably won't be that different. raid1 is probably safer.

Finally, if you do the conversion, once it completes it is worth running
Code:
btrfs fi df /mountpoint-of-target-btrfs-array

to see if the conversion completed fully or if there are still RAID1 chunks left over, as sometimes bits that are in use don't get converted and you may need to run
Code:
btrfs balance start -dsoft,convert=raid10 -msoft,convert=raid10 /mountpoint-of-target-btrfs-array

a few times until they are!


Disclaimer: These commands might not be accurate as I didn't check, and the commands sometimes change in newer btrfs-progs!
Ant P.
Watchman


Joined: 18 Apr 2009
Posts: 5761

PostPosted: Thu Aug 03, 2017 12:23 am    Post subject: Reply with quote

Using the internal RAID10 mode of Btrfs is a really bad idea. As its own wiki notes, the filesystem will brick itself if a single disk goes bad and you have fewer than three copies.

If you really must do this, use dm-raid and put Btrfs on top as a dumb filesystem only.
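A rough sketch of that layering, using mdadm here as one way to do it (device and array names are only placeholders; a two-device md RAID10 behaves like a mirror):
Code:
mdadm --create /dev/md0 --level=10 --raid-devices=2 /dev/sda2 /dev/sdb2
mkfs.btrfs /dev/md0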
Cyker
Veteran


Joined: 15 Jun 2006
Posts: 1746

PostPosted: Thu Aug 03, 2017 7:20 am    Post subject: Reply with quote

It sucks that they still haven't fixed that, but it's not as bad as 'bricked' suggests; certainly not the worst btrfs problem to recover from (but it is still pretty bad).

The basic problem is that you can mount the array as degraded *AND* read-write only ONCE. After that, it gets locked into read-only mode if you don't fix it before remounting.

I'm not sure if a firm cause has been found. Last I read, when the array is mounted degraded,rw and a write occurs, the written chunk is created as 'single' instead of 'raid1'; on the next mount btrfs sees the 'single' chunk and panics, thinking there may be missing 'single' chunks on the missing disk, and puts the array into read-only mode to prevent data loss.

It's not as big a problem as e.g. the raid56 code (which currently acts more like raid0), and there are ways to recover from it; the data can still be read, at least!


If the array is shut down, the faulty device replaced, and the array repaired so it's no longer degraded, then it won't be a problem; it'd only be an issue if you kept trying to use the array degraded.

If it does go into read-only mode, then yeah it's more of a pain as you'd need to copy everything off and rebuild it to make it work again.
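For what it's worth, the usual way out when a disk actually dies would be roughly this (not checked here either; device names and the devid are just placeholders):
Code:
mount -o degraded /dev/sdb /mnt/array        # mount the surviving disk degraded
btrfs filesystem show /mnt/array             # note the devid of the missing disk
btrfs replace start 2 /dev/sdd /mnt/array    # 2 = devid of the missing disk, /dev/sdd = the new disk
btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt/array    # re-convert any 'single' chunks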



More info here:
https://btrfs.wiki.kernel.org/index.php/Gotchas#raid1_volumes_only_mountable_once_RW_if_degraded
Zucca
Veteran


Joined: 14 Jun 2007
Posts: 1519
Location: KUUSANKOSKI, Finland

PostPosted: Thu Aug 03, 2017 11:29 am    Post subject: Reply with quote

krinn wrote:
When I see a user asking about using raid1/10 with btrfs, this is what I read :)
"hey guys, how can I put a parachute on my cat that is walking a thin line over water full of crocodiles?"
I have about 4 years of experience using btrfs raid1/10, with no problems. This is of course with a proper number of disks (5 and 6).
I've even had two "user"-related cold-unplug accidents.

It's the raid5/6 modes that aren't safe in btrfs.
_________________
..: Zucca :..

Code:
ERROR: '--failure' is not an option. Aborting...
Maitreya
Guru


Joined: 11 Jan 2006
Posts: 420

PostPosted: Thu Aug 03, 2017 3:09 pm    Post subject: Reply with quote

Why do I see people constantly repeating the same FUD over and over again? I have been using several btrfs raid modes with disks of different qualities and sizes for years. Never has it failed me. NEVER. As with any system, don't ignore system messages. I would actually say the opposite: btrfs has saved my data countless times, whether from user fault or from disks dying.

Anyway, on topic: there is no added value in raid10 on 2 disks, at best more headaches. So please just start with raid1 and add disks later?


Last edited by Maitreya on Sun Aug 06, 2017 11:46 am; edited 1 time in total
bunder
Bodhisattva


Joined: 10 Apr 2004
Posts: 5845

PostPosted: Fri Aug 04, 2017 7:32 am    Post subject: Reply with quote

Maitreya wrote:
Why do I see people constantly repeating the same FUD over and over again? I have been using several btrfs raid modes with disks of different qualities and sizes for years. Never has it failed me. NEVER.


Because their wiki clearly states that raid1/10 will be unusable (but still readable) if the pool is reduced to one disk, and that raid5/6 will most likely trash the pool if a disk is replaced. here

RHEL is also dropping the btrfs tech preview from its releases. here
Zucca
Veteran


Joined: 14 Jun 2007
Posts: 1519
Location: KUUSANKOSKI, Finland

PostPosted: Fri Aug 04, 2017 8:07 am    Post subject: Reply with quote

Maitreya wrote:
Anyway, ontopic: there is no added value of raid10 on 2 disks. At best more headaches. So please just start with raid10 and add disks later?
I'd start with raid1. Then later, when there are more disks (and no fear of the read-only fs state), use btrfs balance to change to raid10.
_________________
..: Zucca :..

Code:
ERROR: '--failure' is not an option. Aborting...
Maitreya
Guru


Joined: 11 Jan 2006
Posts: 420

PostPosted: Sun Aug 06, 2017 11:50 am    Post subject: Reply with quote

Corrected typo. Indeed raid1.

As for raid5/6, I guess I have been incredibly lucky then, all of those times I replaced disks...

I do believe there is a difference between the tech people building something saying 'don't use it in production because it will eat kittens' and a company that wants to sell tech saying 'our stuff is the best'. I've seen ZFS die spectacularly.
msst
Apprentice


Joined: 07 Jun 2011
Posts: 216

PostPosted: Thu Aug 10, 2017 5:34 pm    Post subject: Reply with quote

Quote:
IMHO, instead of setting up a complicated partitioning scheme, just use the two disks raw and go RAID1. (If you need to boot off them then you will need to partition them, tho', so this might not be an option!)


Exactly what I did now. I use a separate SSD as root, so the RAID1 is only the data array, and I just ran cryptsetup on the whole raw disks and built the RAID on top. It is indeed incredibly easy to set up and maintain this way. That beats mdadm and ZFS hands down.
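For reference, the whole data-array setup boils down to roughly this (device and mapper names are just examples):
Code:
cryptsetup luksFormat /dev/sdb
cryptsetup luksFormat /dev/sdc
cryptsetup open /dev/sdb data1
cryptsetup open /dev/sdc data2
mkfs.btrfs -m raid1 -d raid1 /dev/mapper/data1 /dev/mapper/data2
mount /dev/mapper/data1 /mnt/data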

With disks getting bigger and bigger, I also have the feeling that RAID5/6 will turn into a freak show sooner or later. 16 TB disks have been announced for 2018; imagine repairing a degraded RAID5/6 made out of these monsters. It would likely take a month, with a huge chance of encountering a read error in the process. So I guess that if btrfs gets its RAID10 gotchas solved, it will be as good as it needs to be to deprecate ext4.