Gentoo Forums
btrfs RAID10 size issues
Kompi
Apprentice

Joined: 05 Oct 2002
Posts: 252
Location: Germany

PostPosted: Sun Aug 24, 2014 12:18 pm    Post subject: btrfs RAID10 size issues

After having used md raids with LVMs for a long time, I finally tried to switch to btrfs's internal raid functionality. I created some RAID10 filesystems, each over 4 partitions on different drives.

But one of the filesystems has some weird space issues. The filesystem is a btrfs RAID10 on 4 partitions, each roughly 60GB in size. A RAID10 should stripe across two of them and mirror that stripe onto the other two, which should give me a mirrored volume of around 120GB of usable space.
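For reference, I created the filesystem roughly like this (reconstructed from memory, so take the exact command with a grain of salt):

Code:
# RAID10 for both data and metadata across the four partitions
$ mkfs.btrfs -L data -d raid10 -m raid10 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
$ mount /dev/sda3 /srv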

df shows a filesystem with a total size of about 240GB. This is expected, since df reports the combined size of all devices, regardless of mirroring:

Code:
$ df -h /srv
/dev/sda3  239G     93G  146G   39% /srv


btrfs filesystem show is also no surprise: each device is about 60GB in size, and less than half of any of them is currently used:

Code:
$ btrfs filesystem show /srv
Label: 'data'  uuid: a46842a0-c079-46bb-880c-9fa27c13862a
   Total devices 4 FS bytes used 46.15GiB
   devid    1 size 59.60GiB used 23.53GiB path /dev/sda3
   devid    2 size 59.60GiB used 23.53GiB path /dev/sdb3
   devid    3 size 59.60GiB used 23.53GiB path /dev/sdc3
   devid    4 size 59.60GiB used 23.53GiB path /dev/sdd3


However, even though less than half of the filesystem is used, it already appears to be full.

btrfs filesystem df seems to have the explanation why:

Code:
$ btrfs filesystem df /srv
Data, RAID10: total=46.00GiB, used=45.61GiB
System, RAID10: total=64.00MiB, used=16.00KiB
Metadata, RAID10: total=1.00GiB, used=551.30MiB


As you can see, btrfs has allocated only 46GB of the ~120GB of mirrored space for data blocks. Even accounting for metadata overhead, that is way too small.
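Doing the math, the per-device usage is at least consistent with those pool sizes (assuming RAID10 keeps two copies of everything):

Code:
46.00 GiB data + 1.00 GiB metadata + 0.06 GiB system ≈  47 GiB of chunks
47 GiB x 2 copies (RAID10)                           ≈  94 GiB raw
94 GiB / 4 devices                                   ≈ 23.5 GiB per device   <- matches "used 23.53GiB"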

I tried btrfs filesystem resize max and btrfs balance on the filesystem, but neither changed the size of the data allocation.
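To be precise, these are roughly the commands I ran (the filesystem is mounted at /srv):

Code:
# grow the filesystem to the full size of the underlying device
$ btrfs filesystem resize max /srv    # acts on one devid at a time; <devid>:max targets the others
# rewrite and redistribute the existing chunks
$ btrfs balance start /srv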

Has anyone run into this problem? Am I missing something? Is there a way to tell btrfs directly to increase the data block area?
vaxbrat
l33t

Joined: 05 Oct 2005
Posts: 731
Location: DC Burbs

PostPosted: Sun Aug 24, 2014 6:54 pm

You might want to look at the btrfs FAQ where it addresses the free space issues:

https://btrfs.wiki.kernel.org/index.php/FAQ#Aaargh.21_My_filesystem_is_full.2C_and_I.27ve_put_almost_nothing_into_it.21

The short answer is, don't worry about it :D

The "btrfs fi show" command shows you how much space of the partition has been allocated to btrfs for the entire history of the filesystem, sort of a high water mark. "btrfs fi df" shows the current allocation of that allocated space to each pool. For example /thufirraid is one of my 16tb raid5 arrays in my Ceph cluster:

Code:
thufir ceph # df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/sda2                       220G  113G   96G  55% /
devtmpfs                         16G     0   16G   0% /dev
tmpfs                           3.2G  1.3M  3.2G   1% /run
shm                              16G   76K   16G   1% /dev/shm
cgroup_root                      10M     0   10M   0% /sys/fs/cgroup
/dev/sdb                         15T  4.1T  8.6T  33% /thufirraid
/dev/sdb                         15T  4.1T  8.6T  33% /raid
/dev/sdb                         15T  4.1T  8.6T  33% /var/lib/ceph/osd/ceph-0
192.168.2.5,192.168.2.6:/        51T   28T   24T  54% /kroll


The /raid and /var/lib/ceph/osd/ceph-0 object store are both subvolumes of /thufirraid. /kroll is where the ceph metadata server mounts the POSIX filesystem view of the cluster. At one point /thufirraid was at about 75% capacity, but that fell as I migrated files out of its /raid subvolume to the /kroll filesystem. Until I gained some trust in Ceph, I was only copying and not moving files over. I later went back and deleted things once I felt all warm and fuzzy about ceph's storage.

Code:
thufir ceph # btrfs fi show /thufirraid
Label: THUFIRRAID  uuid: c2c0bc6d-dfc1-4437-9736-1d24b4874ade
        Total devices 4 FS bytes used 4.06TiB
        devid    1 size 3.64TiB used 1.94TiB path /dev/sdb
        devid    2 size 3.64TiB used 1.94TiB path /dev/sdc
        devid    3 size 3.64TiB used 1.94TiB path /dev/sdd
        devid    4 size 3.64TiB used 1.94TiB path /dev/sde

Btrfs v3.12


The high-water mark on the raid was close to 8tb. However, my pools are only sitting at about 6tb now:

Code:
thufir ceph # btrfs fi df /raid
Data, single: total=8.00MiB, used=0.00
Data, RAID5: total=5.79TiB, used=4.05TiB
System, RAID1: total=8.00MiB, used=432.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, RAID1: total=14.00GiB, used=11.27GiB
Metadata, single: total=8.00MiB, used=0.00


That 1.7tb or so in the Data pool is part of a free space cache and explains the difference between what "df -h" shows and what we see here. If I start filling things up again, that free space cache will be used first and then the Data pool will be grown as necessary when that gets exhausted. I don't recall whether a "btrfs balance" will tweak things back down when there's a large amount of free space cache.
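If you do want to shrink an over-allocated Data pool back down, I believe a filtered balance is the usual trick (the usage filter needs a reasonably recent kernel and btrfs-progs, so check yours first):

Code:
# rewrite only data chunks that are less than 10% full so btrfs can release the rest
thufir ceph # btrfs balance start -dusage=10 /raid
# same idea for metadata chunks
thufir ceph # btrfs balance start -musage=10 /raid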

I have one additional source of confusion about the amount of space left on the ceph filesystem, which in this case is an aggregation of 4 btrfs raids: two 16tb raid5 arrays, a 12tb raid5 array and an 8tb raid1 array.
s4e8
Guru

Joined: 29 Jul 2006
Posts: 309

PostPosted: Mon Aug 25, 2014 2:33 am

You can think of btrfs like LVM: the 4 partitions --> 4 PVs, and (System/Metadata/Data) --> 3 LVs.
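Purely to illustrate the analogy (a hypothetical LVM layout, not something you would actually set up):

Code:
# 4 partitions --> 4 PVs
pvcreate /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
# one VG, spanning all devices, plays the role of the btrfs filesystem
vgcreate data /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
# 3 LVs correspond to the System/Metadata/Data chunk pools,
# except that btrfs grows them on demand instead of at fixed sizes
lvcreate -n system   -L 64M data
lvcreate -n metadata -L 1G  data
lvcreate -n data     -L 46G data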