Gentoo Forums :: Kernel & Hardware

Disk performance issues (LVM?): 8MB/s vs 110MB/s

Kobboi (l33t)
Joined: 29 Jul 2005
Posts: 661
Location: Belgium

Posted: Tue Jun 28, 2016 7:51 am    Post subject: Disk performance issues (LVM?): 8MB/s vs 110MB/s

I have the following simple partitioning of the only disk in my system:

Code:
Device     Boot     Start       End   Sectors   Size Id Type
/dev/sda1            2048    206847    204800   100M 83 Linux
/dev/sda2          206848 134424575 134217728    64G 82 Linux swap / Solaris
/dev/sda3       134424576 976773167 842348592 401.7G 8e Linux LVM


(OK, the swap partition is a bit large.) The LVM is currently set up as

Code:
  ACTIVE            '/dev/vglermy/lvroot' [150.00 GiB] inherit
  ACTIVE            '/dev/vglermy/lvhome' [250.00 GiB] inherit



  • Both logical volumes have been formatted as ext3 using default parameters and are mounted as / and /home
  • For testing purposes, I have turned off the swap partition, formatted it as ext3 using default parameters (mke2fs -j) and mounted it as /swap (the exact steps are sketched after the mount listing below)


Code:

/dev/mapper/vglermy-lvroot on / type ext3 (rw,noatime,data=ordered)
/dev/mapper/vglermy-lvhome on /home type ext3 (rw,noatime,data=ordered)
/dev/sda2 on /swap type ext3 (rw,relatime,data=ordered)
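
In case it matters for reproducing, the repurposing of the swap partition amounted to something like this (a sketch from memory, device names as above):

Code:
swapoff /dev/sda2      # stop using the partition as swap
mke2fs -j /dev/sda2    # ext3 with default parameters
mkdir -p /swap
mount /dev/sda2 /swap  # default (relatime) mount options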


tune2fs shows no differences in filesystem parameters between swap, home and root.

Code:
tune2fs 1.43.1 (08-Jun-2016)
Filesystem volume name:   <none>
Last mounted on:          /home
Filesystem UUID:          10104d5b-0a91-4452-b657-3f68c15ff84a
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              16384000
Block count:              65536000
Reserved block count:     3276400
Free blocks:              5687581
Free inodes:              11777172
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1008
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Filesystem created:       Tue Jul  1 15:38:55 2014
Last mount time:          Tue Jun 28 08:41:34 2016
Last write time:          Tue Jun 28 08:41:34 2016
Mount count:              117
Maximum mount count:      -1
Last checked:             Fri Feb 27 13:19:51 2015
Check interval:           0 (<none>)
Lifetime writes:          5622 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
First orphan inode:       4247212
Default directory hash:   half_md4
Directory Hash Seed:      dd60e802-84b7-476e-8e23-18967dd04405
Journal backup:           inode blocks
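
If anyone wants to double-check rather than trust my eyeballing, the comparison can be scripted. A minimal sketch, using the device paths from the mount listing above (volatile fields like UUIDs, timestamps and mount counts will of course differ):

Code:
# field-by-field comparison of the two filesystems' parameters
diff <(tune2fs -l /dev/sda2) <(tune2fs -l /dev/mapper/vglermy-lvhome)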


I now compare write performance:

Code:
samsonov christophe # { echo "START"; dd if=/dev/zero of=/swap/test.bin bs=1M count=5000 ; sync ; echo "STOP";  } 2>&1 | timestamper
[09:20:44] START
[09:20:52] 5000+0 records in
[09:20:52] 5000+0 records out
[09:20:52] 5242880000 bytes (5.2 GB) copied, 7.25877 s, 722 MB/s
[09:21:31] STOP

samsonov christophe # { echo "START"; dd if=/dev/zero of=/home/test.bin bs=1M count=5000 ; sync ; echo "STOP";  } 2>&1 | timestamper
[09:21:48] START
[09:30:00] 5000+0 records in
[09:30:00] 5000+0 records out
[09:30:00] 5242880000 bytes (5.2 GB) copied, 492.14 s, 10.7 MB/s
[09:31:11] STOP


In the first case, iotop shows an actual disk write rate close to what I assume to be the physical capability of the disk (~110MB/s). In the second case, iotop shows an actual disk write rate of only ~8MB/s.
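
Note that the rates dd itself prints (722 MB/s, 10.7 MB/s) include page-cache buffering, which is why I go by iotop and the START/STOP wall clock instead. If it helps, I can redo the test with the cache bypassed entirely; a sketch (untested here):

Code:
# oflag=direct opens the output file with O_DIRECT, bypassing the page
# cache, so the rate dd reports reflects actual device throughput
dd if=/dev/zero of=/swap/test.bin bs=1M count=5000 oflag=direct
dd if=/dev/zero of=/home/test.bin bs=1M count=5000 oflag=direct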

Any ideas? :cry:
guitou (Guru)
Joined: 02 Oct 2003
Posts: 400
Location: France

Posted: Tue Jun 28, 2016 8:34 am

Hello.

I'm afraid it has to be due to LVM: writing to a logical volume requires more CPU operations.
In addition, if your drive is not an SSD, the position of a partition on the disk does affect I/O performance.

++
Gi)
Kobboi (l33t)
Joined: 29 Jul 2005
Posts: 661
Location: Belgium

Posted: Tue Jun 28, 2016 9:01 am

guitou wrote:
Hello.

I'm afraid it has to be due to LVM: writing to a logical volume requires more CPU operations.
In addition, if your drive is not an SSD, the position of a partition on the disk does affect I/O performance.

++
Gi)


Thanks for your reply. Your remarks are probably valid in general, but do you really think they explain a factor-of-12 performance difference?
frostschutz (Advocate)
Joined: 22 Feb 2005
Posts: 2970
Location: Germany

Posted: Tue Jun 28, 2016 9:53 am

pvdisplay/vgdisplay/lvdisplay? (what's the data offset and extent size used by LVM? Any other specialties involved - snapshots, thin volumes, ...?)

Also how full is the filesystem? Filesystems get slow when they get close to 100% utilization...

LVM does not normally slow devices down (it's a simple offset calculation; it makes no difference whether the offset comes from LVM or from a partition table or ..., and the same kind of calculation is done all the time when accessing RAM).

Is the disk itself fine, no reallocated sectors, ...?
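
For that last point, something along these lines would do; a sketch, assuming smartmontools is installed:

Code:
# non-zero raw values for reallocated or pending sectors are a bad sign
smartctl -A /dev/sda | grep -i -e reallocated -e pending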
Kobboi (l33t)
Joined: 29 Jul 2005
Posts: 661
Location: Belgium

Posted: Tue Jun 28, 2016 10:54 am

frostschutz wrote:
pvdisplay/vgdisplay/lvdisplay? (what's the data offset and extent size used by LVM? Any other specialties involved - snapshots, thin volumes, ...?)

Also how full is the filesystem? Filesystems get slow when they get close to 100% utilization...

LVM does not normally slow devices down (it's a simple offset calculation; it makes no difference whether the offset comes from LVM or from a partition table or ..., and the same kind of calculation is done all the time when accessing RAM).

Is the disk itself fine, no reallocated sectors, ...?


Thanks for your willingness to help. Just to be sure, I did some cleaning up:

Code:
/dev/mapper/vglermy-lvhome  246G  188G   46G  81% /home


and ran the test again

Code:
{ sync; echo "START"; sudo dd if=/dev/zero of=/home/test.bin bs=1M count=5000 ; sync ; echo "STOP";  } 2>&1 | timestamper
[12:40:09] START
[12:40:28] 5000+0 records in
[12:40:28] 5000+0 records out
[12:40:28] 5242880000 bytes (5.2 GB) copied, 18.162 s, 289 MB/s
[12:43:07] STOP


Looks like that did at least have some effect (~30MB/s: the 289 MB/s dd reports is mostly the page cache absorbing the writes, but the START-to-STOP wall clock gives 5 GB in ~178 s, or about 29 MB/s, up from ~8 MB/s), although I would expect 20% free space to be enough?
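
For the next round I'll let dd do the flushing itself, so its reported rate is meaningful on its own; a sketch:

Code:
# conv=fdatasync makes dd flush file data to disk before reporting the rate
dd if=/dev/zero of=/home/test.bin bs=1M count=5000 conv=fdatasync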

As for the LVM setup, do you see anything worrying?

Code:
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               vglermy
  PV Size               401.66 GiB / not usable 3.02 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              102825
  Free PE               425
  Allocated PE          102400
  PV UUID               c9DAgx-eGPj-B1Wy-U2Y7-nSfA-kFIw-tDjRvb
   
# vgdisplay
  --- Volume group ---
  VG Name               vglermy
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  8
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               401.66 GiB
  PE Size               4.00 MiB
  Total PE              102825
  Alloc PE / Size       102400 / 400.00 GiB
  Free  PE / Size       425 / 1.66 GiB
  VG UUID               wYWx5B-6yWg-Xl9q-mgVA-HE4S-Pzu7-kf5zCx
   
# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vglermy/lvroot
  LV Name                lvroot
  VG Name                vglermy
  LV UUID                zVOR10-w6C2-C9xC-jVWU-VDmO-QD4B-dIXcl1
  LV Write Access        read/write
  LV Creation host, time sysresccd, 2014-07-01 15:38:22 +0200
  LV Status              available
  # open                 1
  LV Size                150.00 GiB
  Current LE             38400
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/vglermy/lvhome
  LV Name                lvhome
  VG Name                vglermy
  LV UUID                08VqVm-vyA6-goYk-UhVW-f5p8-KhmU-bgbVv9
  LV Write Access        read/write
  LV Creation host, time sysresccd, 2014-07-01 15:38:31 +0200
  LV Status              available
  # open                 1
  LV Size                250.00 GiB
  Current LE             64000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
frostschutz (Advocate)
Joined: 22 Feb 2005
Posts: 2970
Location: Germany

Posted: Tue Jun 28, 2016 12:28 pm

Looks all normal to me, and the speed you showed is orders of magnitude different from before (492 seconds vs. 18 seconds)?

What kind of disk are we talking about anyway?

pvdisplay actually doesn't show the offset; does `pvs -o +pv_all` say `1st PE` == `1.00m`? (or similar, but not 0.84 or some other odd number)
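
That field can also be queried on its own; a sketch, with field names as in current lvm2:

Code:
# print the offset of the first physical extent on the PV
pvs -o +pe_start --units m /dev/sda3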

Apart from that, I have no ideas, sorry.
NeddySeagoon (Administrator)
Joined: 05 Jul 2003
Posts: 43195
Location: 56N 3W

Posted: Tue Jun 28, 2016 5:35 pm

Kobboi,

If it's a rotating-rust (magnetic) HDD, you will see a factor of 2 to 3 speed difference between the outside of the platter and the inside.
This is due to there being more sectors per track at the outside than at the inside.
The drive is 'zoned': the sectors-per-track count changes at each zone boundary.

Your tests
Code:
dd if=/dev/zero of=
are write tests; depending on the filesystem fill factor, much of the time may be spent seeking.
That says nothing about LVM.
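
You can see the zoning directly by timing raw reads at both ends of the disk, bypassing filesystem and LVM entirely. A sketch; the skip value just aims near the inner edge of this ~466GiB drive:
Code:
# outer edge (start of the disk)
dd if=/dev/sda of=/dev/null bs=1M count=1000 iflag=direct
# inner edge (end of the disk)
dd if=/dev/sda of=/dev/null bs=1M count=1000 iflag=direct skip=450000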
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.