Gentoo Forums
How do I inhibit that a large file spoils the page cache?
as.gentoo
Guru


Joined: 07 Aug 2004
Posts: 319

PostPosted: Mon Aug 06, 2018 9:36 pm    Post subject: How do I inhibit that a large file spoils the page cache?

Hello,

I have a partition containing videos and music. I won't watch the same movie(s) every day. However, when the page cache is fully used, watching a movie will evict older data from it, most probably data that is actually well suited to being kept in RAM.
I know there is a tool that can force data to be kept in the page cache ( dev-util/vmtouch ), but I need the opposite.

ZFS has the <primarycache|secondarycache>=<all|none|metadata> properties, but I wouldn't like to create a second zpool solely containing one drive.
FUSE has/had (?) a direct_io flag, but I've read that it was deactivated, and there is the overhead… I did not find a mount option like that in the ext4, jfs, reiserfs or xfs man pages.

How do I tell the kernel (if it is possible at all) to omit putting certain files or directory contents into the page cache?

Thanks in advance!


EDIT (2018-08-17): man mount.fuse states that direct_io disables use of the page cache for both reading and writing. Sorry for not pointing that out earlier.
Code:
$> man mount.fuse | grep -v NOTE | grep -A6 direct_io
       direct_io
              This option disables the use of page cache (file content  cache)
              in the kernel for this filesystem. This has several affects:

       1.     Each  read(2)  or write(2) system call will initiate one or more
              read or write operations, data will not be cached in the kernel.



Last edited by as.gentoo on Fri Aug 17, 2018 6:53 am; edited 8 times in total
VinzC
Watchman


Joined: 17 Apr 2004
Posts: 5021
Location: Dark side of the mood

PostPosted: Fri Aug 10, 2018 1:27 pm

Just being curious here.

From my (admittedly limited) technical understanding of file systems, I don't expect the page cache to be directly involved in caching files from the disk. The filesystem cache, however, is. Did you mean that cache, BTW? It is involved in write caching, and you can turn it off with mount -o sync. But I guess you're referring to read caching, right?

There's also buffering, which is always needed, if only to accommodate disk latencies, and I doubt you could safely disable it without noticeable performance issues. From my understanding of I/O, the Linux kernel determines how much memory it needs and which pages to evict, possibly involving the swap file, according to what memory is used. Memory usage is adjusted continuously with the system's load, usage and so on. And if buffering requires the page cache, well, then let it be.

Next, I don't expect huge files to be cached in large chunks. Megabytes maybe, but gigabytes, probably not. Instead you can expect caching in small portions, usually 4-KB pages. And even then, the filesystem cache is cleared by the kernel at given checkpoints (correct me if I'm wrong), depending on the I/O scheduler you've selected in your kernel configuration and the mount options you've chosen.

I may be totally wrong, though so please educate me if so ;-) .

I'd like to ask: what makes you believe the page cache is involved at all and why do you believe that is a problem?
_________________
Gentoo addict: tomorrow I quit, I promise!... Just one more emerge...
1739!
as.gentoo
Guru


Joined: 07 Aug 2004
Posts: 319

PostPosted: Mon Aug 13, 2018 9:12 pm

Hello, thanks for replying :)

https://en.wikipedia.org/wiki/Page_cache reads:
Quote:
In computing, a page cache, sometimes also called disk cache,[2] is a transparent cache for the pages originating from a secondary storage device such as a hard disk drive (HDD). The operating system keeps a page cache in otherwise unused portions of the main memory (RAM), resulting in quicker access to the contents of cached pages and overall performance improvements. A page cache is implemented in kernels with the paging memory management, and is mostly transparent to applications.

Quote:
But I guess you're referring to read-caching, right?
correct

BTW: filesystems are on the same layer as the page cache but are separate entities: https://upload.wikimedia.org/wikipedia/commons/3/30/IO_stack_of_the_Linux_kernel.svg - so which standard filesystems implement read caching as a feature of their own (apart from ZFS)?
I know there's CONFIG_FSCACHE in the kernel .config, but I haven't seen it activated by default or by hand so far.
The description of "CONFIG_BCACHE" is
Quote:
Allows a block device to be used as cache for other devices; uses a btree for indexing and the layout is optimized for SSDs.
As I understand it, that's another cup of tea and maybe not what you meant?!

Quote:
Next I don't expect huge files to be cached in large chunks. Megabytes maybe but gigabytes, probably not. Instead you can expect caching in small portions, usually 4-KB pages.
Are you sure? Here's what I see:
Code:
#> free -h | grep -Ei 'buff/cache|mem'
              total        used        free      shared  buff/cache   available
Mem:            62G        2.5G         49G        337M         10G         59G
#> tar --create --file /mnt/rdisk/test.tar /usr/src
tar: Removing leading `/' from member names
#> free -h | grep -Ei 'buff/cache|mem'
              total        used        free      shared  buff/cache   available
Mem:            62G        2.4G         44G        2.4G         15G         57G
The cache grew by approximately 5 GB.
For me it's not so important how large the chunks in memory are; I want to prevent a huge file from driving all the cache-worthy files out of the cache.
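If it helps, the growth can be measured in a script too; here's a small sketch that reads the Cached figure from /proc/meminfo (the file path is just a placeholder):

```shell
# measure how much the page cache grows while reading a file once
before=$(awk '/^Cached:/ {print $2}' /proc/meminfo)   # cached kB before
cat /mnt/media/somebigfile > /dev/null                # read the file
after=$(awk '/^Cached:/ {print $2}' /proc/meminfo)    # cached kB after
echo "page cache grew by $(( (after - before) / 1024 )) MiB"
```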
Quote:
And even then the filesystem cache is cleared by the kernel at given checkpoints, (correct me if I'm wrong) depending on the I/O scheduler, which you've selected in your kernel configuration, and the mount options you've selected.
Checkpoints? Clearing the read cache in intervals? What would be the sense in that? AFAIK the cache holds data until it's overwritten or the box is powered down (or you instruct the kernel to drop caches, e.g. 'echo 3 > /proc/sys/vm/drop_caches'). There are checkpoints and the like, but I think those refer to the write cache, which is implemented by the file systems.

Quote:
I'd like to ask: what makes you believe the page cache is involved at all and why do you believe that is a problem?
Everything I have read so far supports the page cache being involved.
Code:
$> man free
[…]
buffers
     Memory used by kernel buffers (Buffers in /proc/meminfo)
cache
     Memory used by the page cache and slabs (Cached and Slab in /proc/meminfo)

As written above, I do not want big files like archives or videos to kick data that is requested often out of the page cache. So ideally I could tell the kernel not to cache such files. vmtouch can set an upper size limit regarding cached files:
Code:
vmtouch [OPTIONS] ... FILES OR DIRECTORIES ...
[...]
      -m <max file size>
           Maximum file size to map into virtual memory. Files that are larger than this will be skipped. Examples: 4096, 4k, 100M, 1.5G. The default is 500M.
However, this only refers to files that are given at execution time, so all new files will be ignored. Files can also grow beyond that size after vmtouch has run. Okay, a cron script may help here, but if it runs every 10 minutes, there is still plenty of time to fill the cache with undesired data.
There is a daemon mode for vmtouch, but it only locks files into the page cache; it does not prevent them from being stored there.
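The closest thing I found is vmtouch's evict mode ('-e', if I read the man page correctly), which actively pushes file contents out of the page cache, so a cron job could at least undo the damage periodically:

```shell
# evict everything below the media mount point from the page cache
# ('-e' = evict mode, '-q' = quiet, as documented in the vmtouch man page)
vmtouch -e -q /mnt/media
```

Running this e.g. every 10 minutes still leaves a window in which the cache is polluted, of course.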

I doubt that I can educate much here.
Since yours is the only reply in quite a while, I guess there is not much I can do about this "issue". I am planning a new system and I want to make it fast, and reading data from RAM is orders of magnitude faster than reading it from HDD or SSD.

I might be wrong about some things I wrote.
Ant P.
Watchman


Joined: 18 Apr 2009
Posts: 5917

PostPosted: Mon Aug 13, 2018 11:36 pm

I can't remember where I read this, but IIRC the page cache is supposed to recognise sequential reads of large files and evict them early for exactly this reason. I can't find any tunables related to this in /proc/ or /sys/ though, and I'm wondering whether it really does this.
as.gentoo
Guru


Joined: 07 Aug 2004
Posts: 319

PostPosted: Tue Aug 14, 2018 9:56 pm

I think your information is related to the device-mapper cache described here: /usr/src/linux/Documentation/device-mapper/cache-policies.txt
Quote:
Message and constructor argument pairs are:
'sequential_threshold <#nr_sequential_ios>'
[…]
The sequential threshold indicates the number of contiguous I/Os
required before a stream is treated as sequential. Once a stream is
considered sequential it will bypass the cache.
The random threshold
is the number of intervening non-contiguous I/Os that must be seen
before the stream is treated as random again.
[…]
Examples
========
[…]
dmsetup message <mapped device> 0 sequential_threshold 1024


By the way, in this context: is reading a fragmented file still considered sequential or not?
Ant P.
Watchman


Joined: 18 Apr 2009
Posts: 5917

PostPosted: Wed Aug 15, 2018 11:36 am

I'm pretty sure it doesn't care about the on-disk layout in that case, only whether the read() offsets increase monotonically. There'd probably be a lot of complaints about non-deterministic performance if it were the opposite.
VinzC
Watchman


Joined: 17 Apr 2004
Posts: 5021
Location: Dark side of the mood

PostPosted: Thu Aug 16, 2018 5:16 am

as.gentoo wrote:
Hello, thanks for replying :)

You're welcome :-) .

Code:
#> free -h | grep -Ei 'buff/cache|mem'
              total        used        free      shared  buff/cache   available
Mem:            62G        2.5G         49G        337M         10G         59G
#> tar --create --file /mnt/rdisk/test.tar /usr/src
tar: Removing leading `/' from member names
#> free -h | grep -Ei 'buff/cache|mem'
              total        used        free      shared  buff/cache   available
Mem:            62G        2.4G         44G        2.4G         15G         57G

Holy crap 8O ! 62 Gig RAM!? That's something!

as.gentoo wrote:
For me it's not so important how large the chunks in memory are but to prevent a huge file drive away all the cache-worthy files from the cache.

Okay, I didn't understand why you'd stress that particular point, but now I do. From my understanding, caching is managed in a smart way by the kernel, i.e. it won't cache in a way that is detrimental to the system. And with that much RAM in your system, it's no surprise the kernel finds enough memory to cache a huge file. In short, the kernel adapts its memory requirements and "consumption" to the context: big memory, big cache.

So I guess it probably won't be a problem even if your files get cached entirely. Remember the cache is also beneficial to media seeking, and media players can seek a lot; I'm not sure reading a video file is a fully sequential operation, but I may be wrong. So one question is what proportion of a file you would like to see cached before it becomes a performance hit, given that you probably don't want to eliminate caching/buffering entirely, for performance reasons.

And again, if an application stresses the system, the kernel *will* know what pages to evict to give juice to the demanding applications.

EDIT: BTW, see your example; the cache grew by 5 GB but the available memory shrank by only 2 GB, which proves the point: the kernel doesn't bluntly allocate memory for caching but flushes out what needs to be flushed when appropriate. So by the time an application requires some memory, the portions that were used for caching a video file will most probably be recycled for the newcomer.

So my take is you probably wouldn't need to care about that. But that's only my opinion.
as.gentoo
Guru


Joined: 07 Aug 2004
Posts: 319

PostPosted: Thu Aug 16, 2018 4:43 pm

VinzC wrote:
Code:
#> free -h | grep -Ei 'buff/cache|mem'
              total        used        free      shared  buff/cache   available
Mem:            62G        2.4G         44G        2.4G         15G         57G

Holy crap 8O ! 62 Gig RAM!? That's something!
RAM isn't that expensive anymore. :)
I bought it (among other reasons) because zpaq sometimes wants to use more than the 32 GB RAM I had before, and the OOM killer sometimes killed other running programs; IIRC it was the X server at least once. I like having the browser running with a lot of (>100) open tabs. LibreOffice is also running with some open documents, there is the tmpfs, the video/audio player, and my XDM uses a lot of RAM. I haven't checked the footprint of games running with Wine, but some have compressed textures/… on disk and some prefetch data as well. According to the specs, 'Divinity OS:2' likes to use 8 GB RAM. And sometimes a virtual machine is running.

VinzC wrote:
And with that much RAM in your system, no surprise the kernel finds enough memory to cache a huge file. In short the kernel is auto-adapting its memory requirements and "consumption" to the context: big memory, big cache.
Right, but the question remains: what happens when the page cache is "full"? AFAIK old data is thrown out and new data (no matter which) is put in.

VinzC wrote:
given that you probably don't want to totally eliminate caching/buffering for performance reasons.
I totally agree, only large files that are rarely used shall not displace executables and so on.

VinzC wrote:
And again, if an application stresses the system, the kernel *will* know what pages to evict to give juice to the demanding applications.
I really hope so. I have backups (dd or tar with compression) as well as virtual machine images (I use the VMs only rarely), with file sizes up to 769 GB.
From what I have read so far, the LRU (Least Recently Used) algorithm is used: it "Discards the least recently used items first." @ Wikipedia
I read https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ (section "Traditional Caches"), and it made me think that the page cache uses the LRU algorithm. Maybe the algorithm was extended by the kernel devs and does some kind of statistics; then maybe there is a way to influence what is cached and what is not (beyond vmtouch). That's what I'm interested in.
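The only related knob I came across so far is vm.vfs_cache_pressure, and if I understand the documentation correctly it only biases reclaim of filesystem metadata (dentries/inodes) against page cache data; it cannot single out particular files:

```shell
# how aggressively the kernel reclaims dentry/inode caches relative to
# page cache data; the default is 100, higher values reclaim metadata sooner
cat /proc/sys/vm/vfs_cache_pressure

# changing it is system-wide (root required), e.g.:
# echo 50 > /proc/sys/vm/vfs_cache_pressure
```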

VinzC wrote:
EDIT: BTW see your example; the cache grew by 5 Gigs but the available memory shrunk by only 2 Gigs, which proves the point: the kernel doesn't bluntly allocates memory for caching but flushes out what needs to be when appropriate. So by the time an application requires some memory, portions of it, which were used for caching a video file will most probably be recycled for the newcomer.
That may be a coincidence. I tried to 'cat' a 64 GB file to /dev/zero; here's what 'free' reported:
Code:
$> llh 20180313.dd
-rw-r--r-- 1 xxx xxx 64G Mar 13 01:02 20180313.dd
$> cat 20180313.dd > /dev/zero
$> free -h | grep Mem
                    total        used        free      shared  buff/cache   available
Mem:            62G        2.1G         57G         46M        2.8G         60G
Mem:            62G        2.1G         41G         46M         19G         60G
Mem:            62G        2.1G         20G         45M         39G         60G
Mem:            62G        2.1G        322M         45M         60G         60G
The first entry is from before I started cat, and the last from right after cat finished.
So the cache grew to its maximum capacity.

EDIT: This means either that the whole content of the page cache was replaced, or that the previously cached ~3 GB were kept and the file being read filled only the rest of those 60 GB.

Quote:
So my take is you probably wouldn't need to care about that. But that's only my opinion.
Which I appreciate! However, there are a few discrepancies with what I read some time ago. :?
as.gentoo
Guru


Joined: 07 Aug 2004
Posts: 319

PostPosted: Thu Aug 16, 2018 7:51 pm

Looks like all previously cached data is replaced. :-\

Code:
# 100% of the 290 MB file that was last read/written is in the cache
$> vmtouch -v xxx.tar.xz
[OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO] 74366/74366
           Files: 1
     Directories: 0
  Resident Pages: 74366/74366  290M/290M  100%
         Elapsed: 0.004817 seconds

# request how much of the big, previously read file is still in the page cache -> 57G of 64G =~90%
$> vmtouch -v bigfile.dd
[o oooooOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO] 15076704/16777216
           Files: 1
     Directories: 0
  Resident Pages: 15076704/16777216  57G/64G  89.9%
         Elapsed: 0.55825 seconds

$> free -h
              total        used        free      shared  buff/cache   available
Mem:            62G        2.2G        483M         46M         60G         60G

# read the big file again
$> cat bigfile.dd > /dev/zero

# check how much of the file is in the page cache now -> 58G of 64G =~91%
$> vmtouch -v bigfile.dd
[o oooooooooooOOOOOOoOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO] 15243121/16777216
           Files: 1
     Directories: 0
  Resident Pages: 15243121/16777216  58G/64G  90.9%
         Elapsed: 0.6049 seconds

# check how much of the first file remained in the cache -> 0%
$> vmtouch -v xxx.tar.xz
[                                                            ] 0/74366
           Files: 1
     Directories: 0
  Resident Pages: 0/74366  0/290M  0%
         Elapsed: 0.001364 seconds

$> free -h
              total        used        free      shared  buff/cache   available
Mem:            62G        2.2G        450M         46M         60G         60G

EDIT 1: There might be an error in my reasoning. If so, what is it?

EDIT 2:
Apps can determine whether a file is cached or not.
Code:
$> sudo dd if=bigfile.dd of=/dev/null iflag=direct  # <--- direct IO, no caching!

$> vmtouch -v bigfile.dd
[                                                            ] 0/16777216
           Files: 1
     Directories: 0
  Resident Pages: 0/16777216  0/64G  0%
         Elapsed: 0.20301 seconds


If apps can tell the kernel not to cache data, then at least a "wrapper" could rewrite the access method from "cached I/O" to "direct (uncached) I/O". Not nice, I know.
IMHO it would be nice if the kernel - or the filesystem - allowed defining files, or the files in certain directories, as direct I/O, ignoring the default behaviour (a plain read doesn't tell the kernel to omit caching, so everything is put into the page cache).
If the data is not found in the cache, a page fault is issued and the file is read from the drive - which is what already happens. Apart from that, it would be mostly transparent to the apps.
I know that letting apps determine "what's done which way" has its reasons (e.g. prefetching…), but what speaks against overriding that in other cases, e.g. when it is known that putting data into the page cache will evict repeatedly requested content? tar and xz don't care whether data is put into the page cache when a directory is compressed; it's highly unlikely that a drive backup will be read/uncompressed anytime soon.
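Until then, the closest thing to such a wrapper I know of is GNU dd's nocache flag (available in reasonably recent coreutils, if I'm not mistaken), which asks the kernel to discard the cached pages again once they have been transferred (the paths here are just examples):

```shell
# write a backup without leaving its data in the page cache:
# 'oflag=nocache' makes dd advise the kernel to drop the written pages again
tar --create /usr/src | dd of=/mnt/backup/src.tar oflag=nocache status=none

# likewise, drop the cached pages of a file that was already read:
dd if=/mnt/backup/src.tar of=/dev/null iflag=nocache status=none
```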

EDIT: Some minor edits...
VinzC
Watchman


Joined: 17 Apr 2004
Posts: 5021
Location: Dark side of the mood

PostPosted: Sun Aug 19, 2018 10:27 am

as.gentoo wrote:
[...] the questions remains: what happens when the page cache is "full"

Sort of the same question as "what if a circular buffer gets full?". As you've probably found out, data pages are evicted from the cache when it runs short of "disposable" memory or when the cache is "full". It's worth noting that caching is not supposed to trigger swapping; at least I don't think so, but I don't know the system's technicalities in enough detail to be 100% certain. The cache uses what memory remains after allocating RAM for processes, kernel modules and so on. I guess if swapping occurs, then caching is minimal, i.e. there's not much left for caching since the system is already running out of physical RAM. But I'm digressing.

As for your other questions, I don't really have an idea as that's where my knowledge ends.
Goverp
l33t


Joined: 07 Mar 2007
Posts: 706

PostPosted: Mon Aug 20, 2018 8:27 am

As no-one seems to have mentioned it yet, I shall:
swappiness
_________________
Greybeard
VinzC
Watchman


Joined: 17 Apr 2004
Posts: 5021
Location: Dark side of the mood

PostPosted: Tue Aug 21, 2018 4:23 am

Goverp wrote:
As no-one seems to have mentioned it yet, I shall:
swappiness

"Why" is not intuitive nor obvious, could you develop, please?
Goverp
l33t


Joined: 07 Mar 2007
Posts: 706

PostPosted: Tue Aug 21, 2018 10:41 am

Good question. I mentioned swappiness because lots of articles on the web suggest decreasing swappiness to "cure" the impact of processing large files on other applications. I was parroting that answer, but it may not be correct.

Swapping certainly has a big impact on a swapped application; you obviously have to wait for it to be swapped in before you get a response. But I was assuming the problem expressed in the original post was poor response time. Actually, rereading it, I'm not sure what the actual problem is, or even whether there is one.

It's normal operation for old pages to be kicked out of the page cache when something new comes along. Why is that a problem? If there's a response-time problem, then OK, there may be an issue with caching a huge file when we know each block of its data will be processed once and once only, so there's no need to cache it at all; but do we know it's actually causing a problem? As mentioned above, the kernel should recognise a file being read sequentially, so it's likely only leaving it in cache because either that algorithm isn't working here or there's no pressure to use the cache for the other stuff. Why keep all the browser tabs memory-resident when there's a limit to how fast you can switch between them and read the results? Whereas you might pause and rewind a video to see "did that really happen?".

Actually, I do wonder if swappiness is too high. If browser tabs count as swappable units (Chromium sandbox?), having that many open while reading a big file might cause a tab that hasn't been touched for ages to be swapped out. Or indeed almost anything might cause swapping if the tab hasn't been hit for an hour. And that would cause a noticeable response lag when you next come back to it.
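For the record, here's how to check and lower it; values range from 0 to 100, and lower values make the kernel prefer dropping page cache over swapping out application pages, if I remember the semantics right:

```shell
# show the current swappiness (the usual default is 60)
cat /proc/sys/vm/swappiness

# lower it at runtime (root required); add vm.swappiness=10 to
# /etc/sysctl.conf to make the change permanent
# sysctl -w vm.swappiness=10
```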
VinzC
Watchman


Joined: 17 Apr 2004
Posts: 5021
Location: Dark side of the mood

PostPosted: Tue Aug 21, 2018 2:30 pm

Goverp wrote:
It's normal operation for old pages to be kicked from the page cache when something new comes along. Why is that a problem?

Good question as well.

Now doubt strikes me...
as.gentoo wrote:
FUSE has/had (?) a direct_io flag but I've read that it was deactivated and there is the overhead… I did not find a mount option like that in the ext4, jfs, reiserfs and xfs man pages.

Do you (as.gentoo) use FUSE? I mean is all your question related to FUSE from the start?

If yes, then there's a good reason to see some lagging. And it's not due to caching at all.

But first off, FUSE is only a workaround. I know it's old news, but Linus Torvalds himself tends to consider FUSE little more than a toy. Well, that's his own opinion. But FUSE is still FUSE, right?

From what I understand, FUSE does not come without drawbacks, the first of which is that it is context-switching intensive due to disk I/O. That has a negative impact on performance. And seeing direct_io featured in FUSE makes me think it's more of a hack than anything else. Let's keep in mind FUSE is mostly there when there's no kernel module for the "protocol" (in the widest sense) in question. And that's true for ZFS - you know, licensing... *facepalm*

So with FUSE impeding performance by design, it's only logical to see some caching featured to compensate for the performance hit. However, caching with FUSE is yet another workaround for a workaround, and disabling caching with FUSE is but another hack... on a hack, on a hack... See how dirty it's getting?

In short: don't FUSE where a kernel-native module is available. Now I guess, if that's the case, you're using FUSE because it's the only option left. I'd still keep away from FUSE for production-type or disk-intensive operations if it were up to me, for I never saw buffering/caching become an issue on the (admittedly limited set of) systems I have used. But that might just be me, I agree.

And sorry for the noise if I'm well off-topic...
1clue
Advocate


Joined: 05 Feb 2006
Posts: 2562

PostPosted: Tue Aug 21, 2018 4:43 pm

What we really need is for browsers to ask the kernel to inhibit caching by MIME type.

That would of course apply to any other app that uses video files. Doing this entirely at the kernel level would require some number of bytes to be read (and thus cached) before the stream could be classified, whereas the user app can easily recognise a data stream that should not be cached.

Yes I understand that would mean that all the user apps which deal with these big files would have to get on board, and it would be like herding cats.
VinzC
Watchman


Joined: 17 Apr 2004
Posts: 5021
Location: Dark side of the mood

PostPosted: Tue Aug 21, 2018 5:47 pm

1clue wrote:
What we really need is for browsers to ask the kernel to inhibit caching by mime type.

Hmm... that's unlikely to happen. MIME type is an operating-system concept, which the kernel [fortunately] is unaware/agnostic of - I've grepped the entire kernel tree to be 100% sure. On GNU/Linux, glibc would be responsible for determining a file's MIME type and forwarding the request to the kernel. That said, I don't really see how it could be possible without breaking the existing API/ABI (e.g. memory attributes passed with heap allocation), or at least without rewriting all the applications involved in such an arbitrary caching restriction. And I'm not even talking about the interface to such a feature.

I might be very wrong though. But...

1clue wrote:
That of course would apply to any other app that uses video files, but doing this at the kernel level entirely would require some number of bytes to be cached, where a known data stream which should not be cached would be easily recognizable by the user app.

Yes I understand that would mean that all the user apps which deal with these big files would have to get on board, and it would be like herding cats.

... even then, all this is just *one* particular case in which caching is not desired... based on something that is not necessarily a problem. The kernel team would, IMHO, see little to no incentive to implement such a mechanism, as
  • as.gentoo's case is (at least so far) one among many and
  • this is such a special case that it's not really proven there's an issue at all and
  • there's no data collected from a wide audience (at least none that I'm aware of) that would confirm there's even an issue with buffering/caching and
  • most importantly, the kernel developers can be trusted to know better than anyone else what to do with caching, when and how.
I'm not trying to minimize as.gentoo's case; I'm rather thinking aloud about why such an ignore-pages-based-on-MIME-type mechanism has almost no chance of being implemented, least of all as a kernel feature.

But I might be wrong and I stand to be corrected if so.
1clue
Advocate


Joined: 05 Feb 2006
Posts: 2562

PostPosted: Tue Aug 21, 2018 7:37 pm

VinzC wrote:
1clue wrote:
What we really need is for browsers to ask the kernel to inhibit caching by mime type.

Hmm... that's unlikely to happen. MIME type is an operating system concept, which the kernel [fortunately] is unaware/agnostic of — I've grepped the entire kernel tree to be 100% sure. On GNU/Linux glibc would be responsible for determining a file's MIME type and forwarding the request to the kernel. That said, I don't really know how it could be possible without breaking the existing API/ABI (e.g. memory attributes passed with heap allocation) or at least rewriting all the applications involved with such an arbitrary caching restriction. And I'm not even talking about the interface to such a feature.

I might be very wrong though. But...


I never thought there would be recognition of the MIME type, nor do I believe there is a way to tell the kernel not to cache a certain data stream. I was only commenting that it would be helpful in a situation like this. AFAIK it would require a significant design change in the kernel, and it might introduce a whole lot of security concerns.

Quote:

1clue wrote:
That of course would apply to any other app that uses video files, but doing this at the kernel level entirely would require some number of bytes to be cached, where a known data stream which should not be cached would be easily recognizable by the user app.

Yes I understand that would mean that all the user apps which deal with these big files would have to get on board, and it would be like herding cats.

... even all this is just *one* particular case in which caching is not desired... based on something that is not necessarily a problem. The kernel team would, IMHO, see little to no incentive to implement such a mechanism as
  • as.gentoo's case (at least so far) is one over many and
  • this is a so special case that it's not really proven there's an issue with this and
  • there's no data collected on a wide audience basis (at least not that I'm aware of) that would confirm there's even an issue about buffering/caching and
  • most importantly the kernel developers can be trusted to know better than anyone else what to do with caching, when and how.
I'm not trying to minimize as.gentoo's case rather than thinking aloud why implementing such an ignore-page-based-on-MIME-type has almost no chance to be implemented and even less to be a kernel feature.

But I might be wrong and I stand to be corrected if so.


Not trying to correct anyone. I know this is an alien concept that might have extremely limited practical use.

However, I imagine there are a number of situations where an app processes large quantities of data and knows that the data need not be cached. As the OP said, RAM is very cheap now. Lots of systems process large streams which would not benefit from caching, and caching that stream data would likely interfere with other caching that may be beneficial. I'm reiterating what has already been said in this thread, so I'll stop.

My only point in posting was that it would be interesting to have a kernel hint that a stream might be unsuitable for caching when the file opens.
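For what it's worth, such a per-file hint already exists at the syscall level: posix_fadvise(2) with POSIX_FADV_DONTNEED (drop a file's cached pages) or POSIX_FADV_NOREUSE (historically a no-op on Linux). GNU dd (coreutils 8.11 and later) exposes the former as its nocache flag; a small sketch with a throwaway file:

```shell
# create a scratch file to play with
dd if=/dev/zero of=/tmp/oneshot.bin bs=1M count=8 status=none
# read it once; iflag=nocache makes dd issue
# posix_fadvise(POSIX_FADV_DONTNEED) so the file's pages
# are dropped from the page cache again after the read
dd if=/tmp/oneshot.bin iflag=nocache of=/dev/null bs=1M status=none
rm -f /tmp/oneshot.bin
```

An application streaming a video could issue the same advice itself after (or while) reading the file, which is essentially the open-time hint described above.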
VinzC
Watchman


Joined: 17 Apr 2004
Posts: 5021
Location: Dark side of the mood

PostPosted: Tue Aug 21, 2018 8:22 pm    Post subject: Reply with quote

1clue wrote:
As the OP said, RAM is very cheap now [...]

Yes. *now* .

Beware of generalizing. RAM might not be so cheap for everyone and everywhere on this planet. Linux runs on far more machines than the ones we usually think of, from huge server farms and supercomputers to the smallest OLPC in Africa or Thailand, from the biggest routers and appliances to the smallest in-pocket hub-modem or whatnot. Think of the bigger picture.

RAM being cheap is irrelevant because there are still machines which run Linux with a small amount of RAM and which are still being maintained. The Linux kernel's history didn't start today; obvious as it is, saying "RAM is cheap" forgets that hardware running Linux is not necessarily 1) PC-like, 2) like the machines we own, 3) now, and 4) forever. This use case is, I repeat, just another use case amongst many others. What seems to fit like a glove here just might not elsewhere.

All of this to say: don't be so quick to generalize ;-) .
_________________
Gentoo addict: tomorrow I quit, I promise!... Just one more emerge...
1739!
as.gentoo
Guru


Joined: 07 Aug 2004
Posts: 319

PostPosted: Thu Aug 23, 2018 9:41 pm    Post subject: Reply with quote

Quote:
"It's normal operation for old pages to be kicked from the page cache when something new comes along. Why is that a problem?"
It is a problem because each time I "use" a big file, other data which is used often gets thrown out. This defeats the point of caching (keeping often-used data somewhere faster than the HDD/SSD). It's like doing a 'drop caches' each time I use zpaq/lrzip, edit or watch a large video, and so on. The data in the cache is worthless after that one action with a big file. And at least for me that happens often! I'm not sure how to describe this better.
In comparison, ZFS keeps track of whether a file (actually its blocks) is used often and keeps it in its cache, even when a big file that is only used once comes along. If the big file doesn't fit into the remaining "free" cache it is not stored in full, so cache-worthy data stays in the cache all the time!
Quote:
I read this: https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ (section "Traditional Caches") and it made me think that the page cache uses an LRU algorithm. Maybe the kernel devs extended the algorithm with some kind of statistics; then maybe there is a way to influence what is cached and what isn't (beyond vmtouch).
Quote:
As mentioned above, the kernel should recognise reading a file sequentially, so its likely only leaving it in cache 'cos either that algorithm's not working here
I thought this did not refer to the page cache. See above; I'm open to reading a document that states that the sequential-read handling applies to the page cache as well, so please point me to the right place!
And I like to keep browser tabs open for fast access; that's a matter of choice…
Quote:
Whereas you might pause and backspace a video to see "did that really happen?"
I wonder if that was really meant as an argument. It's better to re-read the occasionally rewound data from the drive than other data that's really used often…
I didn't change my swappiness setting, so it's at the default. Nothing seems to be swapped out, at least judging by Firefox growing to (and staying at) 4G of RAM. When I restart Firefox it uses approximately 1G and grows as I use it.
Quote:
Do you (as.gentoo) use FUSE? I mean is all your question related to FUSE from the start?
No. FUSE was an example showing that you can influence cache usage, i.e. that there are ways to influence caching with the "right" configuration (at the kernel, FS or application level). You can obviously do that for the page cache with vmtouch, so there must have been a reason why somebody created that program.
Quote:
most importantly the kernel developers can be trusted to know better than anyone else what to do with caching, when and how.
So every "(not-)new" idea is implemented? No space for "we have to do more important things now" or "(for us) plain LRU is good enough"?

I asked if there is a way to influence the cache in the way I think makes sense. If the answer is "no" (or "yes, you need to do…") then let me hear it; that would help me most right now. As I wrote, the behaviour I'd like to see is already implemented in ZFS at least, so it is neither my own idea nor a senseless one!
You can find more page-caching algorithms here: https://en.wikipedia.org/wiki/Page_replacement_algorithm . I bet it wasn't out of boredom that somebody took the time to elaborate them. No, I'm not the author.
Quote:
And that's true for ZFS — you know, licensing... *facepalm*
I have absolutely no clue what role licensing plays here!
Quote:
As mentioned above, the kernel should recognise reading a file sequentially, so its likely only leaving it in cache 'cos either that algorithm's not working here, or there's no pressure to use the other stuff cached.
"Should" is not important here; only "is" or "is not" matters now.
1clue
Advocate


Joined: 05 Feb 2006
Posts: 2562

PostPosted: Fri Aug 24, 2018 1:28 am    Post subject: Reply with quote

VinzC wrote:
1clue wrote:
As the OP said, RAM is very cheap now [...]

Yes. *now* .

Beware of generalizing. RAM might not be so cheap for everyone and everywhere on this planet. Linux runs on far more machines than the ones we usually think of, from huge server farms and supercomputers to the smallest OLPC in Africa or Thailand, from the biggest routers and appliances to the smallest in-pocket hub-modem or whatnot. Think of the bigger picture.

RAM being cheap is irrelevant because there are still machines which run Linux with a small amount of RAM and which are still being maintained. The Linux kernel's history didn't start today; obvious as it is, saying "RAM is cheap" forgets that hardware running Linux is not necessarily 1) PC-like, 2) like the machines we own, 3) now, and 4) forever. This use case is, I repeat, just another use case amongst many others. What seems to fit like a glove here just might not elsewhere.

All of this to say: don't be so quick to generalize ;-) .


IMO you're generalizing in the other direction.

Most people don't have this much RAM, so this issue does not come up for them.

The scenario being described here comes about because the OP has a large amount of RAM compared to most desktops. By modern server standards, however, it's an unremarkable amount, and server hardware that can take multiple terabytes is readily available. What I was saying is that this situation seems uncommon to people building desktop systems, but considering the available hardware it's not a stretch to think it may be more common than we would first guess.
Goverp
l33t


Joined: 07 Mar 2007
Posts: 706

PostPosted: Fri Aug 24, 2018 8:14 am    Post subject: Reply with quote

as.gentoo wrote:
Quote:
"It's normal operation for old pages to be kicked from the page cache when something new comes along. Why is that a problem?"
It is a problem because each time I "use" a big file then other data which is used often is thrown out. This makes the caching - keep often used data in a faster storage place than on HDD/SSD - obsolete. It's like doing a 'drop caches' each time I used zpaq/lrzip, edit or watch a large video and so on. The data in the cache is worthless after doing that one action with a big file. And at least for me that happens often! I'm not sure how to describe this better.
...

To repeat my question, why is this a problem? What are the symptoms? If it's that you don't like the numbers, there's an easy fix involving awk :-)
I presume the issue is degraded response time, throughput or performance, but you don't say. If not, then it's just that you don't like the algorithm. Fine, write a different one.
If it is one of those, could I suggest trying swappiness = 30? I've no experience of playing with it, but it's a trivial thing to change and see if it makes a difference.
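For reference, checking and changing swappiness is trivial (the value 30 here is just the suggestion above, not a recommendation):

```shell
# current value; the kernel default is 60
cat /proc/sys/vm/swappiness
# lower it for this boot (as root); add "vm.swappiness = 30" to
# /etc/sysctl.conf to make the change persistent
#sysctl -w vm.swappiness=30
```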

I'm also confused as to why, if ZFS does this the way you want, you think it's a kernel issue. Surely, that implies it's a file system issue.
_________________
Greybeard
VinzC
Watchman


Joined: 17 Apr 2004
Posts: 5021
Location: Dark side of the mood

PostPosted: Fri Aug 24, 2018 9:20 am    Post subject: Reply with quote

VinzC wrote:
And that's true for ZFS — you know, licensing... *facepalm*
as.gentoo wrote:
I have absolutely no clue what role licensing plays here!

I was referring to why ZFS is not, and won't be, included in the kernel: the main reason (to summarize) is the license incompatibility between ZFS and the Linux kernel. This is also why [legally] using ZFS under GNU/Linux is mostly done through FUSE... with the drawbacks I mentioned.

VinzC wrote:
most importantly the kernel developers can be trusted to know better than anyone else what to do with caching, when and how.

as.gentoo wrote:
So every "(not-)new" idea is implemented? No space for "we have to do more important things now" or "(for us) plain LRU is good enough"?

:wink:

Well, not quite what I meant. Just that in the matter of caching data from disks we can trust kernel developers for their implementations (with the constraints that are theirs, of course).

However I still fail to understand why this matters so much to you, other than « because it's implemented in ZFS » and you consider that implementation worth paying attention to. I'm confident you probably have strong technical grounds, but do you have data to back up your claims? Do/did you see an impact on performance from the current caching algorithms you're pointing at?

as.gentoo wrote:
I asked if there is a way to influence the cache in the way I think makes sense. If the answer is "no" (or "yes, you need to do…") then let me hear it; that would help me most right now.

I for sure don't have the answer to this question. OTOH I'd expect this discussion to be far more profitable to you if you submitted your thoughts to the kernel/filesystem development list directly.
_________________
Gentoo addict: tomorrow I quit, I promise!... Just one more emerge...
1739!
tholin
Apprentice


Joined: 04 Oct 2008
Posts: 177

PostPosted: Fri Aug 24, 2018 11:01 am    Post subject: Reply with quote

as.gentoo wrote:
In comparison, ZFS keeps track of whether a file (actually its blocks) is used often and keeps it in its cache, even when a big file that is only used once comes along. If the big file doesn't fit into the remaining "free" cache it is not stored in full, so cache-worthy data stays in the cache all the time!

I don't know a lot about ZFS. Perhaps it keeps this information on disk so it persists across reboots? If not, what you describe is very similar to what Linux is doing. The page cache is not just a dumb LRU: it takes into account how many times a page has been requested and prioritizes pages accordingly.
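You can glimpse this in /proc/meminfo: file-backed pages referenced once sit on an inactive list, and a second reference promotes them to an active list that the kernel evicts only under stronger memory pressure.

```shell
# file pages referenced once live on the inactive list; a second
# reference promotes them to the active list, which is evicted last
grep -E '^(Active|Inactive)\(file\)' /proc/meminfo
```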

Let's try a real life example. I have a script for grepping all my *.srt (subtitle) files for a given string. I use that for language studies. There are thousands of small *.srt files so I would like to keep them in cache if possible.

Code:
# echo 3 > /proc/sys/vm/drop_caches
$ time ls_vocab lolololololol

real    0m14.314s
user    0m0.746s
sys     0m0.899s
$ time ls_vocab lolololololol

real    0m1.044s
user    0m0.630s
sys     0m0.735s
$ fincore *srt
  RES PAGES   SIZE FILE
 100K    25  97.6K 120420 Gachi-Gase ep01.srt
  60K    15  56.9K 120427 Gachi-Gase ep02.srt
  48K    12  46.5K 120511 Gachi-Gase ep03.srt
  60K    15  57.1K 120518 Gachi-Gase ep04.srt
  52K    13  49.2K 120525 Gachi-Gase ep05.srt
  48K    12  46.2K 120601 Gachi-Gase ep06.srt
$ cat ../Video/Movies/Awesome.Movie.BluRay.1080p.DTS.x264.mkv > /dev/null
$ fincore ../Video/Movies/Awesome.Movie.BluRay.1080p.DTS.x264.mkv
  RES   PAGES  SIZE FILE
11.4G 2975326 17.2G ../Video/Movies/Awesome.Movie.BluRay.1080p.DTS.x264.mkv
$ fincore *srt
  RES PAGES   SIZE FILE
 100K    25  97.6K 120420 Gachi-Gase ep01.srt
  60K    15  56.9K 120427 Gachi-Gase ep02.srt
  48K    12  46.5K 120511 Gachi-Gase ep03.srt
  60K    15  57.1K 120518 Gachi-Gase ep04.srt
  52K    13  49.2K 120525 Gachi-Gase ep05.srt
  48K    12  46.2K 120601 Gachi-Gase ep06.srt
$ time ls_vocab lolololololol

real    0m1.119s
user    0m0.661s
sys     0m0.651s


ls_vocab was slow the first time, but once the data was in the cache it became fast. I only have 16G of RAM, so reading that big movie should have replaced the entire cache, but it didn't: the srt files were still cached afterwards. That's because I had read them twice, so the kernel prioritized those pages over the movie, which had only been read once.

Here is a longer explanation of how the page cache replacement algorithm works: https://youtu.be/xxWaa-lPR-8?t=815

All cache replacement algorithms use heuristics (guessing). The kernel can't read minds or time-travel, so it will always make some mistakes. That means what you want to accomplish makes sense and the problem you are describing is real (though not to the extent you think). I don't know of any good solutions besides the hacks you already know of.
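The experiment above is easy to repeat with a throwaway file and util-linux's fincore (available since util-linux 2.30):

```shell
# create a 4 MiB scratch file and pull it into the page cache
dd if=/dev/zero of=/tmp/cached.bin bs=1M count=4 status=none
cat /tmp/cached.bin > /dev/null
# the RES column shows how much of the file is resident in the cache
fincore /tmp/cached.bin
rm -f /tmp/cached.bin
```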

as.gentoo wrote:
Quote:
As mentioned above, the kernel should recognise reading a file sequentially, so its likely only leaving it in cache 'cos either that algorithm's not working here, or there's no pressure to use the other stuff cached.
Should is not important here, only "is" or "is not" does matter now.

The page cache replacement algorithm only cares about whether a page has been used more than once and how long ago that was. The size of files and the access patterns don't really matter. If a file is read sequentially, more read-ahead is used, so it will evict a bit more of the cache, but that difference is insignificant.

1clue wrote:
The scenario being described here comes about because the OP has a large amount of RAM

No, it's the opposite. The less RAM you have, the more likely it is that some valuable data in RAM will be replaced with something less valuable. A small amount of RAM makes the problem worse.
1clue
Advocate


Joined: 05 Feb 2006
Posts: 2562

PostPosted: Fri Aug 24, 2018 1:38 pm    Post subject: Reply with quote

IMO what it comes down to is that:

  1. The kernel manages the caches of various things using information it can know about those things.
  2. An application can know more than the kernel about whether the data it uses will be reused.
  3. Having more information about the nature of data allows for better caching algorithms.


The part about "write it yourself" from a few posts up made me think of a filesystem driver partially in user space, with a module that accepts hints from the end-user app but has defaults which make it act just like it does today.

It seems overly complicated and exploitable, but it might be an interesting exercise WRT the problem at hand.

Back in the days when a good programmer could code in assembly language and actually get better performance than the compiler did, we spent a lot of time thinking about code optimization. When compilers had inarguably surpassed us, we started to realize that the modules that really needed to be optimized were not necessarily the ones that we thought needed to be optimized. It may be that this issue is really not a big deal, or that it actually is. I can't tell. What's needed is real data on the "as-is" and "modified" scenarios.
as.gentoo
Guru


Joined: 07 Aug 2004
Posts: 319

PostPosted: Fri Aug 24, 2018 1:41 pm    Post subject: Reply with quote

Goverp wrote:
as.gentoo wrote:
Quote:
"It's normal operation for old pages to be kicked from the page cache when something new comes along. Why is that a problem?"
It is a problem because each time I "use" a big file, other data which is used often gets thrown out. This defeats the point of caching (keeping often-used data somewhere faster than the HDD/SSD). It's like doing a 'drop caches' each time [EDIT: before] I use zpaq/lrzip, edit or watch a large video, and so on. The data in the cache is worthless after that one action with a big file. And at least for me that happens often! I'm not sure how to describe this better.
...

To repeat my question, why is this a problem? What are the symptoms? If it's that you don't like the numbers, there's an easy fix involving awk :-)
I presume the issue is degraded response time, throughput or performance, but you don't say.
As I stated above, there are a lot of programs running on that workstation, with a lot of concurrent reads and writes. Having to read the same files from the drive again and again for no reason is most probably the cause of the lags; it's the first place I'm looking. I think it would help if data were not read again and again.
Quote:
If not, then it's just that you don't like the algorithm.

That doesn't address my question at all! It's fine to post thoughts about this topic, and I and others can learn from them, but it's a different direction from the one I'm looking in.
Please give me an answer to my specific question as well! I want to adjust THIS very screw!
VinzC already told me that (s)he doesn't know; that's fine! Maybe somebody else does.

Quote:
If it's that you don't like the numbers, there's an easy fix involving awk :-)
:-P that would really fix things.

I'm quite sure the lags are caused by waiting for the physical drives, which seek a lot (because of the concurrent reads and writes).
Apart from that, every seek, read and write ages the drives. And yes, I have already had several disks fail!
I'd like to know if I can influence the way caching is done or not and if so: how do I do that.
Quote:
If it is one of those, could I suggest trying swappiness = 30? I've no experience of playing with it, but it's a trivial thing to change and see if it makes a difference.
I already took a look into swappiness and set a value…
Quote:
I'm also confused as to why, if ZFS does this the way you want, you think it's a kernel issue. Surely, that implies it's a file system issue.
ZFS does its caching itself, but read caching "for" all other Linux filesystems is done by the kernel's page cache, and the page cache is part of the kernel. As soon as the ZFS module is loaded it is part of the kernel too (like ext4 is), at least from my POV.
And I wrote that I do not want to use ZFS for a single drive unless there is no other (safe) way to have what I want.
The kernel provides a generic way of caching data, the page cache, which is quite nice because FS devs do not have to implement read caching themselves; they get it for free.
So I was asking whether the page cache can be configured to not cache large files in general.
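As far as I know, no such per-size knob exists for ext4/xfs and friends. The closest per-file workaround is to drop a big file's pages right after the one-shot use; a hedged sketch (the function name drop_from_cache is made up, and it relies on GNU dd's nocache flag, which issues posix_fadvise(POSIX_FADV_DONTNEED)):

```shell
# evict a single file's pages from the page cache after use,
# so often-used data elsewhere in the cache survives
drop_from_cache() {
    dd if="$1" iflag=nocache of=/dev/null bs=1M status=none
}
# usage: mpv movie.mkv; drop_from_cache movie.mkv
```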