Date: Sun, 26 Jan 2003 11:31:12 -0800 (PST)
From: Matthew Dillon <dillon@apollo.backplane.com>
To: "Sean Hamilton" <sh@bel.bc.ca>
Cc: <hackers@FreeBSD.ORG>
Subject: Re: Random disk cache expiry
Message-ID: <200301261931.h0QJVCp8052101@apollo.backplane.com>
References: <000501c2c4dd$f43ed450$16e306cf@slugabed.org>
:Greetings,
:
:I have a situation where I am reading large quantities of data from disk
:sequentially. The problem is that as the data is read, the oldest cached
:blocks are thrown away in favor of new ones. When I start re-reading data
:from the beginning, it has to read the entire file from disk again. Is there
:some sort of sysctl which could be changed to induce a more random expiry of
:cached disk blocks? Wouldn't it seem logical to have something like this in
:place at all times?
:
:thanks,
:
:sh

Hi Sean. I've wanted to have a random-disk-cache-expiration feature
for a long time. We do not have one now. We do have mechanisms in
place to reduce the impact of sequentially cycling through a large
dataset, so that it does not totally destroy unrelated cached data.
Due to the way our page queues work, it's not an easy problem to solve.
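For what it's worth, the idea itself is easy to state in isolation.
Here is a toy sketch in C of random eviction over a fixed pool of
block slots; the slot count, block size, and cache_insert() helper are
all made up for illustration and have nothing to do with how our
actual buffer cache or page queues work:

    #include <stdlib.h>

    /*
     * Toy illustration only -- not the real buffer cache.  On a miss
     * with every slot in use, evict a uniformly random slot instead of
     * the least-recently-used one.  A sequential scan larger than the
     * cache then overwrites old blocks only probabilistically, so some
     * fraction of them survives to serve the next pass.
     */
    #define NSLOTS  1024

    struct slot {
            long    blkno;          /* block cached here, -1 if free */
            char    data[512];
    };

    static struct slot cache[NSLOTS];

    static void
    cache_init(void)
    {
            int i;

            for (i = 0; i < NSLOTS; i++)
                    cache[i].blkno = -1;
    }

    static struct slot *
    cache_insert(long blkno)
    {
            int i;

            for (i = 0; i < NSLOTS; i++)    /* take a free slot if any */
                    if (cache[i].blkno == -1)
                            break;
            if (i == NSLOTS)                /* else evict at random */
                    i = arc4random() % NSLOTS;
            cache[i].blkno = blkno;
            return (&cache[i]);
    }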
You might be able to simulate more proactive cache control by using
O_DIRECT reads for some of the data and normal reads for the rest
(see the 'fcntl' manual page).
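A minimal sketch of that, assuming a FreeBSD of this era with direct
I/O support compiled in ('options DIRECTIO'); the file argument and
buffer size are arbitrary, and whether the cache is actually bypassed
depends on the kernel and filesystem:

    #include <sys/types.h>

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /*
     * Read a file once with O_DIRECT set on the descriptor so the
     * pass goes around the buffer cache instead of flushing it.
     */
    int
    main(int argc, char **argv)
    {
            char buf[65536];
            ssize_t n;
            int fd, flags;

            if (argc != 2) {
                    fprintf(stderr, "usage: %s file\n", argv[0]);
                    exit(1);
            }
            if ((fd = open(argv[1], O_RDONLY)) < 0) {
                    perror("open");
                    exit(1);
            }
            /* Toggle direct I/O through fcntl(), per fcntl(2). */
            flags = fcntl(fd, F_GETFL, 0);
            if (fcntl(fd, F_SETFL, flags | O_DIRECT) < 0)
                    perror("fcntl(F_SETFL, O_DIRECT)");

            while ((n = read(fd, buf, sizeof(buf))) > 0)
                    ;                       /* consume the data */
            if (n < 0)
                    perror("read");
            close(fd);
            return (0);
    }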
But it might be better simply to purchase more memory, purchase a
faster hard drive, or stripe two hard drives together. HDs these days
can do 25-50MB/s each, so striping two together should yield
50-100MB/s of sequential throughput. See the 'STRIPING DISKS' section
in 'man tuning'.
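For the striping route, roughly this shape of vinum(8) configuration
would do it; the drive names, partitions, and 512k stripe size here
are all illustrative, so check vinum(8) for the real details:

    # Hypothetical vinum(8) config: stripe two disks into one volume.
    drive d1 device /dev/da1s1e
    drive d2 device /dev/da2s1e
    volume stripe
      plex org striped 512k
        sd length 0 drive d1
        sd length 0 drive d2

Loaded with something like 'vinum create <configfile>', that should
produce a /dev/vinum/stripe device to newfs and mount. The large
stripe size follows tuning(7)'s advice: it avoids splitting every
individual I/O across both drives while sequential read-ahead still
keeps both spindles busy.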
-Matt
Matthew Dillon
<dillon@backplane.com>
