Date: Thu, 30 Jan 2003 19:37:11 -0500
From: "Brian T. Schellenberger" <bschellenberger@nc.rr.com>
To: kientzle@acm.org, Matthew Dillon <dillon@apollo.backplane.com>
Cc: Sean Hamilton <sh@bel.bc.ca>, hackers@freebsd.org
Subject: Re: Random disk cache expiry
Message-ID: <200301301937.12407.bschellenberger@nc.rr.com>
In-Reply-To: <3E39BE22.8050207@acm.org>
References: <000501c2c4dd$f43ed450$16e306cf@slugabed.org> <200301302222.h0UMMfFI090349@apollo.backplane.com> <3E39BE22.8050207@acm.org>
On Thursday 30 January 2003 07:06 pm, Tim Kientzle wrote:
| Matthew Dillon wrote:
| > Your idea of 'sequential' access cache restriction only
| > works if there is just one process doing the accessing.
|
| Not necessarily.  I suspect that there is
| a strong tendency to access particular files
| in particular ways.  E.g., in your example of
| a download server, those files are always
| read sequentially.  You can make similar assertions
| about a lot of files:
:
:
| For example, if a process
| started to read a 10GB file that has historically been
| accessed sequentially, you could immediately decide
| to enable read-ahead for performance, but also mark
| those pages to be released as soon as they were read by the
| process.

I think you missed Matt's point, which is well-taken: even if everybody
accesses the file sequentially, if you have 100 processes accessing it
sequentially at the *same* time, then it would be to your benefit to
leave the "old" pages around.  Even though *this* process won't access
them again, the *next* process very well might, if it just happens to be
reading the file sequentially as well but is a little further behind on
its sequential read.

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message
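[Editor's note: a minimal sketch of the policy trade-off discussed above.
This is not FreeBSD VM code; the structure and function names
(file_cache_state, may_drop_behind, sequential_readers) are hypothetical
and assume the kernel could keep a per-file count of processes currently
detected as sequential readers.  The point it illustrates: drop-behind is
only a clear win when a single process is streaming the file, because with
several concurrent sequential readers a page just consumed by the leading
reader is likely to be requested shortly by a trailing one.]

/*
 * Hypothetical sketch: decide whether a page just read by a
 * sequential reader may be released immediately (drop-behind)
 * or should stay in the cache for trailing readers.
 */
struct file_cache_state {
	int sequential_readers;	/* readers currently detected as sequential */
};

static int
may_drop_behind(const struct file_cache_state *fcs)
{
	/*
	 * Only one sequential reader: nobody trailing behind will
	 * want the page again, so reclaiming it early is cheap.
	 */
	if (fcs->sequential_readers <= 1)
		return (1);

	/*
	 * Multiple concurrent sequential readers: a trailing reader
	 * is likely to hit this page soon, so keep it cached.
	 */
	return (0);
}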
