Date: Thu, 23 Mar 2006 15:52:25 -0800
From: Bakul Shah <bakul@BitBlocks.com>
To: Matthew Dillon <dillon@apollo.backplane.com>
Cc: alc@freebsd.org, Mikhail Teterin <mi+mx@aldan.algebra.com>, stable@freebsd.org
Subject: Re: Reading via mmap stinks (Re: weird bugs with mmap-ing via NFS)
Message-ID: <200603232352.k2NNqPS8018729@gate.bitblocks.com>
In-Reply-To: Your message of "Thu, 23 Mar 2006 15:16:11 PST." <200603232316.k2NNGBka068754@apollo.backplane.com>
> : time fgrep meowmeowmeow /home/oh.0.dump
> : 2.167u 7.739s 1:25.21 11.6% 70+3701k 23663+0io 6pf+0w
> : time fgrep --mmap meowmeowmeow /home/oh.0.dump
> : 1.552u 7.109s 2:46.03 5.2% 18+1031k 156+0io 106327pf+0w
> :
> : Use a big enough file to bust the memory caching (oh.0.dump above is
> : 2.9GB), and I'm sure you will have no problems reproducing this result.
>
>     106,000 page faults.  How many pages is a 2.9GB file?  If this is
>     running in 64-bit mode those would be 8K pages, right?  So that would
>     come to around 380,000 pages.  About 1:4.  So, clearly the operating
>     system *IS* pre-faulting multiple pages. ...
>
>     In any case, this sort of test is not really a good poster child for
>     how to use mmap().  Nobody in their right mind uses mmap() on
>     datasets that they expect to be uncacheable and which are accessed
>     sequentially.  It's just plain silly to use mmap() in that sort of
>     circumstance.

Maybe the OS needs "reclaim-behind" for the sequential case?  That way
you can mmap many, many pages and use a much smaller pool of physical
pages to back them.  The idea is for the VM to reclaim pages N-k..N-1
when page N is accessed, and allow the same process to reuse those
pages.  This is similar to read-ahead, where the OS schedules reads of
pages N+k, N+k+1, ... when page N is accessed.  Maybe even use TCP
algorithms to adjust the backing buffer (window) size :-)