Date: Thu, 05 Apr 2012 23:54:53 +0400
From: Andrey Zonov <andrey@zonov.org>
To: Konstantin Belousov <kostikbel@gmail.com>
Cc: alc@freebsd.org, freebsd-hackers@freebsd.org, Alan Cox <alc@rice.edu>
Subject: Re: problems with mmap() and disk caching
Message-ID: <4F7DF88D.2050907@zonov.org>
In-Reply-To: <20120405194122.GC2358@deviant.kiev.zoral.com.ua>
References: <4F7B495D.3010402@zonov.org> <20120404071746.GJ2358@deviant.kiev.zoral.com.ua> <4F7DC037.9060803@rice.edu> <4F7DF39A.3000500@zonov.org> <20120405194122.GC2358@deviant.kiev.zoral.com.ua>
On 05.04.2012 23:41, Konstantin Belousov wrote:
> On Thu, Apr 05, 2012 at 11:33:46PM +0400, Andrey Zonov wrote:
>> On 05.04.2012 19:54, Alan Cox wrote:
>>> On 04/04/2012 02:17, Konstantin Belousov wrote:
>>>> On Tue, Apr 03, 2012 at 11:02:53PM +0400, Andrey Zonov wrote:
>> [snip]
>>>>> This is what I expect.  But why doesn't this work without reading
>>>>> the file manually?
>>>> The issue seems to be some change in the behaviour of the reservation
>>>> or phys allocator.  I Cc:ed Alan.
>>>
>>> I'm pretty sure that the behavior here hasn't significantly changed in
>>> about twelve years.  Otherwise, I agree with your analysis.
>>>
>>> On more than one occasion, I've been tempted to change:
>>>
>>>     pmap_remove_all(mt);
>>>     if (mt->dirty != 0)
>>>         vm_page_deactivate(mt);
>>>     else
>>>         vm_page_cache(mt);
>>>
>>> to:
>>>
>>>     vm_page_dontneed(mt);
>>>
>>
>> Thanks Alan!  Now it works as I expect!
>>
>> But I have more questions for you and kib@.  They are in my test below.
>>
>> So, prepare the file as before, and take the memory usage numbers
>> from top(1).  After preparation, but before the test:
>>
>> Mem: 80M Active, 55M Inact, 721M Wired, 215M Buf, 46G Free
>>
>> First run:
>> $ ./mmap /mnt/random
>> mmap:  1 pass took:   7.462865 (none: 0; res: 262144; super: 0; other: 0)
>>
>> No super pages after the first run -- why?
>>
>> Mem: 79M Active, 1079M Inact, 722M Wired, 216M Buf, 45G Free
>>
>> Now the file is in inactive memory, that's good.
>>
>> Second run:
>> $ ./mmap /mnt/random
>> mmap:  1 pass took:   0.004191 (none: 0; res: 262144; super: 511; other: 0)
>>
>> All the super pages are here, nice.
>>
>> Mem: 1103M Active, 55M Inact, 722M Wired, 216M Buf, 45G Free
>>
>> Wow, all the inactive pages moved to active and sit there even after the
>> process terminated.  That's not good, is it?
> Why do you think this is 'not good'?  You have plenty of free memory,
> there is no memory pressure, and all the pages were referenced recently.
> There is no reason for them to be deactivated.
I always thought that active memory is the sum of the resident memory of
all processes, that inactive is the disk cache, and that wired is the
kernel itself.

>>
>> Read the file:
>> $ cat /mnt/random > /dev/null
>>
>> Mem: 79M Active, 55M Inact, 1746M Wired, 1240M Buf, 45G Free
>>
>> Now the file is in wired memory.  I do not understand why.
> You do use UFS, right ?

Yes.

> There are enough buffer headers and enough buffer KVA
> to have buffers allocated for the whole file content.  Since buffers wire
> the corresponding pages, you get the pages migrated to wired.
>
> When buffer pressure appears (i.e., any other i/o is started),
> the buffers will be repurposed and the pages moved to inactive.
>

OK, so how can I get the amount of disk cache?

>>
>> Could you please give me an explanation of active/inactive/wired memory?
>>
>>
>>> because I suspect that the current code does more harm than good.  In
>>> theory, it saves activations of the page daemon.  However, more often
>>> than not, I suspect that we are spending more on page reactivations than
>>> we are saving on page daemon activations.  The sequential access
>>> detection heuristic is just too easily triggered.  For example, I've
>>> seen it triggered by demand paging of the gcc text segment.  Also, I
>>> think that pmap_remove_all() and especially vm_page_cache() are too
>>> severe for a detection heuristic that is so easily triggered.
>>>
>> [snip]
>>
>> --
>> Andrey Zonov

-- 
Andrey Zonov