From owner-freebsd-hackers Mon Nov 11 09:25:57 1996
Return-Path: owner-hackers
Received: (from root@localhost) by freefall.freebsd.org (8.7.5/8.7.3)
	id JAA27399 for hackers-outgoing; Mon, 11 Nov 1996 09:25:57 -0800 (PST)
Received: from phaeton.artisoft.com (phaeton.Artisoft.COM [198.17.250.211])
	by freefall.freebsd.org (8.7.5/8.7.3) with SMTP id JAA27386 for ;
	Mon, 11 Nov 1996 09:25:52 -0800 (PST)
Received: (from terry@localhost) by phaeton.artisoft.com (8.6.11/8.6.9)
	id KAA18275; Mon, 11 Nov 1996 10:14:57 -0700
From: Terry Lambert
Message-Id: <199611111714.KAA18275@phaeton.artisoft.com>
Subject: Re: working set model
To: cskim@cslsun10.sogang.ac.kr (Kim Chang Seob)
Date: Mon, 11 Nov 1996 10:14:57 -0700 (MST)
Cc: freebsd-hackers@freebsd.org, cskim@cslsun10.sogang.ac.kr
In-Reply-To: <9611110805.AA23163@cslsun10.sogang.ac.kr> from "Kim Chang Seob"
	at Nov 11, 96 05:05:00 pm
X-Mailer: ELM [version 2.4 PL24]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: owner-hackers@freebsd.org
X-Loop: FreeBSD.org
Precedence: bulk

> I have some questions about FreeBSD memory management.  I would like
> to know how much memory to give each process in order to minimize its
> page faults.  As I understand it, FreeBSD's memory management does not
> use the working set model, because it lacks accurate information about
> the reference pattern of a process.

This really depends on whether you believe LRU works for caching.
This, in turn, depends on whether you believe in locality of reference.

The theory is that if the buffer cache and the VM cache are the same
thing, VM references will change a page's LRU position, and locality
will be optimized for future hits.  That is, you will not really be
able to make it more efficient.

A working set model is only useful in the case of badly behaved
processes.  The canonical worst offender of all time is the SVR4 "ld",
which mmap's .o files into memory and traverses the symbol space
during linking, instead of building a link graph in memory from the
object data.  The result is that you get a disproportionately high
amount of locality in the pages mmap'ed and referenced this way... and
other processes' data is forced out of the cache as a result.

The working set model that makes sense in this case is *not* a
per-process working set -- it's a per-vnode working set.

It is relatively trivial to implement and test this change: all you
have to do is maintain a count of the number of buffers hung off each
vnode, modify your LRU insertion order for freed buffers belonging to
vnodes over quota, and modify reclamation so that page allocation for
a vnode over quota steals from that vnode's own LRU instead of the
system LRU.  (A rough sketch of these two changes is included at the
end of this message.)  Together, these will prevent the working set of
a single vnode from growing "too large" and causing the LRU locality
to break down across context switches.

The final (optional) piece would be to allow privileged processes to
relax their quotas; there are some uses where it's important that a
process be efficient at the expense of the other processes on the
system.  I would suggest "madvise" as the best bet, but it would mean
taking the memory range specified as a hint to identify the vnode you
want to affect.


					Regards,
					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.
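
To make the per-vnode quota concrete, here is a minimal sketch of the
two changes.  Everything in it -- struct vn, the v_buf* fields, and
the lru_*() helpers -- is a hypothetical placeholder standing in for
the real buffer cache interfaces, so read it as an illustration of the
idea rather than a drop-in patch.

/*
 * Minimal sketch of a per-vnode buffer quota.  All names here are
 * hypothetical placeholders, not the real FreeBSD buffer cache
 * interfaces.
 */
#include <stddef.h>

struct buf;				/* a cached buffer; opaque here */

struct vn {				/* per-vnode bookkeeping */
	int	v_bufcount;		/* buffers hung off this vnode */
	int	v_bufquota;		/* per-vnode working set limit */
};

/* LRU primitives assumed to exist elsewhere. */
void		 lru_insert_head(struct buf *bp);	/* reclaimed soonest */
void		 lru_insert_tail(struct buf *bp);	/* normal LRU aging */
struct buf	*lru_steal_from_vnode(struct vn *vp);	/* vp's oldest buffer */
struct buf	*lru_steal_global(void);		/* system-wide oldest */

/*
 * Buffer release: a freed buffer still hangs off its vnode until it
 * is recycled.  If the vnode is over quota, queue the buffer at the
 * reclaim end of the free list so it is recycled first.
 */
void
vn_brelse(struct vn *vp, struct buf *bp)
{
	if (vp->v_bufcount > vp->v_bufquota)
		lru_insert_head(bp);
	else
		lru_insert_tail(bp);
}

/*
 * Buffer allocation: a vnode over quota recycles its own oldest
 * buffer instead of pushing some other vnode's data out of the cache.
 */
struct buf *
vn_getblk(struct vn *vp)
{
	struct buf *bp;

	if (vp->v_bufcount >= vp->v_bufquota &&
	    (bp = lru_steal_from_vnode(vp)) != NULL)
		return (bp);		/* buffer count stays the same */

	vp->v_bufcount++;
	return (lru_steal_global());
}

The point of the design is that an over-quota vnode pays for new
buffers out of its own LRU, so a badly behaved mapping like the SVR4
"ld" case only degrades its own hit rate; a relaxed v_bufquota for
privileged processes (set, say, through an madvise-style hint that
identifies the backing vnode) restores the old behavior where that is
actually wanted.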