From owner-freebsd-questions Fri Aug 11 14:24:04 1995
Return-Path: questions-owner
Received: (from majordom@localhost) by freefall.FreeBSD.org (8.6.11/8.6.6) id OAA11489 for questions-outgoing; Fri, 11 Aug 1995 14:24:04 -0700
Received: from diamond.sierra.net (diamond.sierra.net [204.94.39.235]) by freefall.FreeBSD.org (8.6.11/8.6.6) with SMTP id OAA11482 for ; Fri, 11 Aug 1995 14:24:00 -0700
Received: from martis-d225.sierra.net by diamond.sierra.net with SMTP id AA06475 (5.67b8/IDA-1.5 for ); Fri, 11 Aug 1995 14:22:51 -0700
Message-Id: <199508112122.AA06475@diamond.sierra.net>
From: "Jim Howard"
To: davidg@root.com, Marc Ramirez , questions@freebsd.org
Date: Fri, 11 Aug 1995 12:53:27 -0800
Subject: Re: VM question
Reply-To: jiho@sierra.net
Priority: normal
X-Mailer: Pegasus Mail/Windows (v1.22)
Sender: questions-owner@freebsd.org
Precedence: bulk

> >DG> If you are using 15MB of virtual memory then over time the system can
> >DG> potentially page out a large part of that in favor of file caching.
> >
> >Is there any way we can wire down the maximum percentage of memory used
> >by the system for file caching, a la the old way of specifying 10% of
> >memory for caching, except that this would be an upper limit?
>
> Not currently.  FreeBSD uses all of free memory for caching, and if there
> isn't any free memory, it will use "cache pages" and kick the pagedaemon.
>
> -DG

I THINK that clears up the last of MY questions about why people are
running out of swap.

It looks to me like with 2.0.5 and later you've got a chicken-and-egg
problem, where the dynamic buffer cache and files in core kind of chase
each other around in a circle.  The more files you load, the more they
fill RAM, while the buffer cache caches them and fills RAM still more.

If the buffer cache is able to generate swap in order to make room for
its own growth, it may be kicking out LRU (least recently used) stuff,
but if the point is to avoid drive access, generating swap (a form of
drive access which can also crash programs when there is none left)
sounds like it wouldn't be everyone's choice in all circumstances.  I,
being simplistic and naive, would have assumed that the buffer cache
would stop growing and start to shrink once RAM filled up, giving up
ground rather than generating swap to gain it.

I assume, of course, that your algorithms derive from work done to
discover the best trade-offs for heavily loaded server systems.  Most of
us concerned about this problem are desktop users, I think.

I'm still running 2.0, which with my 8 MB of RAM allocates a static
buffer cache of just under 1 MB, and I have a 16 MB swap partition.  Yet
my results sound similar to those of people with 32 MB of RAM running
2.0.5 with the dynamic cache.

But I briefly had the 2.0.5 kernel built and running on my 2.0
installation (briefly, because numerous incompatibilities kept it from
being very useful), and I found that I could do more before running out
of swap, as if I had gained about a MB of swap space (the amount of
static cache under the 2.0 kernel).  Perhaps most of the RAM usage in
this case involved allocated memory, rather than (cached) files.

You must feel like heavily loaded servers yourselves, sometimes.
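
P.S.  For anyone who wants to play with the idea, here is a toy model of
the policy as David describes it -- plain C, nothing taken from the real
kernel sources, with made-up names and page counts -- just my reading of
"the cache grows into free memory, and once free memory is gone the
pagedaemon pushes least-recently-used program pages out to swap":

    /*
     * Toy model only (NOT kernel code): the buffer cache takes any free
     * page, and when free memory runs out the "pagedaemon" pushes
     * least-recently-used program pages to swap to make more room.
     */
    #include <stdio.h>

    #define RAM_PAGES   2048    /* pretend 8 MB of 4 KB pages */
    #define SWAP_PAGES  4096    /* pretend 16 MB of swap */

    static int free_pages    = RAM_PAGES;
    static int cache_pages   = 0;   /* pages holding cached file data */
    static int anon_resident = 0;   /* program data still in RAM */
    static int swap_used     = 0;   /* program data pushed out to swap */

    /* a program reads a file page: the cache grows into free memory first */
    static void cache_file_page(void)
    {
        if (free_pages > 0) {
            free_pages--;
            cache_pages++;
        } else if (anon_resident > 0 && swap_used < SWAP_PAGES) {
            anon_resident--;        /* pagedaemon evicts an LRU program page */
            swap_used++;            /* ...by writing it to swap */
            cache_pages++;
        }
        /* otherwise the cache simply cannot grow any further */
    }

    /* a program allocates a page for its own data */
    static void alloc_anon_page(void)
    {
        if (free_pages > 0) {
            free_pages--;
            anon_resident++;
        } else if (cache_pages > 0) {
            cache_pages--;          /* reclaim a cache page for program data */
            anon_resident++;
        } else if (swap_used < SWAP_PAGES) {
            swap_used++;            /* some other LRU program page goes out */
        }
        /* otherwise RAM and swap are both full: the "out of swap" case */
    }

    int main(void)
    {
        int i;

        for (i = 0; i < 1500; i++)
            alloc_anon_page();      /* programs take most of RAM... */
        for (i = 0; i < 1500; i++)
            cache_file_page();      /* ...then file I/O keeps the cache growing */

        printf("free %d  cache %d  resident %d  swap used %d\n",
               free_pages, cache_pages, anon_resident, swap_used);
        return 0;
    }

Run it and you see the chicken-and-egg effect I mean: the file traffic
alone drives hundreds of program pages out to swap, even though the
programs themselves never asked for more memory than the machine has.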