From: David Landgren <david@landgren.net>
Date: Wed, 03 Sep 2003 11:05:35 +0200
To: Irvine Short
Cc: freebsd-questions@freebsd.org
Subject: Huge processes (was: Re: Large memory issues)

Irvine Short wrote:
> On Wed, 27 Aug 2003, David Landgren wrote:
>
>> Irvine Short wrote:
>>
>>> I then found that this:
>>>
>>>   options MAXDSIZ="(2048*1024*1024)"
>>>   options MAXSSIZ="(128*1024*1024)"   (and also 64MB)
>>>   options DFLDSIZ="(512*1024*1024)"
>>>
>>> worked fine but not as expected - limit reports datasize unlimited.
>>
>> I've managed to crank it up as far as
>>
>>   options MAXDSIZ="(3568*1024*1024)"
>
> Cool!
> Although I tried 3500 & it blew up too... Bah!

Just for the record, I read those figures from a discarded kernel configuration file, and the above value (a 3568MB maximum data size per process) doesn't work. The highest value I've been able to boot correctly with is (only) 2816MB. There are probably a few more megabytes that can be eked out, but I'm pretty sure that 3072MB fails. The error has to do with the kernel being unable to map the largest-sized process into memory; it blows up at boot time with some sort of vm allocation error.

I searched the archives when I was working on this, and came across a kernel developer who replied to someone having similar difficulties with "why would you want to do something like that?", his reasoning being that the left-over memory would be better used by the OS for caching and buffering anyway.

I don't really consider that a good answer, at least not for modern machines with large amounts of RAM. I have a 4GB RAM server running Squid, and nothing else. Squid represents a very specialised problem domain and has elaborate algorithms to decide what to keep in RAM, and what, and when, to write out to disk; much more so than the OS, which is tuned to deal with the common case. When it's time for an object to be written out to disk, it should be written out quickly, so that the RAM can be freed up and given to something more deserving of being cached. As it happens, the SCSI controller has a large slab of RAM on it too, so there's even less point in the OS holding onto data in its disk buffers for long.

As it is, I never see the Cache and Buf values in top(1) rise above 85M and 199M respectively. I take that to mean that the OS isn't using the extra memory either. I've looked around at the sysctl settings and the source, and the documentation suggests that modifying anything to do with VM settings is akin to meddling in the affairs of wizards.
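For the record, a configuration along these lines does boot here: the same options quoted above, with the maximum data size dialed back to the 2816MB ceiling. Treat the exact numbers as a starting point for your own hardware, not gospel:

  # kernel config fragment -- largest values that boot for me on a 4GB box
  options MAXDSIZ="(2816*1024*1024)"   # max per-process data size; 3072MB panics at boot
  options MAXSSIZ="(128*1024*1024)"    # max stack size
  options DFLDSIZ="(512*1024*1024)"    # default data size limit

Remember that even with this in the kernel, the login class resource limits (datasize in /etc/login.conf) can still cap the process below MAXDSIZ.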
It seems to me, then, after adding in another 20MB for sundry housekeeping processes, that just under a gigabyte of RAM is going to waste. I could easily cache another 50000 web objects *in RAM* if I could make that memory available to Squid. So if there's something that can be done about huge maximum process sizes, I'd love to hear about it. I'd *really* like to be able to run a 3.5GB process on a 4GB machine; 512MB ought to be enough for everything else. I've become skilled at not running anything else on that server that could possibly chew up RAM and upset Squid.

David
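P.S. For anyone following along: the knob on the Squid side is cache_mem in squid.conf. If the kernel would let me have a 3.5GB process, the plan would be something like the below; the 2048MB figure is hypothetical, and note that cache_mem only sizes the in-memory object cache, so the total process size ends up noticeably larger than this value:

  # squid.conf -- hypothetical setting, assuming MAXDSIZ allows a ~3.5GB process
  cache_mem 2048 MB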