From: Iasen Kostov
To: Mike Silbersack
Cc: FreeBSD Hackers
Subject: Re: Heavy system load by pagedaemon
Date: Fri, 12 May 2006 18:44:27 +0300
Message-Id: <1147448667.99925.11.camel@DraGoN.OTEL.net>
In-Reply-To: <20060512102305.T1879@odysseus.silby.com>
List-Id: Technical Discussions relating to FreeBSD

On Fri, 2006-05-12 at 10:27 -0500, Mike Silbersack wrote:
> On Fri, 12 May 2006, Iasen Kostov wrote:
>
> > Exactly what I did :).
> > I set vm.pmap.shpgperproc=600 in loader.conf, and about 5 min after
> > boot the system panicked, and I was not even able to see the message
> > (either because I was pressing Enter for a command or it just doesn't
> > wait for a key). Then I set it to 500 in the loader at boot time and
> > currently it works, but when it crashed the used PV entries were
> > ~4 300 000, and now they go to ~5 000 000 and it doesn't panic. That
> > makes me think the panic is not related to setting vm.pmap.shpgperproc
> > to 600 (which could probably lead to KVA exhaustion) but to something
> > else. I'll try to increase KVA_PAGES (why isn't there a tunable?) and
> > then set vm.pmap.shpgperproc to some higher value, but that will be
> > after a fresh make world (I cvsuped already :( ) some time soon.
>
> Can you provide instructions on how to create a testbench that exhibits
> these same problems? Can eAccelerator + PHP + Apache + some simple
> script + apachebench do the trick?

Nope, Apache probably needs to use many pages of shared memory to exhaust
the PV entries (as I understand it). eAccelerator uses shm when it has
something to put there, and Apache most probably does the same. So I
think you'll need a lot of different scripts (and many Apache processes)
to make eAccelerator cache them, and probably some other means to make
Apache use shm on its own (I'm really not sure how Apache uses shared
memory, but it probably does, because this problem appears when people
are running a forking Apache).

> If so, that would allow other people to work on the problem. Kris
> Kennaway seems to like benchmarking; maybe you could pry him temporarily
> away from MySQL benchmarking to take a look at this.
>
> Also note that Peter Wemm has been reducing the size of PV Entries in
> -current, as he was running out of KVA due to them too - maybe he could
> provide you with a patch for 6.x with the same feature. Here's part of
> his description of the change:
>
> ---
> This is important because of the following scenario.
> If you have a 1GB
> file (262144 pages) mmap()ed into 50 processes, that requires 13 million
> pv entries. At 24 bytes per pv entry, that is 314MB of ram and kvm,
> while at 12 bytes it is 157MB. A 157MB saving is significant.
> ---

That's really nice to hear. One interesting thing is this:

sysctl vm.zone | grep PV
PV ENTRY: 48, 5114880, 4039498, 564470, 236393602

(the columns being size, limit, used, free, and requests). The PV entry
size is 48 here, which is even worse than the 24-byte case ... :)

Regards.