Date: Fri, 12 May 2006 18:44:27 +0300
From: Iasen Kostov <tbyte@otel.net>
To: Mike Silbersack <silby@silby.com>
Cc: FreeBSD Hackers <freebsd-hackers@freebsd.org>
Subject: Re: Heavy system load by pagedaemon
Message-ID: <1147448667.99925.11.camel@DraGoN.OTEL.net>
In-Reply-To: <20060512102305.T1879@odysseus.silby.com>
References: <1147264089.51661.10.camel@DraGoN.OTEL.net> <1147264379.51661.14.camel@DraGoN.OTEL.net> <1147265038.51661.19.camel@DraGoN.OTEL.net> <1147361590.33341.19.camel@DraGoN.OTEL.net> <20060512071711.GA714@turion.vk2pj.dyndns.org> <1147428461.98918.10.camel@DraGoN.OTEL.net> <20060512112809.GD714@turion.vk2pj.dyndns.org> <1147437061.98918.24.camel@DraGoN.OTEL.net> <20060512102305.T1879@odysseus.silby.com>
On Fri, 2006-05-12 at 10:27 -0500, Mike Silbersack wrote:
> On Fri, 12 May 2006, Iasen Kostov wrote:
>
> > Exactly what I did :). I set vm.pmap.shpgperproc=600 in loader.conf and
> > about 5 min after boot the system panicked, and I was not even able to
> > see the message (either because I was pressing Enter for the command or
> > it just doesn't wait for a key). Then I set it to 500 in the loader at
> > boot time and currently it works, but when it crashed the used PV
> > entries were ~4,300,000; now they go to ~5,000,000 and it doesn't panic.
> > This makes me think that the panic is not related to setting
> > vm.pmap.shpgperproc to 600 (which could probably lead to KVA exhaustion)
> > but to something else. I'll try to increase KVA_PAGES (why isn't there a
> > tunable?) and then set vm.pmap.shpgperproc to some higher value, but
> > that will be after a fresh make world (I cvsupped already :( ) some
> > time soon.
>
> Can you provide instructions on how to create a testbench that exhibits
> these same problems? Can eAccelerator + PHP + Apache + some simple script
> + apachebench do the trick?

	Nope, Apache probably needs to use many pages of shared memory to
exhaust the PV entries (as I understand it). eAccelerator uses shm when
it has something to put there, and most probably Apache does the same.
So I think you'll need a lot of different scripts (and many Apache
processes) to make eAccelerator cache them, and probably some other
means of making Apache use shm on its own (I'm really not sure how
Apache uses shared memory, but it probably does, because this problem
appears when people are using forking Apache).

> If so, that would allow other people to work on the problem. Kris
> Kennaway seems to like benchmarking; maybe you could pry him temporarily
> away from MySQL benchmarking to take a look at this.
>
> Also note that Peter Wemm has been reducing the size of PV Entries in
> -current, as he was running out of KVA due to them too - maybe he could
> provide you with a patch for 6.x with the same feature. Here's part of
> his description of the change:
>
> ---
> This is important because of the following scenario. If you have a 1GB
> file (262144 pages) mmap()ed into 50 processes, that requires 13 million
> pv entries. At 24 bytes per pv entry, that is 314MB of ram and kvm, while
> at 12 bytes it is 157MB. A 157MB saving is significant.
> ---

	That's really nice to hear. An interesting thing is this:

    sysctl vm.zone | grep PV
    PV ENTRY:  48, 5114880, 4039498, 564470, 236393602

The PV entry size is 48 bytes here, which is even worse than the
24-byte case ... :)

	Regards.
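	A note for anyone trying to reproduce the tuning discussed above: a
minimal sketch of the two knobs involved, with illustrative values rather
than recommendations. If I remember the defaults right, vm.pmap.shpgperproc
is a boot-time loader tunable on 6.x (default 200), while KVA_PAGES is a
compile-time i386 kernel option, which is why there is no tunable for it
and a kernel rebuild is needed.

    # /boot/loader.conf -- boot-time tunable; the default is 200,
    # the value below is only an example
    vm.pmap.shpgperproc="400"

    # i386 kernel configuration file -- compile-time only; the default of
    # 256 pages gives 1GB of KVA, 512 gives 2GB (requires a kernel rebuild)
    options         KVA_PAGES=512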
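	Peter's figures in the quoted change description check out as simple
arithmetic, which also helps when sizing the zone limit:

    262,144 pages x 50 mappings        = 13,107,200 pv entries
    13,107,200 entries x 24 bytes each = ~314 MB of RAM and KVM
    13,107,200 entries x 12 bytes each = ~157 MB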
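	For reference, the columns in the vm.zone line above should be, if I
read the 6.x zone allocator output correctly: element size in bytes, zone
limit, items in use, free items, and total allocation requests. On that
reading, the reported figures show how close the zone is to its KVA ceiling:

    48 bytes x 5,114,880 (limit) = ~245 MB of KVA if the zone fills up
    48 bytes x 4,039,498 (used)  = ~194 MB currently held by pv entries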