Date: Mon, 18 May 2009 19:45:02 -0700
From: Kip Macy <kmacy@freebsd.org>
To: Ben Kelly <ben@wanderview.com>
Cc: Adam McDougall <mcdouga9@egr.msu.edu>, current@freebsd.org, Larry Rosenman <ler@lerctr.org>
Subject: Re: Fatal trap 12: page fault panic with recent kernel with ZFS
Message-ID: <3c1674c90905181945g179173b9rb064e8b37ba7148@mail.gmail.com>
In-Reply-To: <1F20825F-BD11-40D1-9024-07F6E707DD08@wanderview.com>
References: <20090518145614.GF82547@egr.msu.edu> <alpine.BSF.2.00.0905181031240.35767@thebighonker.lerctr.org> <alpine.BSF.2.00.0905181830490.1756@borg> <3c1674c90905181659g1d20f0f1w3f623966ae4440ec@mail.gmail.com> <alpine.BSF.2.00.0905181906001.2008@borg> <20090519012202.GR82547@egr.msu.edu> <3c1674c90905181826p787a346cie90429324444a9c4@mail.gmail.com> <1F20825F-BD11-40D1-9024-07F6E707DD08@wanderview.com>
On Mon, May 18, 2009 at 7:34 PM, Ben Kelly <ben@wanderview.com> wrote:
> On May 18, 2009, at 9:26 PM, Kip Macy wrote:
>>
>> On Mon, May 18, 2009 at 6:22 PM, Adam McDougall <mcdouga9@egr.msu.edu> wrote:
>>>
>>> On Mon, May 18, 2009 at 07:06:57PM -0500, Larry Rosenman wrote:
>>>
>>>   On Mon, 18 May 2009, Kip Macy wrote:
>>>
>>>   > The ARC cache allocates wired memory. The ARC will grow until there is
>>>   > vm pressure.
>>>
>>>   My crash this AM was with 4G real, and the ARC seemed to grow and grow,
>>>   then we started paging, and then crashed.
>>>
>>>   Even with the VM pressure it seemed to grow out of control.
>>>
>>>   Ideas?
>>>
>>>
>>> Before that, but since r191902, I was having the opposite problem:
>>> my ARC, and thus Wired, would grow up to approximately arc_max until my
>>> Inactive memory put pressure on the ARC, making it shrink back down
>>> to ~450M, where some aspects of performance degraded.  A partial
>>> workaround was to set an arc_min, which isn't entirely successful,
>>> and I found I could restore ZFS performance by temporarily squeezing
>>> down Inactive memory by allocating a bunch of it myself; after
>>> freeing that, the ARC had no pressure and could grow towards arc_max
>>> again until Inactive eventually rose.  I reported this to Kip last night
>>> and on some cvs commit lists.  I never did run into Swap.
>>>
>>
>>
>> That is a separate issue. I'm going to try adding a vm_lowmem event
>> handler to drive reclamation instead of the current paging target.
>> That shouldn't cause inactive pages to shrink the ARC.
>
> Isn't there already a vm_lowmem event for the arc that triggers reclamation?

You're right, there is. I had asked alc if there was a better way than
using the paging target and he suggested it. I hadn't looked to see if
it was already there because we've had such troubles.

> On the low memory front it seems like the arc needs a way to tell the pager
> to mark some vnodes inactive.  I've seen many cases where the arc size
> greatly exceeded the target, but it couldn't evict any memory because all
> its buffers were still referenced.  This seems to behave a little better
> with code that increments vm_pageout_deficit and signals the pageout daemon
> when the arc is too far above its target.  The normal buffer cache seems to
> do this as well when it's low on memory.

Good point. Patches welcome. Otherwise I'll look into it when I get the chance.

Cheers,
Kip