Date: Mon, 24 Mar 2003 17:59:36 -0800
From: Wes Peters <wes@softweyr.com>
To: Dan Nelson <dnelson@allantgroup.com>, Poul-Henning Kamp <phk@phk.freebsd.dk>
Cc: freebsd-arch@FreeBSD.ORG
Subject: Re: Patch to protect process from pageout killing
Message-ID: <200303241759.36410.wes@softweyr.com>
In-Reply-To: <20030324213519.GA63147@dan.emsphone.com>
References: <200303240823.48262.wes@softweyr.com> <7019.1048523782@critter.freebsd.dk> <20030324213519.GA63147@dan.emsphone.com>
On Monday 24 March 2003 13:35, Dan Nelson wrote:
> In the last episode (Mar 24), Poul-Henning Kamp said:
> > In message <200303240823.48262.wes@softweyr.com>, Wes Peters writes:
> > > As promised, here's the patch to protect a process from being
> > > killed when pageout is in memory shortage.  This allows a process
> > > to specify that it is important enough to be skipped when pageout
> > > is looking for the largest process to kill.
> > >
> > > My needs are simple.  We make a box that is a web proxy and runs
> > > from a memory disk, using flash for permanent storage.  The flash
> > > is mounted only when a configuration write is needed; the box
> > > runs from the memory disk.  We've experienced a problem at
> > > certain customer sites where bind will consume a lot (~30 MB) of
> > > ram and then pageout will kill the largest process, which is
> > > usually either named or squid.  This pretty much kills the box.
> > > We'd much rather have pageout kill off some of the squid worker
> > > processes; we can recover from that.
> > >
> > > Is this a good approach to the problem?  Feedback welcome.
> >
> > I can certainly see the point, but I'm not sure this is the way.
> >
> > I am not sure that we want to use the resource limits facility for
> > booleans; some of the logic surrounding the suser checks may not
> > hold tight.
>
> How about changing the kill logic to look at RLIMIT_RSS?  The process
> exceeding its limit by the largest amount gets killed.  That way you
> can exempt certain processes by raising their limit.  Set named's
> limit to say 10MB, and when memory gets tight the system will see
> it's exceeding its quota by 20MB and kill it first.

Mostly because it's not possible to predict what named's RSS will be in
any particular customer installation.  The ones that raised this issue
were at 32MB and stable, and took about 9 days to get there.

We don't want named (or squid) to die under ANY circumstances; if the
box can't run both named and squid, it's effectively a brick.  On the
other hand, we have lots (hundreds) of other smaller processes running,
any one of which is expendable and can be recovered from.

Yeah, a better ability to adapt to the (memory) load would perhaps be a
better way to do this, but I really hate the idea of dumping named on
its head and restarting 9 days of learning just because we're getting
hammered by people checking the weather and traffic before heading home.

-- 
         "Where am I, and what am I doing in this handbasket?"
Wes Peters                                                wes@softweyr.com

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-arch" in the body of the message
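
[Editor's sketch] For readers who want to see what Dan's alternative would
look like from userland, here is a minimal sketch, assuming the pageout kill
logic were changed as he describes.  The getrlimit/setrlimit calls and
RLIMIT_RSS are standard; the idea that the kernel would then prefer to kill
the worst RSS-limit offender is only the proposal from this thread, the
wrapper name "rsslimit" is invented, and the 10 MB figure elsewhere in the
mail is just Dan's example value.

/*
 * rsslimit.c -- sketch only: run a command with a chosen RSS soft limit.
 * Under the proposed kill policy (kill the process exceeding RLIMIT_RSS
 * by the largest amount), a tight limit would make the process a
 * preferred victim and a generous one would effectively exempt it.
 * The kill policy itself is only what this thread proposes.
 */
#include <sys/types.h>
#include <sys/resource.h>

#include <err.h>
#include <stdlib.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
	struct rlimit rl;
	rlim_t mb;

	if (argc < 3)
		errx(1, "usage: rsslimit <megabytes> <command> [args ...]");

	mb = (rlim_t)strtoul(argv[1], NULL, 10);

	if (getrlimit(RLIMIT_RSS, &rl) == -1)
		err(1, "getrlimit");
	rl.rlim_cur = mb * 1024 * 1024;		/* adjust soft limit only */
	if (setrlimit(RLIMIT_RSS, &rl) == -1)
		err(1, "setrlimit");

	execvp(argv[2], argv + 2);
	err(1, "execvp %s", argv[2]);
}

Under the scheme in the mail, named would be started with a limit well above
its observed steady state (~32 MB) and the expendable workers with tighter
ones, e.g. "rsslimit 64 named ..." versus "rsslimit 16 squid ...".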
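[Editor's sketch] And a toy illustration of the selection rule itself, again
only as stated in Dan's message: among processes over their RSS limit, kill
the one over by the largest amount.  The process table below is invented for
illustration; in a real implementation this decision would live in the
pageout code, which is not shown anywhere in this thread.

/*
 * Toy model of the proposed victim selection: pick the candidate whose
 * resident set exceeds its RLIMIT_RSS by the largest amount.  Processes
 * within their quota are never chosen.
 */
#include <stddef.h>
#include <stdio.h>

struct candidate {
	const char	*name;
	size_t		 rss;		/* current resident set, bytes */
	size_t		 rss_limit;	/* RLIMIT_RSS soft limit, bytes */
};

static const struct candidate *
pick_victim(const struct candidate *c, size_t n)
{
	const struct candidate *victim = NULL;
	size_t worst = 0;

	for (size_t i = 0; i < n; i++) {
		if (c[i].rss <= c[i].rss_limit)
			continue;		/* within quota */
		size_t over = c[i].rss - c[i].rss_limit;
		if (over > worst) {
			worst = over;
			victim = &c[i];
		}
	}
	return (victim);
}

int
main(void)
{
	const struct candidate procs[] = {
		{ "named", 30u << 20, 10u << 20 },	/* 20 MB over */
		{ "squid", 64u << 20, 60u << 20 },	/*  4 MB over */
		{ "sshd",   4u << 20,  8u << 20 },	/* under limit */
	};
	const struct candidate *v = pick_victim(procs, 3);

	printf("would kill: %s\n", v != NULL ? v->name : "(none)");
	return (0);
}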