From: Ian Lepore <freebsd@damnhippie.dyndns.org>
To: Wojciech Puchar
Cc: freebsd-hackers@freebsd.org
Subject: Re: wired memory - again!
Date: Sat, 09 Jun 2012 10:27:03 -0600
Message-ID: <1339259223.36051.328.camel@revolution.hippie.lan>

On Sat, 2012-06-09 at 09:21 +0200, Wojciech Puchar wrote:
> top reports wired memory 128MB
>
> WHERE it is used? below results of vmstat -m and vmstat -z
> values does not sum up even to half of it
> FreeBSD 9 - few days old.
>
> What i am missing and why there are SO MUCH wired memory on 1GB machine
> without X11 or virtualbox
>
> [vmstat output snipped]

I have been struggling to answer the same question for about a week on our
embedded systems (running 8.2). We have systems with 64MB of RAM of which
20MB is wired, and I couldn't find any way to directly see what that wired
memory is being used for. I also discovered that the vmstat output
accounted for only a tiny fraction of the 20MB.

What I eventually determined is that there is some sort of correlation
between vfs buffer space and wired memory. Our embedded systems typically
do very little disk IO, but during some testing we were spewing debug
output to /var/log/messages at a rate of several lines per second for
hours. Under these conditions the amount of wired memory would climb from
its usual level of about 8MB to around 20MB, and once it climbed that high
it essentially never went down, or went down only a couple of MB. The
resulting memory pressure caused our apps to be killed over and over again
with "out of swap space" (we have no swap on these systems).

The kernel auto-tunes the vfs buffer space using the formula "use 1/4 of
the first 64MB of RAM for buffers, plus 1/10 of the RAM over 64MB." Using
16 of 64MB of RAM for buffer space seems insane to me, but maybe it makes
sense on certain types of servers or something. I added "options NBUF=128"
to our kernel config, which dropped the buffer space to under 2MB, and
since doing that I haven't seen the amount of wired memory go above 8MB.

I wonder whether my tuning of NBUF affects wired memory usage by
indirectly tuning the 'nswbuf' value; I can't tune nswbuf directly because
the embedded system is ARM-based and we have no loader(8) for setting
tunables. I'm not sure NBUF=128 is a good setting even for a system that
doesn't do much IO, so I consider it experimental, and we're testing under
a variety of conditions to see whether it leads to any unexpected
behavior.
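To make the numbers concrete, here is a rough back-of-the-envelope sketch
of that auto-tune rule as I described it above (my paraphrase of the rule,
not the actual kernel code, which applies further clamps):

```shell
# Approximate vfs buffer space auto-tuning: 1/4 of the first 64 MB of RAM,
# plus 1/10 of any RAM beyond 64 MB. Values in MB.
bufspace_mb() {
    ram_mb=$1
    if [ "$ram_mb" -le 64 ]; then
        echo $((ram_mb / 4))
    else
        echo $((16 + (ram_mb - 64) / 10))
    fi
}

bufspace_mb 64     # prints 16  -- the 16-of-64 figure I mentioned above
bufspace_mb 1024   # prints 112 -- roughly what a 1 GB machine would get
```

So on a 1GB machine like the one that started this thread, the auto-tuned
buffer space alone is on the order of 100MB, which is in the same ballpark
as the 128MB of wired memory being asked about.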
I'm certainly not suggesting anyone else rush to add this option to their
kernel config.

I am VERY curious about the nature of this correlation between vfs buffer
space and wired memory. For the VM gurus: is the behavior I'm seeing
expected? Why would memory become wired and seemingly never get released
back to one of the page queues after the IO is done?

-- Ian