Date:      Sat, 09 Jun 2012 10:27:03 -0600
From:      Ian Lepore <freebsd@damnhippie.dyndns.org>
To:        Wojciech Puchar <wojtek@wojtek.tensor.gdynia.pl>
Cc:        freebsd-hackers@freebsd.org
Subject:   Re: wired memory - again!
Message-ID:  <1339259223.36051.328.camel@revolution.hippie.lan>
In-Reply-To: <alpine.BSF.2.00.1206090920030.84632@wojtek.tensor.gdynia.pl>
References:  <alpine.BSF.2.00.1206090920030.84632@wojtek.tensor.gdynia.pl>

On Sat, 2012-06-09 at 09:21 +0200, Wojciech Puchar wrote:
> top reports wired memory 128MB
> 
> 
> WHERE is it used? Below are the results of vmstat -m and vmstat -z;
> the values don't sum up to even half of it.
> FreeBSD 9 - a few days old.
> 
> What am I missing, and why is there SO MUCH wired memory on a 1GB machine 
> without X11 or virtualbox?
> 
>  [vmstat output snipped]
> 


I have been struggling to answer the same question for about a week on
our embedded systems (running 8.2).  We have systems with 64MB of RAM
that have 20MB wired, and I couldn't find any way to directly view what
that wired memory is being used for.  I also discovered that the vmstat
output accounted for only a tiny fraction of the 20MB.
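
In case it helps anyone do the same accounting, this is roughly how I've
been cross-checking the numbers.  The sysctl names are the stock ones on
8.x/9.x (verify with 'sysctl vm.stats.vm' on your box first); treat it as
a sketch, not a recipe:

    #!/bin/sh
    # Total wired memory straight from the VM counters (pages * page size),
    # for comparison against the per-type totals in vmstat -m / vmstat -z.
    pages=$(sysctl -n vm.stats.vm.v_wire_count)   # pages currently wired
    pgsz=$(sysctl -n hw.pagesize)                 # bytes per page
    echo "wired: $((pages * pgsz / 1048576)) MB"

    # Per-allocator views; the gap between their totals and the figure
    # above is exactly the part neither of us can account for.
    vmstat -m
    vmstat -z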

What I eventually determined is that there is some sort of correlation
between vfs buffer space and wired memory.  Our embedded systems
typically do very little disk IO, but during some testing we were
spewing debug output to /var/log/messages at a rate of several lines
per second for hours.  Under those conditions the amount of wired memory
would climb from its usual level of about 8MB to around 20MB, and once
it climbed that high it pretty much never went down, or only came down a
couple of MB.  The resulting memory pressure caused our apps to get killed
over and over again with "out of swap space" (we have no swap on these
systems).
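
The way I watched this happen was just polling a couple of sysctls while
the test logging ran; something like the loop below (vfs.bufspace and
vm.stats.vm.v_wire_count are standard sysctls, the ten-second interval is
arbitrary):

    #!/bin/sh
    # Poll wired memory and vfs buffer space so the correlation (or lack
    # of one) shows up over time while the system is doing IO.
    pgsz=$(sysctl -n hw.pagesize)
    while :; do
        wired=$(sysctl -n vm.stats.vm.v_wire_count)
        bufspace=$(sysctl -n vfs.bufspace)
        printf '%s wired=%d MB bufspace=%d MB\n' "$(date +%T)" \
            $((wired * pgsz / 1048576)) $((bufspace / 1048576))
        sleep 10
    done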

The kernel auto-tunes the vfs buffer space using the formula "for the
first 64 MB of RAM use 1/4 for buffers, plus 1/10 of the RAM over 64
MB."  Using 16 of 64 MB of RAM for buffer space seems insane to me, but
maybe it makes sense on certain types of servers or something.  I added
"options NBUF=128" to our kernel config, which dropped the buffer space
to under 2 MB, and since doing that I haven't seen the amount of wired
memory go above 8 MB.  I wonder whether my tuning of NBUF is affecting
wired memory usage by indirectly tuning the 'nswbuf' value; I can't tune
nswbuf directly because the embedded system is ARM-based and we have no
loader(8) for setting tunables.
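
To make that formula concrete, here's the arithmetic as I understand it
(a paraphrase of the rule, not the kernel's actual tuning code), plus a
couple of sysctls to show what the kernel really settled on:

    #!/bin/sh
    # 1/4 of the first 64 MB of RAM, plus 1/10 of everything over 64 MB.
    ram_mb=$(( $(sysctl -n hw.physmem) / 1048576 ))
    if [ "$ram_mb" -le 64 ]; then
        buf_mb=$((ram_mb / 4))
    else
        buf_mb=$((16 + (ram_mb - 64) / 10))
    fi
    echo "estimated buffer space: ${buf_mb} MB"   # 64 MB -> 16 MB, 1 GB -> ~112 MB

    # What the kernel actually chose:
    sysctl kern.nbuf vfs.maxbufspace

By that rule a 1GB machine gets roughly 16 + 96 = 112 MB of buffer space,
which would go a long way toward explaining the 128MB of wired memory
you're seeing.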

I'm not sure NBUF=128 is a good setting even for a system that doesn't
do much IO, so I consider it experimental and we're testing under a
variety of conditions to see if it leads to any unexpected behaviors.
I'm certainly not suggesting anyone else rush to add this option to
their kernel config.

I am VERY curious about the nature of this correlation between vfs
buffer space and wired memory.  For the VM gurus:  Is the behavior I'm
seeing expected?   Why would memory become wired and seemingly never get
released back to one of the page queues after the IO is done?

-- Ian




