Date:      Sat, 9 Jun 2012 19:52:17 +0300
From:      Konstantin Belousov <kostikbel@gmail.com>
To:        Ian Lepore <freebsd@damnhippie.dyndns.org>
Cc:        Wojciech Puchar <wojtek@wojtek.tensor.gdynia.pl>, freebsd-hackers@freebsd.org
Subject:   Re: wired memory - again!
Message-ID:  <20120609165217.GO85127@deviant.kiev.zoral.com.ua>
In-Reply-To: <1339259223.36051.328.camel@revolution.hippie.lan>
References:  <alpine.BSF.2.00.1206090920030.84632@wojtek.tensor.gdynia.pl> <1339259223.36051.328.camel@revolution.hippie.lan>

On Sat, Jun 09, 2012 at 10:27:03AM -0600, Ian Lepore wrote:
> On Sat, 2012-06-09 at 09:21 +0200, Wojciech Puchar wrote:
> > top reports 128MB of wired memory.
> >
> > WHERE is it used? Below are the results of vmstat -m and vmstat -z;
> > the values do not add up to even half of it.
> > FreeBSD 9, a few days old.
> >
> > What am I missing, and why is there SO MUCH wired memory on a 1GB
> > machine without X11 or virtualbox?
> >
> >  [vmstat output snipped]
> >
>
> I have been struggling to answer the same question for about a week on
> our embedded systems (running 8.2).  We have systems with 64MB of RAM
> which have 20MB wired, and I couldn't find any way to directly view
> what that wired memory is being used for.  I also discovered that the
> vmstat output accounted for only a tiny fraction of the 20MB.
>
> What I eventually determined is that there is some sort of correlation
> between vfs buffer space and wired memory.  Our embedded systems
> typically do very little disk IO, but during some testing we were
> spewing debug output to /var/log/messages at the rate of several lines
> per second for hours.  Under these conditions the amount of wired
> memory would climb from its usual level of about 8MB to around 20MB,
> and once it climbed that high it pretty much never went down, or went
> down only a couple of MB.  The resulting memory pressure caused our
> apps to get killed over and over again with "out of swap space" (we
> have no swap on these systems).
>
> The kernel auto-tunes the vfs buffer space using the formula "for the
> first 64 MB of RAM use 1/4 for buffers, plus 1/10 of the RAM over 64
> MB."  Using 16 of 64 MB of RAM for buffer space seems insane to me,
> but maybe it makes sense on certain types of servers.  I added
> "options NBUF=128" to our kernel config, and that dropped the buffer
> space to under 2 MB; since doing that I haven't seen the amount of
> wired memory ever go above 8 MB.  I wonder whether my tuning of NBUF
> is affecting wired memory usage by indirectly tuning the 'nswbuf'
> value; I can't tune nswbuf directly because the embedded system is
> ARM-based and we have no loader(8) for setting tunables.
>
> I'm not sure NBUF=128 is a good setting even for a system that doesn't
> do much IO, so I consider it experimental and we're testing under a
> variety of conditions to see if it leads to any unexpected behaviors.
> I'm certainly not suggesting anyone else rush to add this option to
> their kernel config.
>
> I am VERY curious about the nature of this correlation between vfs
> buffer space and wired memory.  For the VM gurus:  Is the behavior I'm
> seeing expected?  Why would memory become wired and seemingly never get
> released back to one of the page queues after the IO is done?

Hopefully I can give you some information while you wait for an answer
from the gurus.

First, all memory allocated by UMA, and consequently by malloc(9), is
wired. In other words, almost all memory used by the kernel is
accounted as wired.
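
If you want to watch that number directly, the VM counters are exported
via sysctl; vm.stats.vm.v_wire_count should correspond to the "Wired"
figure in top(1). A minimal userland sketch (the unit conversion and
error handling are mine, not from any existing tool):

/* wired.c: print the system-wide wired page count in megabytes. */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	u_int wired;
	size_t len = sizeof(wired);

	/* v_wire_count is maintained by the VM system, in pages. */
	if (sysctlbyname("vm.stats.vm.v_wire_count", &wired, &len,
	    NULL, 0) == -1) {
		perror("sysctlbyname");
		return (1);
	}
	printf("wired: %llu MB\n",
	    (unsigned long long)wired * getpagesize() >> 20);
	return (0);
}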

Second, the buffer cache wires the pages that are inserted into VMIO
buffers. So your observation is basically right: cached buffers mean
that the corresponding memory is removed from the page queues and put
into the wired state. When buffers are dissolved, the pages are unwired
and deactivated.
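
Schematically, the pattern looks like the sketch below. This is not the
actual vfs_bio.c code and the helper names are made up; only
vm_page_wire() and vm_page_unwire() are the real kernel interfaces:

/*
 * Sketch only: how a VMIO buffer's pages move in and out of the
 * wired state. Helpers are hypothetical; the real logic lives in
 * vfs_bio.c.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/buf.h>
#include <vm/vm.h>
#include <vm/vm_page.h>

static void
vmio_attach_page(struct buf *bp, int i, vm_page_t m)
{
	vm_page_lock(m);
	vm_page_wire(m);	/* page now counts as wired memory */
	vm_page_unlock(m);
	bp->b_pages[i] = m;
}

static void
vmio_release_page(vm_page_t m)
{
	vm_page_lock(m);
	vm_page_unwire(m, 0);	/* 0: deactivate rather than activate */
	vm_page_unlock(m);
}

So the wired count drops only when buffers are actually dissolved, not
when the IO completes, which matches what you observed.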

This behaviour is in fact required by VFS, since you expect to be able
to access the buffer's data while you hold the buffer.
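
As for the auto-tuning rule you quoted, taking it at face value (this
is the rule as you stated it, not the actual vfs_bio.c computation):

/*
 * Back-of-the-envelope: buffer space per the rule quoted above,
 * 1/4 of the first 64 MB of RAM plus 1/10 of everything beyond.
 */
#include <stdio.h>

static unsigned long
bufspace_mb(unsigned long ram_mb)
{
	if (ram_mb <= 64)
		return (ram_mb / 4);
	return (64 / 4 + (ram_mb - 64) / 10);
}

int
main(void)
{
	unsigned long sizes[] = { 64, 128, 256, 1024 };

	for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("%4lu MB RAM -> ~%lu MB of buffer space\n",
		    sizes[i], bufspace_mb(sizes[i]));
	return (0);
}

At 1 GB that comes to roughly 112 MB which, with buffer cache pages
wired as described above, is in the neighborhood of the 128 MB of wired
memory that started this thread.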
