Date:      Wed, 7 Mar 2012 11:31:09 +0200
From:      Konstantin Belousov <kostikbel@gmail.com>
To:        Luke Marsden <luke@hybrid-logic.co.uk>
Cc:        freebsd-fs@freebsd.org, Ian Lepore <freebsd@damnhippie.dyndns.org>, team@hybrid-logic.co.uk
Subject:   Re: FreeBSD 8.2 - active plus inactive memory leak!?
Message-ID:  <20120307093109.GF75778@deviant.kiev.zoral.com.ua>
In-Reply-To: <1331112366.2589.51.camel@pow>
References:  <1331061203.2218.38.camel@pow> <4F569DFF.8040807@mac.com> <1331080581.2589.28.camel@pow> <20120307082338.GD75778@deviant.kiev.zoral.com.ua> <1331112366.2589.51.camel@pow>


On Wed, Mar 07, 2012 at 09:26:06AM +0000, Luke Marsden wrote:
> On Wed, 2012-03-07 at 10:23 +0200, Konstantin Belousov wrote:
> > On Wed, Mar 07, 2012 at 12:36:21AM +0000, Luke Marsden wrote:
> > > I'm trying to confirm that, on a system with no pages swapped out,
> > > the following is a true statement:
> > > 
> > >         a page is accounted for in active + inactive if and only if it
> > >         corresponds to one or more of the pages accounted for in the
> > >         resident memory lists of all the processes on the system (as per
> > >         the output of 'top' and 'ps')
> > No.
> > 
> > The pages belonging to vnode vm object can be active or inactive or cached
> > but not mapped into any process address space.
> 
> Thank you, Konstantin.  Does the number of vnodes we've got open on this
> machine (272011) fully explain away the memory gap?
> 
>         Memory gap:
>         11264M active + 2598M inactive - 9297M sum-of-resident = 4565M
>         
>         Active vnodes:
>         vfs.numvnodes: 272011
> 
> That gives a lower bound of about 17.18 KB per vnode (or higher if we
> take into account shared libs, etc.); that seems a bit high for a vnode
> vm object, doesn't it?
A vnode's vm object keeps the set of pages belonging to that vnode. There
is nothing bad (or good) about that.
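For reference, the arithmetic behind the "memory gap" and the per-vnode
lower bound can be sketched in plain sh; all figures are the ones quoted
earlier in this thread, and the integer division is only a rough check:

```shell
#!/bin/sh
# Rough check of the numbers quoted above (all figures from this thread).
active_mb=11264
inactive_mb=2598
resident_mb=9297
numvnodes=272011

# Memory not accounted for by the processes' resident pages:
gap_mb=$((active_mb + inactive_mb - resident_mb))
echo "gap: ${gap_mb}M"                  # 4565M

# Lower bound per vnode, in KB (integer division, so truncated):
per_vnode_kb=$((gap_mb * 1024 / numvnodes))
echo "per-vnode: ~${per_vnode_kb}KB"    # ~17KB, matching the ~17.18KB figure
```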

> 
> If that doesn't fully explain it, what else might be chewing through
> active memory?
> 
> Also, when are vnodes freed?
> 
> This system does have some tuning...
> kern.maxfiles: 1000000
> vm.pmap.pv_entry_max: 73296250
> 
> Could that be contributing to so much active + inactive memory (5GB+
> more than expected), or do PV entries live in wired (e.g. kernel) memory?
pv entries are accounted as wired memory.
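To put that tuning in perspective, here is a back-of-the-envelope sketch.
The 24-byte entry size is an assumption (the exact size of a pv entry
varies by architecture and FreeBSD version); the point is only that pv
entries are capped, allocated on demand, and charged to wired memory, so
they cannot show up in active + inactive:

```shell
#!/bin/sh
# Worst-case wired memory if vm.pmap.pv_entry_max were fully used.
# ASSUMPTION: ~24 bytes per pv entry (arch- and version-dependent).
pv_entry_max=73296250
pv_entry_bytes=24

worst_case_mb=$((pv_entry_max * pv_entry_bytes / 1024 / 1024))
echo "worst-case pv wired memory: ~${worst_case_mb}MB"
```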

> 
> 
> On Tue, 2012-03-06 at 17:48 -0700, Ian Lepore wrote:
> > In my experience, the bulk of the memory in the inactive category is
> > cached disk blocks, at least for ufs (I think zfs does things
> > differently).  On this desktop machine I have 12G physical and
> > typically have roughly 11G inactive, and I can unmount one particular
> > filesystem where most of my work is done and instantly I have almost
> > no inactive and roughly 11G free.
> 
> Okay, so this could be UFS disk cache, except the system is ZFS-on-root
> with no UFS filesystems active or mounted.  Can I confirm that no
> double-caching of ZFS data is happening in active + inactive (+ cache)
> memory?

ZFS double-buffers mmapped files.
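A hypothetical illustration of what that double-buffering means for the
accounting (the file size and the full-duplication assumption are both
made up for the example; in practice only the mmapped, touched portion of
a file is held twice, once in the ARC and once in the page cache):

```shell
#!/bin/sh
# HYPOTHETICAL example: a 512MB file that is fully mmapped and touched.
# On ZFS the data can be resident twice: once in the ARC (wired memory)
# and once in the page cache (active/inactive), so the accounting can
# see up to 2x the file's size.
mmapped_mb=512
arc_copy_mb=$mmapped_mb        # copy cached in the ARC
pagecache_copy_mb=$mmapped_mb  # copy backing the mappings
total_mb=$((arc_copy_mb + pagecache_copy_mb))
echo "resident footprint: up to ${total_mb}MB for ${mmapped_mb}MB of data"
```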

