Date:      Wed, 07 Mar 2012 09:53:08 +0000
From:      Luke Marsden <luke@hybrid-logic.co.uk>
To:        Konstantin Belousov <kostikbel@gmail.com>
Cc:        freebsd-fs@freebsd.org, team@hybrid-logic.co.uk
Subject:   Re: FreeBSD 8.2 - active plus inactive memory leak!?
Message-ID:  <1331113988.2589.64.camel@pow>
In-Reply-To: <20120307093109.GF75778@deviant.kiev.zoral.com.ua>
References:  <1331061203.2218.38.camel@pow> <4F569DFF.8040807@mac.com> <1331080581.2589.28.camel@pow> <20120307082338.GD75778@deviant.kiev.zoral.com.ua> <1331112366.2589.51.camel@pow> <20120307093109.GF75778@deviant.kiev.zoral.com.ua>

On Wed, 2012-03-07 at 11:31 +0200, Konstantin Belousov wrote:
> > > 
> > > The pages belonging to vnode vm object can be active or inactive or cached
> > > but not mapped into any process address space.
> > 
> > Thank you, Konstantin.  Does the number of vnodes we've got open on this
> > machine (272011) fully explain away the memory gap?
> > 
> >         Memory gap:
> >         11264M active + 2598M inactive - 9297M sum-of-resident = 4565M
> >         
> >         Active vnodes:
> >         vfs.numvnodes: 272011
> > 
> > That gives a lower bound of 17.18KB per vnode (or higher if we take
> > into account shared libs, etc.); that seems a bit high for a vnode vm
> > object, doesn't it?
> Vnode vm object keeps the set of pages belonging to the vnode. There is
> nothing bad (or good) there.

Thanks.  My question is: roughly how large should I expect these vnode
objects to be, in terms of the active + inactive memory they consume?
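
For concreteness, here is roughly how I'm computing the numbers above (a
minimal sketch, assuming the stock sysctl(8), ps(1) and awk(1) on 8.2;
the page size is divided down to KB first to keep the shell arithmetic
small):

    pgsz_kb=$(( $(sysctl -n vm.stats.vm.v_page_size) / 1024 ))
    act=$(sysctl -n vm.stats.vm.v_active_count)       # pages
    inact=$(sysctl -n vm.stats.vm.v_inactive_count)   # pages
    # Sum of per-process resident set sizes, in KB.
    rss_kb=$(ps ax -o rss= | awk '{ s += $1 } END { print s }')
    gap_kb=$(( (act + inact) * pgsz_kb - rss_kb ))
    vnodes=$(sysctl -n vfs.numvnodes)
    echo "gap: ${gap_kb}KB, per vnode: $(( gap_kb / vnodes ))KB"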

I'm trying to explain 5GB+ of memory which has "gone missing" on this
system.  Active memory usage is currently at 13G (and inactive at 1G),
even though the sum of the resident memory sizes in the output of 'ps'
comes to only 8557MB.

Can 5779M of memory be explained by 272011 vnode entries?
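
Spelling the arithmetic out (taking 1G = 1024M):

    13 * 1024M + 1 * 1024M      = 14336M active + inactive
    14336M - 8557M resident     =  5779M unaccounted for
    5779M * 1024 / 272011       ~=   21.8KB per vnode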

> > Okay, so this could be UFS disk cache, except the system is ZFS-on-root
> > with no UFS filesystems active or mounted.  Can I confirm that no
> > double-caching of ZFS data is happening in active + inactive (+ cache)
> > memory?
> 
> ZFS double-buffers the mmapped files.

The only mmap use on this system, to my knowledge, is Apache's
scoreboard, which is relatively small and doesn't explain the 5G
discrepancy.
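
If it helps, something along these lines should enumerate every
vnode-backed mapping on the box (again just a sketch: it assumes
procstat(1) from the 8.x base system, and that its type column prints
"vn" for vnode-backed map entries):

    # Dump the VM map of every process and keep only the vnode-backed
    # ("vn") entries, i.e. the mmapped files ZFS would double-buffer.
    for pid in $(ps ax -o pid=); do
        procstat -v "$pid" 2>/dev/null
    done | grep ' vn '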

Thanks,
Luke

-- 
CTO, Hybrid Logic
+447791750420  |  +1-415-449-1165  | www.hybrid-cluster.com 



