Date:      Mon, 9 Jul 2007 08:48:49 +0100
From:      Doug Rabson <dfr@rabson.org>
To:        Pawel Jakub Dawidek <pjd@freebsd.org>
Cc:        current@freebsd.org
Subject:   Re: Re: ZFS leaking vnodes (sort of)
Message-ID:  <200707090848.50190.dfr@rabson.org>
In-Reply-To: <20070709000918.GD1208@garage.freebsd.pl>
References:  <200707071426.18202.dfr@rabson.org> <20070709000918.GD1208@garage.freebsd.pl>

On Monday 09 July 2007, Pawel Jakub Dawidek wrote:
> On Sat, Jul 07, 2007 at 02:26:17PM +0100, Doug Rabson wrote:
> > I've been testing ZFS recently and I noticed some performance
> > issues while doing large-scale port builds on a ZFS mounted
> > /usr/ports tree. Eventually I realised that virtually nothing ever
> > ended up on the vnode free list. This meant that when the system
> > reached its maximum vnode limit, it had to resort to reclaiming
> > vnodes from the various filesystems' active vnode lists (via
> > vlrureclaim). Since those lists are not sorted in LRU order, this
> > led to pessimal cache performance after the system got into that
> > state.
> >
> > I looked a bit closer at the ZFS code and poked around with DDB and
> > I think the problem was caused by a couple of extraneous calls to
> > vhold when creating a new ZFS vnode. On FreeBSD, getnewvnode
> > returns a vnode which is already held (not on the free list) so
> > there is no need to call vhold again.
>
> Whoa! Nice catch... The patch works here - I did some pretty heavy
> tests, so please commit it ASAP.
>
> I also wonder if this can help with some of those 'kmem_map too
> small' panics. I was observing that ARC cannot reclaim memory and
> this may be because all vnodes and thus associated data are being
> held.
>
> To ZFS users having problems with performance and/or stability of
> ZFS: Can you test the patch and see if it helps?
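
For anyone who hasn't looked at the VFS side of this, the pattern being
described is roughly the following. This is only a sketch, not the actual
zfs_znode.c code; the function name, the "zp" argument and the include set
are stand-ins, and only getnewvnode()/vhold() are the calls actually in
question:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/mount.h>
    #include <sys/vnode.h>

    /* The real vop vector lives in the ZFS code. */
    extern struct vop_vector zfs_vnodeops;

    /*
     * Hypothetical znode-to-vnode allocation path, illustrating the
     * pattern discussed above.
     */
    static int
    znode_getvnode(struct mount *mp, void *zp, struct vnode **vpp)
    {
            struct vnode *vp;
            int error;

            /*
             * getnewvnode() hands back a vnode that is already held:
             * it is off the free list with a hold count of one.
             */
            error = getnewvnode("zfs", mp, &zfs_vnodeops, &vp);
            if (error != 0)
                    return (error);
            vp->v_data = zp;

            /*
             * The extraneous hold the patch removes.  With it in place
             * the hold count never drops back to zero, so the vnode
             * never reaches the free list and can only be recycled by
             * vlrureclaim off the mount point's active vnode list.
             */
            /* vhold(vp); */

            *vpp = vp;
            return (0);
    }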

I think it should help the memory usage problems - for one thing, since 
the vnodes were never hitting the free list, VOP_INACTIVE wasn't being 
properly called on them after last-close, which (I think) is supposed to 
flush out various things. I'm not quite sure about that bit. Certainly it 
reduces the number of active vnodes in the system back down to the 
wantfreevnodes value.
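
If anyone testing the patch wants to watch this directly, the stock vnode
counters are enough; here is a trivial userland program (nothing
patch-specific, just kern.maxvnodes and the vfs.* sysctls) to dump them:

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <err.h>
    #include <stdio.h>
    #include <string.h>

    /*
     * Fetch a numeric sysctl; some of these counters are ints and
     * others are longs, so cope with either size.
     */
    static long
    get_num(const char *name)
    {
            long lval = 0;
            int ival;
            size_t len;

            len = sizeof(lval);
            if (sysctlbyname(name, &lval, &len, NULL, 0) != 0)
                    err(1, "%s", name);
            if (len == sizeof(ival)) {
                    memcpy(&ival, &lval, sizeof(ival));
                    return ((long)ival);
            }
            return (lval);
    }

    int
    main(void)
    {
            printf("kern.maxvnodes:     %ld\n", get_num("kern.maxvnodes"));
            printf("vfs.numvnodes:      %ld\n", get_num("vfs.numvnodes"));
            printf("vfs.freevnodes:     %ld\n", get_num("vfs.freevnodes"));
            printf("vfs.wantfreevnodes: %ld\n", get_num("vfs.wantfreevnodes"));
            return (0);
    }

If the fix is working, vfs.freevnodes should stay populated during a ports
build instead of sitting near zero while vfs.numvnodes is pinned at
kern.maxvnodes.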


