Date:      Tue, 3 Nov 2015 11:04:48 +0200
From:      Konstantin Belousov <kostikbel@gmail.com>
To:        Kirk McKusick <mckusick@mckusick.com>
Cc:        Bruce Evans <brde@optusnet.com.au>, fs@freebsd.org
Subject:   Re: an easy (?) question on namecache sizing
Message-ID:  <20151103090448.GC2257@kib.kiev.ua>
In-Reply-To: <201511030447.tA34lo5O090332@chez.mckusick.com>
References:  <20151102224910.E2203@besplex.bde.org> <201511030447.tA34lo5O090332@chez.mckusick.com>

On Mon, Nov 02, 2015 at 08:47:50PM -0800, Kirk McKusick wrote:
> You seem to be proposing several approaches. One is to make
> wantfreevnodes bigger (half or three-quarters of the maximum).
> Another seems to be reverting to the previous (freevnodes >= wantfreevnodes
> && numvnodes >= minvnodes). So what is your proposed change? 

Free vnodes can be reclaimed in a soft fashion by the vnlru daemon, or
in a hard manner by getnewvnode() when the maximum vnode count is
reached. The 'soft' way skips vnodes which are directories, to make it
more probable that vn_fullpath() succeeds, and it also applies a
threshold on the count of cached pages. The 'hard' way waits up to 1
second for the vnlru daemon to succeed before forcing a recycle of an
arbitrary vnode, regardless of the 'soft' stoppers. This causes the
ticking behaviour of the system, where only one vnode operation in a
single thread succeeds per second.
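
To illustrate the difference, here is a minimal userland sketch of the
two policies; the struct, the field names, and the trigger threshold
are my simplifications for the example, not the actual
sys/kern/vfs_subr.c code:

/*
 * Userland sketch of the two reclamation policies described above.
 * The struct layout and TRIGGER_POINT value are illustrative only.
 */
#include <stdbool.h>
#include <stdio.h>

struct vnode_sketch {
	bool	is_directory;	/* soft pass skips directories */
	int	cached_pages;	/* resident pages backing the vnode */
	int	usecount;	/* only unreferenced vnodes are reclaimable */
};

/* Illustrative threshold; the kernel computes its own trigger point. */
#define	TRIGGER_POINT	4

/*
 * Soft policy, as run by the vnlru daemon: skip directories so that
 * vn_fullpath() keeps working, and skip vnodes with many cached pages.
 */
static bool
soft_can_recycle(const struct vnode_sketch *vp)
{
	if (vp->usecount != 0)
		return (false);
	if (vp->is_directory)
		return (false);
	if (vp->cached_pages > TRIGGER_POINT)
		return (false);
	return (true);
}

/*
 * Hard policy, as used by getnewvnode() after waiting up to a second
 * for the daemon: any unreferenced vnode goes, 'soft' stoppers or not.
 */
static bool
hard_can_recycle(const struct vnode_sketch *vp)
{
	return (vp->usecount == 0);
}

int
main(void)
{
	struct vnode_sketch dir = { true, 0, 0 };
	struct vnode_sketch big = { false, 100, 0 };

	printf("dir: soft=%d hard=%d\n", soft_can_recycle(&dir),
	    hard_can_recycle(&dir));
	printf("big: soft=%d hard=%d\n", soft_can_recycle(&big),
	    hard_can_recycle(&big));
	return (0);
}

The point of the sketch is that anything the soft pass refuses stays
eligible for the hard pass, which is exactly why the hard pass firing
once a second produces the visible ticking.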

A large wantfreevnodes value is a safety measure that prevents the tick
steps in practice. My initial reaction to the complaint was simply to
suggest increasing desiredvnodes; at least that is what I do on
machines that have plenty of both KVA and memory and where intensive
file loads are expected.
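
For reference, a trivial userland snippet that reads and raises the
limit; kern.maxvnodes is the sysctl behind desiredvnodes, and the
doubling below is only an example value, pick what fits your KVA:

/*
 * Read kern.maxvnodes (the sysctl for desiredvnodes) and double it.
 * Setting it requires root; the factor of two is just an example.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
	int cur, new;
	size_t len = sizeof(cur);

	if (sysctlbyname("kern.maxvnodes", &cur, &len, NULL, 0) == -1)
		err(1, "sysctlbyname(kern.maxvnodes)");
	printf("desiredvnodes: %d\n", cur);

	new = cur * 2;
	if (sysctlbyname("kern.maxvnodes", NULL, NULL, &new,
	    sizeof(new)) == -1)
		err(1, "raising kern.maxvnodes");
	printf("raised to: %d\n", new);
	return (0);
}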
