From: Doug Rabson <dfr@rabson.org>
To: Pawel Jakub Dawidek
Cc: current@freebsd.org
Date: Mon, 9 Jul 2007 08:48:49 +0100
Subject: Re: ZFS leaking vnodes (sort of)

On Monday 09 July 2007, Pawel Jakub Dawidek wrote:
> On Sat, Jul 07, 2007 at 02:26:17PM +0100, Doug Rabson wrote:
> > I've been testing ZFS recently and I noticed some performance
> > issues while doing large-scale port builds on a ZFS-mounted
> > /usr/ports tree. Eventually I realised that virtually nothing ever
> > ended up on the vnode free list. This meant that when the system
> > reached its maximum vnode limit, it had to resort to reclaiming
> > vnodes from the various filesystems' active vnode lists (via
> > vlrureclaim). Since those lists are not sorted in LRU order, this
> > led to pessimal cache performance once the system got into that
> > state.
> >
> > I looked a bit closer at the ZFS code, poked around with DDB, and
> > I think the problem was caused by a couple of extraneous calls to
> > vhold when creating a new ZFS vnode. On FreeBSD, getnewvnode
> > returns a vnode which is already held (not on the free list), so
> > there is no need to call vhold again.
>
> Whoa! Nice catch... The patch works here - I did some pretty heavy
> tests, so please commit it ASAP.
>
> I also wonder if this can help with some of those 'kmem_map too
> small' panics. I was observing that the ARC cannot reclaim memory,
> and this may be because all vnodes, and thus their associated data,
> are being held.
>
> To ZFS users having problems with performance and/or stability of
> ZFS: Can you test the patch and see if it helps?

I think it should help the memory usage problems - for one thing,
since the vnodes were never hitting the free list, VOP_INACTIVE wasn't
being called on them after last close, which (I think) is supposed to
flush out various things. Not quite sure about that bit. Certainly the
patch reduces the number of active vnodes in the system back down to
the wantfreevnodes value.
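
For anyone following along at home, here is a minimal sketch of the
pattern in question. This is not the actual ZFS code - the function
name, vop table and tag are made up for illustration - but it shows why
the extra vhold keeps a vnode off the free list forever:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/mount.h>
    #include <sys/vnode.h>

    /* Illustrative only; not the real ZFS vop table. */
    extern struct vop_vector example_vnodeops;

    static int
    example_vnode_alloc(struct mount *mp, struct vnode **vpp)
    {
            struct vnode *vp;
            int error;

            /*
             * getnewvnode() hands back a vnode that is already held
             * (v_holdcnt == 1), i.e. already off the free list.
             */
            error = getnewvnode("example", mp, &example_vnodeops, &vp);
            if (error != 0)
                    return (error);

            /*
             * The bug: taking a second hold here means the hold count
             * can never fall back to zero, so the vnode never returns
             * to the free list and vlrureclaim() has to tear it out
             * of the mount point's active list instead.
             *
             * vhold(vp);           XXX extraneous - remove
             */

            *vpp = vp;
            return (0);
    }

With the extra vhold gone, dropping the last reference lets the hold
count reach zero again, so the vnode goes back onto the free list in
roughly LRU order, which is what the vnode recycling code wants.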