From: Jeremy Chadwick <jdc@koitsu.org>
To: Dewayne Geraghty
Cc: 'Mikolaj Golub', freebsd-stable@freebsd.org, 'Kostik Belousov'
Date: Tue, 7 May 2013 16:14:34 -0700
Subject: Re: Nullfs leaks i-nodes
Message-ID: <20130507231434.GA47954@icarus.home.lan>
In-Reply-To: <56EF269F84824D8DB413D289BB8CBE19@as.lan>
List-Id: Production branch of FreeBSD source code

On Wed, May 08, 2013 at 08:35:18AM +1000, Dewayne Geraghty wrote:
> > -----Original Message-----
> > From: owner-freebsd-stable@freebsd.org
> > [mailto:owner-freebsd-stable@freebsd.org] On Behalf Of Mikolaj Golub
> > Sent: Wednesday, 8 May 2013 6:42 AM
> > To: Göran Löwkrantz
> > Cc: Kostik Belousov; freebsd-stable@freebsd.org
> > Subject: Re: Nullfs leaks i-nodes
> >
> > On Tue, May 07, 2013 at 08:30:06AM +0200, Göran Löwkrantz wrote:
> > > I created a PR, kern/178238, on this but would like to know
> > > if anyone has any ideas or patches?
> > >
> > > Have updated the system where I see this to FreeBSD
> > > 9.1-STABLE #0 r250229 and still have the problem.
> >
> > I am observing an effect that might look like an inode leak, which I
> > think is due to the free nullfs vnode caching recently added by kib
> > (r240285): the free inode count does not increase after unlink; but if I
> > purge the free vnode cache (temporarily setting vfs.wantfreevnodes to 0
> > and observing vfs.freevnodes decrease to 0) the inode count grows back.
> >
> > You have only about 1000 inodes available on your underlying fs, while
> > vfs.wantfreevnodes I think is much higher, resulting in running out of
> > i-nodes.
> >
> > If this is really your case, you can disable caching by mounting nullfs
> > with nocache (it looks like caching is not important in your case).
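For reference, the two workarounds Mikolaj describes can be scripted roughly as follows (a sketch, assuming a FreeBSD host with root access; the sysctl names come from this thread, but the mountpoint paths /usr/src and /mnt/src are placeholders):

```shell
# Purge the free-vnode cache: drop the target to 0, watch the cache
# drain, then restore the previous value.
old=$(sysctl -n vfs.wantfreevnodes)
sysctl vfs.wantfreevnodes=0
sysctl vfs.freevnodes            # should fall toward 0 as the cache drains
sysctl vfs.wantfreevnodes="$old"

# Alternatively, disable nullfs vnode caching for one mount entirely:
mount -t nullfs -o nocache /usr/src /mnt/src
```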
> >
> > --
> > Mikolaj Golub
> > _______________________________________________
> > freebsd-stable@freebsd.org mailing list
> > http://lists.freebsd.org/mailman/listinfo/freebsd-stable
> > To unsubscribe, send any mail to
> > "freebsd-stable-unsubscribe@freebsd.org"
>
> Hi Goran,
>
> After I included Kib's vnode caching patch, performance on my "port
> builder" machine decreased significantly. The "port builder" is one of
> many jails, and nullfs is used extensively. I was starving the system of
> vnodes. Increasing kern.maxvnodes resulted in better performance than the
> original system configuration without vnode caching. Thanks Kib :)
>
> I don't think you'll run out of vnodes, as it is self-adjusting (that was
> my concern too).
>
> I changed kern.maxvnodes to approx 3 times what it wanted and tuned for
> my needs. Try it and keep an eye on:
> sysctl vfs.numvnodes vfs.wantfreevnodes vfs.freevnodes vm.stats.vm.v_vnodepgsout vm.stats.vm.v_vnodepgsin

Telling people "keep an eye on these sysctls" is not exactly helpful when
there isn't an understanding of what they represent -- it's akin to people
using munin or SNMP polling software to "monitor some MIBs" without actually
knowing, truly, deep down inside, what it is they're looking at. (I cannot
tell you how often this happens. In fact, most "systems monitoring"
software/graphs/other crap I see these days suffers from exactly this.)

The only thing I'm aware of is what's in vnode(9) and what I could find
here:

http://www.youtube.com/watch?v=SpS7Ajnx9Q8
http://bsd-id.blogspot.com/2007/11/vnode.html

All said -- has anyone actually seen vfs.freevnodes hit 0? On some of my
systems I've seen it reach "small numbers" (in the 3-digit range), but it
would later increase (to the mid-4-digit range), even after lots of
(new/unique, i.e. not previously cached) file I/O.
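For anyone who does want to watch those counters over time, a trivial loop is enough (a sketch, assuming FreeBSD's sysctl(8); the 10-second interval is arbitrary):

```shell
# Print a timestamped snapshot of the vnode counters every 10 seconds.
while :; do
    date
    sysctl vfs.numvnodes vfs.wantfreevnodes vfs.freevnodes \
           vm.stats.vm.v_vnodepgsout vm.stats.vm.v_vnodepgsin
    sleep 10
done
```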
So the "auto-adjusting" nature of this makes it very hard for one to say
"keep an eye on these sysctls" when the administrator does not know when
he/she should become concerned or consider increasing kern.maxvnodes.

Next, maxvnodes - numvnodes != freevnodes, which doesn't make sense to me:

$ sysctl kern.maxvnodes vfs.freevnodes vfs.wantfreevnodes vfs.numvnodes
kern.maxvnodes: 393216
vfs.freevnodes: 51543
vfs.wantfreevnodes: 51545
vfs.numvnodes: 244625
$ expr 393216 - 244625
148591

And finally, the lack of a sysctl description for vfs.wantfreevnodes is
quite bothersome:

$ sysctl -d kern.maxvnodes vfs.freevnodes vfs.wantfreevnodes vfs.numvnodes
kern.maxvnodes: Maximum number of vnodes
vfs.freevnodes: Number of vnodes in the free list
vfs.wantfreevnodes:
vfs.numvnodes: Number of vnodes in existence

-- 
| Jeremy Chadwick                                   jdc@koitsu.org |
| UNIX Systems Administrator                http://jdc.koitsu.org/ |
| Mountain View, CA, US                                            |
| Making life hard for others since 1977.             PGP 4BD6C0CB |
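One plausible reading of the mismatch (my interpretation, not something stated in vnode(9)): freevnodes counts vnodes that already exist but sit on the free list, so they are included in numvnodes; maxvnodes - numvnodes is then allocation headroom, not the size of the free list, and the two need not be equal. Restating the arithmetic from the sysctl output above:

```shell
# Numbers copied from the sysctl output above.
maxvnodes=393216
numvnodes=244625
freevnodes=51543

# Headroom: how many more vnodes could still be allocated.
headroom=$((maxvnodes - numvnodes))
echo "headroom (maxvnodes - numvnodes): $headroom"
echo "free list (freevnodes):           $freevnodes"
# The two measure different things, so they need not be equal.
```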