Date: Fri, 13 Jul 2007 00:10:48 +0100
From: "Joao Barros" <joao.barros@gmail.com>
To: "Pawel Jakub Dawidek" <pjd@freebsd.org>
Cc: current@freebsd.org
Subject: Re: ZFS leaking vnodes (sort of)
Message-ID: <70e8236f0707121610v65bacaa0pcaf45c62516ab424@mail.gmail.com>
In-Reply-To: <20070709000918.GD1208@garage.freebsd.pl>
References: <200707071426.18202.dfr@rabson.org> <20070709000918.GD1208@garage.freebsd.pl>
On 7/9/07, Pawel Jakub Dawidek <pjd@freebsd.org> wrote:
> On Sat, Jul 07, 2007 at 02:26:17PM +0100, Doug Rabson wrote:
> > I've been testing ZFS recently and I noticed some performance issues
> > while doing large-scale port builds on a ZFS-mounted /usr/ports tree.
> > Eventually I realised that virtually nothing ever ended up on the vnode
> > free list. This meant that when the system reached its maximum vnode
> > limit, it had to resort to reclaiming vnodes from the various
> > filesystems' active vnode lists (via vlrureclaim). Since those lists
> > are not sorted in LRU order, this led to pessimal cache performance
> > after the system got into that state.
> >
> > I looked a bit closer at the ZFS code and poked around with DDB, and I
> > think the problem was caused by a couple of extraneous calls to vhold
> > when creating a new ZFS vnode. On FreeBSD, getnewvnode returns a vnode
> > which is already held (not on the free list), so there is no need to
> > call vhold again.
>
> Whoa! Nice catch... The patch works here - I did some pretty heavy
> tests, so please commit it ASAP.
>
> I also wonder if this can help with some of those 'kmem_map too small'
> panics. I was observing that the ARC cannot reclaim memory, and this may
> be because all vnodes, and thus their associated data, are being held.
>
> To ZFS users having problems with performance and/or stability of ZFS:
> Can you test the patch and see if it helps?
>

I recompiled my system after Doug committed this patch 3 days ago. Unless I
set kern.maxvnodes to 50000, I can still panic my machine by running ls -R
after a recursive chown on some thousands of files and directories:

panic: kmem_malloc(16384): kmem_map too small: 326066176 total allocated

Before this patch the system panicked very easily, early in the chown
process. Now it completes the chown on those thousands of files and
directories and only panics later, during the ls -R. It's an improvement,
but something else is still there...
--
Joao Barros