Date:      Wed, 08 May 2013 09:30:57 +0200
From:      Göran Löwkrantz <goran.lowkrantz@ismobile.com>
To:        Dewayne Geraghty <dewayne.geraghty@heuristicsystems.com.au>, 'Mikolaj Golub' <trociny@freebsd.org>
Cc:        freebsd-stable@freebsd.org, 'Kostik Belousov' <kib@freebsd.org>
Subject:   RE: Nullfs leaks i-nodes
Message-ID:  <2FBC9C8F12387387C1AEF445@[172.16.2.45]>
In-Reply-To: <56EF269F84824D8DB413D289BB8CBE19@as.lan>
References:  <B799E3B928B18B9E6C68F912@[172.16.2.62]> <20130507204149.GA3267@gmail.com> <56EF269F84824D8DB413D289BB8CBE19@as.lan>



--On May 8, 2013 8:35:18 +1000 Dewayne Geraghty
<dewayne.geraghty@heuristicsystems.com.au> wrote:

>> -----Original Message-----
>> From: owner-freebsd-stable@freebsd.org
>> [mailto:owner-freebsd-stable@freebsd.org] On Behalf Of Mikolaj Golub
>> Sent: Wednesday, 8 May 2013 6:42 AM
>> To: Göran Löwkrantz
>> Cc: Kostik Belousov; freebsd-stable@freebsd.org
>> Subject: Re: Nullfs leaks i-nodes
>>
>> On Tue, May 07, 2013 at 08:30:06AM +0200, Göran Löwkrantz wrote:
>> > I created a PR, kern/178238, on this but would like to know
>> if anyone has
>> > any ideas or patches?
>> >
>> > Have updated the system where I see this to FreeBSD
>> 9.1-STABLE #0 r250229
>> > and still have the problem.
>>
>> I am observing an effect that might look like an inode leak, which I
>> think is due to the free nullfs vnode caching recently added by kib
>> (r240285): the free inode count does not increase after unlink; but if
>> I purge the free vnode cache (temporarily setting vfs.wantfreevnodes
>> to 0 and watching vfs.freevnodes decrease to 0), the free inode count
>> grows back.
>>
>> You have only about 1000 inodes available on your underlying fs, while
>> vfs.wantfreevnodes is, I think, much higher, which results in running
>> out of inodes.
>>
>> If that is really your case, you can disable the caching by mounting
>> nullfs with the nocache option (it looks like caching is not important
>> in your case).
>>
>> --
>> Mikolaj Golub
>> _______________________________________________
>> freebsd-stable@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
>> To unsubscribe, send any mail to
>> "freebsd-stable-unsubscribe@freebsd.org"
>>
>
> Hi Goran,
>
> After I included Kib's vnode caching patch, the performance of my "port
> builder" machine decreased significantly.  The "port builder" is one of
> many jails, and nullfs is used extensively; I was starving the system
> of vnodes.  Increasing kern.maxvnodes resulted in better performance
> than the original system configuration without vnode caching. Thanks
> Kib :)
>
> I don't think you'll run out of vnodes, as it is self-adjusting (that
> was my concern too).
>
> I changed kern.maxvnodes to approximately 3 times what it wanted and
> tuned for my needs. Try it and keep an eye on:
> sysctl vfs.numvnodes vfs.wantfreevnodes vfs.freevnodes \
>     vm.stats.vm.v_vnodepgsout vm.stats.vm.v_vnodepgsin
>
> Regards, Dewayne
>
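(As an aside, for anyone reading this in the archive: Mikolaj's nocache
suggestion would look something like the following; the paths are only
illustrative.)

    # one-off mount with free-vnode caching disabled:
    mount -t nullfs -o nocache /usr/ports /jail/build/usr/ports

    # or the equivalent /etc/fstab entry:
    /usr/ports  /jail/build/usr/ports  nullfs  rw,nocache  0  0
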
Hi Dewayne,

I got a few of those too, but I didn't connect them with the FW problem,
as here there seems to be reclaim pressure.

On the FW I get these numbers:
vfs.numvnodes: 7500
vfs.wantfreevnodes: 27936
vfs.freevnodes: 5663
vm.stats.vm.v_vnodepgsout: 0
vm.stats.vm.v_vnodepgsin: 4399

while on the jail systems I get something like this:
vfs.numvnodes: 51212
vfs.wantfreevnodes: 35668
vfs.freevnodes: 35665
vm.stats.vm.v_vnodepgsout: 5952
vm.stats.vm.v_vnodepgsin: 939563

and as far as I can understand, the fact that vfs.wantfreevnodes and
vfs.freevnodes are almost the same suggests that we have reclaim
pressure.

So one fix for small NanoBSD systems would be to lower
vfs.wantfreevnodes; I will test that on a virtual machine and see if I
can get better reclaim.
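
Lowering it would be something like this (the value below is only an
illustration; the right number has to be tuned per machine):

    # at runtime:
    sysctl vfs.wantfreevnodes=8192
    # persistently, via /etc/sysctl.conf:
    vfs.wantfreevnodes=8192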

MVH
	Göran
