Date:      Sat, 27 Feb 2010 21:32:39 +0100
From:      Eirik Øverby <ltning@anduin.net>
To:        Jeremy Chadwick <freebsd@jdc.parodius.com>
Cc:        freebsd-fs@freebsd.org, stable@freebsd.org, Willem Jan Withagen <wjw@digiware.nl>
Subject:   Re: mbuf leakage with nfs/zfs?
Message-ID:  <BD8AC9F6-DF96-41F9-8E92-48A4E5606DC7@anduin.net>
In-Reply-To: <20100227193819.GA60576@icarus.home.lan>
References:  <20100226141754.86ae5a3f.gerrit@pmp.uni-hannover.de> <E1Nl1mb-0002Mx-M9@kabab.cs.huji.ac.il> <E1Nl2JK-00033U-Fw@kabab.cs.huji.ac.il> <20100226174021.8feadad9.gerrit@pmp.uni-hannover.de> <E1Nl6VA-000557-D9@kabab.cs.huji.ac.il> <20100226224320.8c4259bf.gerrit@pmp.uni-hannover.de> <4B884757.9040001@digiware.nl> <20100227080220.ac6a2e4d.gerrit@pmp.uni-hannover.de> <4B892918.4080701@digiware.nl> <20100227202105.f31cbef7.gerrit@pmp.uni-hannover.de> <20100227193819.GA60576@icarus.home.lan>

On 27 Feb 2010, at 20:38, Jeremy Chadwick wrote:

> On Sat, Feb 27, 2010 at 08:21:05PM +0100, Gerrit Kühn wrote:
>> On Sat, 27 Feb 2010 15:15:52 +0100 Willem Jan Withagen <wjw@digiware.nl>
>> wrote about Re: mbuf leakage with nfs/zfs?:
>>
>> WJW> > 81492/2613/84105 mbufs in use (current/cache/total)
>> WJW> > 80467/2235/82702/128000 mbuf clusters in use (current/cache/total/max)
>> WJW> > 80458/822 mbuf+clusters out of packet secondary zone in use (current/cache)
>>
>> WJW> Overnight I only had rsync and FreeBSD NFS traffic.
>> WJW>
>> WJW> 45337/2828/48165 mbufs in use (current/cache/total)
>> WJW> 44708/1902/46610/262144 mbuf clusters in use (current/cache/total/max)
>> WJW> 44040/888 mbuf+clusters out of packet secondary zone in use (current/cache)
>>
>> After about 24h I now have
>>
>> 128320/2630/130950 mbufs in use (current/cache/total)
>> 127294/1200/128494/512000 mbuf clusters in use (current/cache/total/max)
>> 127294/834 mbuf+clusters out of packet secondary zone in use (current/cache)
>
> Follow-up regarding my server statistics shown here:
>
> http://lists.freebsd.org/pipermail/freebsd-stable/2010-February/055458.html
>
> I just pulled the statistics on the same servers for comparison (then
> vs. now).
>
> RELENG_7 amd64 2010/01/09 -- primary HTTP, pri DNS, SSH server + ZFS
>
> 	515/1930/2445 mbufs in use (current/cache/total)
> 	512/540/1052/25600 mbuf clusters in use (current/cache/total/max)
> 	1152K/6394K/7547K bytes allocated to network (current/cache/total)
>
> RELENG_7 amd64 2010/01/11 -- secondary DNS, MySQL, dev box + ZFS
>
> 	514/1151/1665 mbufs in use (current/cache/total)
> 	512/504/1016/25600 mbuf clusters in use (current/cache/total/max)
> 	1152K/2203K/3356K bytes allocated to network (current/cache/total)
>
> RELENG_7 i386 2008/04/19 -- secondary HTTP, SSH server, heavy memory I/O
>
> 	515/820/1335 mbufs in use (current/cache/total)
> 	513/631/1144/25600 mbuf clusters in use (current/cache/total/max)
> 	1154K/2615K/3769K bytes allocated to network (current/cache/total)
>
> RELENG_8 amd64 2010/02/02 -- central backups + NFS+ZFS-based filer
>
> 	1572/3423/4995 mbufs in use (current/cache/total)
> 	1539/3089/4628/25600 mbuf clusters in use (current/cache/total/max)
> 	3471K/7449K/10920K bytes allocated to network (current/cache/total)
>
> So, not much difference.
>
> I should point out that the NFS+ZFS-based filer doesn't actually do its
> backups using NFS; it uses rsnapshot (rsync) over SSH.  There is intense
> network I/O during backup time though, depending on how much data there
> is to back up.  The NFS mounts (on the clients) are only used to provide
> a way for people to get access to their nightly backups in a convenient
> way; they aren't used very heavily.
>
> I can do something NFS-intensive on any of the above clients if people
> want me to do that kind of testing.  Possibly an rsync with the NFS
> mount as the source and the local disk as the destination would be a
> good test?  Let me know if anyone's interested in me testing that.
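
For reference, a minimal sketch of the kind of test Jeremy describes; the
host name, export path and mount point below are placeholders, not taken
from any actual setup in this thread:

	# On a client: mount the FreeBSD export, then copy a large tree from
	# the NFS mount to local disk.
	mount -t nfs -o nfsv3 filer.example.org:/tank/backups /mnt/nfstest
	rsync -a /mnt/nfstest/ /var/tmp/nfstest-copy/

	# Meanwhile, on the server, watch whether the mbuf counters keep
	# climbing after the transfer has finished.
	while true; do date; netstat -m | head -3; sleep 60; done

If the counters settle back to roughly their pre-test values once traffic
stops, that client/transport combination is probably not the one leaking.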

I've been discussing this with some folks for a while. I can easily reproduce
this situation by mounting a FreeBSD ZFS filesystem over NFS/UDP from an
OpenBSD machine. Telling the OpenBSD machine to use TCP instead of UDP makes
the problem go away.
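
In case it helps anyone trying to reproduce this, the two cases on the
OpenBSD client look roughly like the following (server name and paths are
placeholders); OpenBSD's mount_nfs uses UDP by default and takes -T to
switch to TCP:

	# UDP (the default) -- this is the case that makes the FreeBSD
	# server leak mbufs here.
	mount_nfs freebsd-server:/tank/share /mnt/share

	# TCP -- with this the leak does not show up.
	mount_nfs -T freebsd-server:/tank/share /mnt/share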

Other FreeBSD systems mounting the same share, over either UDP or TCP, do not
trigger the problem.
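
The corresponding mounts on the FreeBSD clients (again with placeholder
names) would be along these lines; neither variant triggers the leak on the
server here:

	mount -t nfs -o nfsv3,udp freebsd-server:/tank/share /mnt/share
	mount -t nfs -o nfsv3,tcp freebsd-server:/tank/share /mnt/share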

A patch was suggested by Rick Macklem, but that did not solve the issue:
http://lists.freebsd.org/pipermail/freebsd-current/2009-December/014181.html

/Eirik



> --
> | Jeremy Chadwick                                   jdc@parodius.com |
> | Parodius Networking                       http://www.parodius.com/ |
> | UNIX Systems Administrator                  Mountain View, CA, USA |
> | Making life hard for others since 1977.              PGP: 4BD6C0CB |
>
> _______________________________________________
> freebsd-stable@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"
>



