Date: Sat, 27 Feb 2010 11:38:19 -0800
From: Jeremy Chadwick <freebsd@jdc.parodius.com>
To: Gerrit Kühn <gerrit@pmp.uni-hannover.de>
Cc: freebsd-fs@freebsd.org, stable@freebsd.org, Willem Jan Withagen <wjw@digiware.nl>
Subject: Re: mbuf leakage with nfs/zfs?
Message-ID: <20100227193819.GA60576@icarus.home.lan>
In-Reply-To: <20100227202105.f31cbef7.gerrit@pmp.uni-hannover.de>
References: <20100226141754.86ae5a3f.gerrit@pmp.uni-hannover.de>
 <E1Nl1mb-0002Mx-M9@kabab.cs.huji.ac.il>
 <E1Nl2JK-00033U-Fw@kabab.cs.huji.ac.il>
 <20100226174021.8feadad9.gerrit@pmp.uni-hannover.de>
 <E1Nl6VA-000557-D9@kabab.cs.huji.ac.il>
 <20100226224320.8c4259bf.gerrit@pmp.uni-hannover.de>
 <4B884757.9040001@digiware.nl>
 <20100227080220.ac6a2e4d.gerrit@pmp.uni-hannover.de>
 <4B892918.4080701@digiware.nl>
 <20100227202105.f31cbef7.gerrit@pmp.uni-hannover.de>
On Sat, Feb 27, 2010 at 08:21:05PM +0100, Gerrit Kühn wrote:
> On Sat, 27 Feb 2010 15:15:52 +0100 Willem Jan Withagen <wjw@digiware.nl>
> wrote about Re: mbuf leakage with nfs/zfs?:
>
> WJW> > 81492/2613/84105 mbufs in use (current/cache/total)
> WJW> > 80467/2235/82702/128000 mbuf clusters in use (current/cache/total/max)
> WJW> > 80458/822 mbuf+clusters out of packet secondary zone in use (current/cache)
>
> WJW> Over the night I only had rsync and FreeBSD nfs traffic.
> WJW>
> WJW> 45337/2828/48165 mbufs in use (current/cache/total)
> WJW> 44708/1902/46610/262144 mbuf clusters in use (current/cache/total/max)
> WJW> 44040/888 mbuf+clusters out of packet secondary zone in use (current/cache)
>
> After about 24h I now have
>
> 128320/2630/130950 mbufs in use (current/cache/total)
> 127294/1200/128494/512000 mbuf clusters in use (current/cache/total/max)
> 127294/834 mbuf+clusters out of packet secondary zone in use (current/cache)

Follow-up regarding my server statistics shown here:

http://lists.freebsd.org/pipermail/freebsd-stable/2010-February/055458.html

I just pulled the statistics on the same servers for comparison (then
vs. now).

RELENG_7 amd64 2010/01/09 -- primary HTTP, pri DNS, SSH server + ZFS
515/1930/2445 mbufs in use (current/cache/total)
512/540/1052/25600 mbuf clusters in use (current/cache/total/max)
1152K/6394K/7547K bytes allocated to network (current/cache/total)

RELENG_7 amd64 2010/01/11 -- secondary DNS, MySQL, dev box + ZFS
514/1151/1665 mbufs in use (current/cache/total)
512/504/1016/25600 mbuf clusters in use (current/cache/total/max)
1152K/2203K/3356K bytes allocated to network (current/cache/total)

RELENG_7 i386 2008/04/19 -- secondary HTTP, SSH server, heavy memory I/O
515/820/1335 mbufs in use (current/cache/total)
513/631/1144/25600 mbuf clusters in use (current/cache/total/max)
1154K/2615K/3769K bytes allocated to network (current/cache/total)

RELENG_8 amd64 2010/02/02 -- central backups + NFS+ZFS-based filer
1572/3423/4995 mbufs in use (current/cache/total)
1539/3089/4628/25600 mbuf clusters in use (current/cache/total/max)
3471K/7449K/10920K bytes allocated to network (current/cache/total)

So, not much difference.

I should point out that the NFS+ZFS-based filer doesn't actually do its
backups using NFS; it uses rsnapshot (rsync) over SSH. There is intense
network I/O during backup time though, depending on how much data there
is to back up. The NFS mounts (on the clients) are only there to give
people convenient access to their nightly backups; they aren't used very
heavily.

I can do something NFS-intensive on any of the above clients if people
want me to do that kind of testing. Possibly an rsync with the NFS mount
as the source and a local disk as the destination would be a good test?
Let me know if anyone's interested in me testing that. (Rough sketches
of what I have in mind follow after my signature.)

-- 
| Jeremy Chadwick                                 jdc@parodius.com |
| Parodius Networking                    http://www.parodius.com/ |
| UNIX Systems Administrator               Mountain View, CA, USA |
| Making life hard for others since 1977.            PGP: 4BD6C0CB |
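For anyone who wants to watch whether the counters above keep climbing,
a minimal sampling loop along the following lines should do; the
ten-minute interval and the log path are arbitrary choices of mine, not
something any of the machines above is actually running:

    #!/bin/sh
    # Append a timestamped snapshot of the mbuf counters every 10 minutes.
    LOG=/var/tmp/mbuf-usage.log
    while :; do
        date '+%Y-%m-%d %H:%M:%S' >> "$LOG"
        netstat -m | grep -E 'mbufs in use|mbuf clusters in use' >> "$LOG"
        sleep 600
    done

Comparing the first and last samples after a day of traffic should make
a leak (in-use numbers that grow steadily and never fall back toward
their old baseline) fairly obvious.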
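And a rough sketch of the rsync test suggested above; /backups and
/var/tmp/nfs-test are placeholder paths, since the actual mount points
on these clients aren't given here:

    # Counters before the test.
    netstat -m | head -3

    # Read-heavy NFS traffic: copy the NFS-mounted tree to local disk.
    mkdir -p /var/tmp/nfs-test
    rsync -a /backups/ /var/tmp/nfs-test/

    # Counters right after the copy, and again a while later, to see
    # whether they drop back to roughly their previous level.
    netstat -m | head -3

If the "mbufs in use" figure stays pinned near its post-rsync peak long
after the copy finishes, that would point at the kind of leak being
discussed in this thread.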