From: Eirik Øverby
Date: Sat, 27 Feb 2010 21:32:39 +0100
To: Jeremy Chadwick
Cc: freebsd-fs@freebsd.org, stable@freebsd.org, Willem Jan Withagen
Subject: Re: mbuf leakage with nfs/zfs?

On 27. feb. 2010, at 20.38, Jeremy Chadwick wrote:

> On Sat, Feb 27, 2010 at 08:21:05PM +0100, Gerrit Kühn wrote:
>> On Sat, 27 Feb 2010 15:15:52 +0100 Willem Jan Withagen wrote
>> about Re: mbuf leakage with nfs/zfs?:
>> 
>> WJW> > 81492/2613/84105 mbufs in use (current/cache/total)
>> WJW> > 80467/2235/82702/128000 mbuf clusters in use (current/cache/total/max)
>> WJW> > 80458/822 mbuf+clusters out of packet secondary zone in use (current/cache)
>> 
>> WJW> Over the night I only had rsync and FreeBSD nfs traffic.
>> WJW> 
>> WJW> 45337/2828/48165 mbufs in use (current/cache/total)
>> WJW> 44708/1902/46610/262144 mbuf clusters in use (current/cache/total/max)
>> WJW> 44040/888 mbuf+clusters out of packet secondary zone in use (current/cache)
>> 
>> After about 24h I now have
>> 
>> 128320/2630/130950 mbufs in use (current/cache/total)
>> 127294/1200/128494/512000 mbuf clusters in use (current/cache/total/max)
>> 127294/834 mbuf+clusters out of packet secondary zone in use (current/cache)
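
The figures quoted above are the first lines of "netstat -m" output. As a
minimal sketch of how to watch for this kind of growth over time (the
interval and log path are arbitrary), something like this can be left
running on the server:

    # Snapshot the first three lines of "netstat -m" (mbufs, mbuf clusters,
    # mbuf+clusters in the packet secondary zone) every ten minutes.
    while :; do
            date
            netstat -m | head -n 3
            sleep 600
    done >> /var/log/mbuf-usage.log

A "current" mbuf count that keeps climbing while NFS is the only traffic is
exactly the pattern being reported here.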
> Follow-up regarding my server statistics shown here:
> 
> http://lists.freebsd.org/pipermail/freebsd-stable/2010-February/055458.html
> 
> I just pulled the statistics on the same servers for comparison (then
> vs. now).
> 
> RELENG_7 amd64 2010/01/09 -- primary HTTP, pri DNS, SSH server + ZFS
> 
>   515/1930/2445 mbufs in use (current/cache/total)
>   512/540/1052/25600 mbuf clusters in use (current/cache/total/max)
>   1152K/6394K/7547K bytes allocated to network (current/cache/total)
> 
> RELENG_7 amd64 2010/01/11 -- secondary DNS, MySQL, dev box + ZFS
> 
>   514/1151/1665 mbufs in use (current/cache/total)
>   512/504/1016/25600 mbuf clusters in use (current/cache/total/max)
>   1152K/2203K/3356K bytes allocated to network (current/cache/total)
> 
> RELENG_7 i386 2008/04/19 -- secondary HTTP, SSH server, heavy memory I/O
> 
>   515/820/1335 mbufs in use (current/cache/total)
>   513/631/1144/25600 mbuf clusters in use (current/cache/total/max)
>   1154K/2615K/3769K bytes allocated to network (current/cache/total)
> 
> RELENG_8 amd64 2010/02/02 -- central backups + NFS+ZFS-based filer
> 
>   1572/3423/4995 mbufs in use (current/cache/total)
>   1539/3089/4628/25600 mbuf clusters in use (current/cache/total/max)
>   3471K/7449K/10920K bytes allocated to network (current/cache/total)
> 
> So, not much difference.
> 
> I should point out that the NFS+ZFS-based filer doesn't actually do its
> backups using NFS; it uses rsnapshot (rsync) over SSH. There is intense
> network I/O during backup time, though, depending on how much data there
> is to back up. The NFS mounts (on the clients) are only used to give
> people convenient access to their nightly backups; they aren't used very
> heavily.
> 
> I can do something NFS-intensive on any of the above clients if people
> want me to do that kind of testing. Perhaps an rsync with the NFS mount
> as the source and the local disk as the destination would be a good test?
> Let me know if anyone is interested in me testing that.

I've had a discussion with some folks about this for a while. I can easily
reproduce the situation by mounting a FreeBSD ZFS filesystem via NFS over
UDP from an OpenBSD machine. Telling the OpenBSD machine to use TCP instead
of UDP makes the problem go away.

Other FreeBSD systems mounting the same share, whether over UDP or TCP, do
not cause the problem to show up.

A patch was suggested by Rick Macklem, but it did not solve the issue:
http://lists.freebsd.org/pipermail/freebsd-current/2009-December/014181.html

/Eirik

> -- 
> | Jeremy Chadwick                                jdc@parodius.com |
> | Parodius Networking                    http://www.parodius.com/ |
> | UNIX Systems Administrator               Mountain View, CA, USA |
> | Making life hard for others since 1977.           PGP: 4BD6C0CB |
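
For anyone wanting to try this, the reproduction Eirik describes combined
with the load test Jeremy proposes would look roughly as follows. This is
only a sketch: "filer" and /tank/export are made-up names for the FreeBSD
ZFS server and its NFS export, and the local paths are arbitrary.

    # On the OpenBSD client: mount_nfs uses UDP by default (the case that
    # shows the leak); -T switches the same mount to TCP, which Eirik
    # reports makes the problem go away.
    mount_nfs filer:/tank/export /mnt          # NFS over UDP
    mount_nfs -T filer:/tank/export /mnt       # NFS over TCP

    # On a FreeBSD client the equivalent mounts would be roughly:
    mount -t nfs -o udp filer:/tank/export /mnt
    mount -t nfs -o tcp filer:/tank/export /mnt

    # NFS-intensive load along the lines Jeremy suggests: rsync from the
    # NFS mount to local disk, while watching "netstat -m" on the server.
    rsync -a /mnt/ /var/tmp/nfstest/

If the server's mbuf counters only climb with the UDP mount from the
OpenBSD client, that would match what Eirik is seeing.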