From owner-freebsd-fs@FreeBSD.ORG Sun Sep 12 15:28:10 2010
Date: Sun, 12 Sep 2010 11:28:08 -0400 (EDT)
From: Rick Macklem <rmacklem@uoguelph.ca>
To: Terry Kennedy
Cc: freebsd-fs@freebsd.org
Subject: Re: Weird Linux - FreeBSD/ZFS NFSv4 interoperability problem
Message-ID: <954605288.782335.1284305288639.JavaMail.root@erie.cs.uoguelph.ca>
In-Reply-To: <01NRSE7GZJEC0022AD@tmk.com>

> > A couple of people have reported very slow read rates for the NFSv4
> > client (actually the experimental client, since they see it for
> > NFSv3 too). If you could easily do the following, using a FreeBSD8.1
> > or newer client:
> > # mount -t nfs -o nfsv4 :/path
> > - cd to anywhere in the mount that has a 100Mbyte+ file
> > # dd if=<100Mbyte+ file> of=/dev/null bs=1m
> >
> > and then report what read rate you see along with the client's
> > machine-arch/# of cores/ram size/network driver used by the mount
> >
> > rick
> > ps: Btw, anyone else who can do this test, it would be appreciated.
> > If you aren't set up for NFSv4, you can do an NFSv3 mount using
> > the exp. client instead.
> > # mount -t newnfs -o nfsv3 :/path
>
> On 8-STABLE (both client and server). First test is NFSv3 on the
> standard client:
>
> (0:842) new-gate:~terry# mount -t nfs -o nfsv4 new-rz1:/data /foo
> [tcp6] new-rz1:/data: NFSPROC_NULL: RPC: Program/version mismatch; low
> version = 2, high version = 3
> [tcp] new-rz1:/data: NFSPROC_NULL: RPC: Program/version mismatch; low
> version = 2, high version = 3
>
> ^C
> (1:843) new-gate:~terry# mount -t nfs -o nfsv3 new-rz1:/data /foo
> [...]
> (0:869) new-gate:/foo/Backups/Suzanne VAIO# dd if=0cff3d7b_VOL.spf of=/dev/null bs=1m
> 6010+1 records in
> 6010+1 records out
> 6301945344 bytes transferred in 69.730064 secs (90376302 bytes/sec)
>
> Now, let's try the newnfs client (cache should have been primed by the
> first run, so we'd expect this to be faster):
>
Just thought I'd mention that, since it is a different mount, the caches
won't be primed, which is good, because that would mask differences.
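
In case it makes the test above easier for anyone else to repeat, here's a
rough sketch of the steps as a /bin/sh script. The server name, export path,
mount point and file name below are just placeholders (substitute whatever
you have handy); I haven't run this, it's only meant to show the sequence.

#!/bin/sh
# Sketch of the read-rate test described above.
# SERVER, EXPORTPATH, MNT and BIGFILE are placeholders; use your own values.
SERVER=myserver                 # NFS server to test against
EXPORTPATH=/path                # exported file system on the server
MNT=/mnt                        # local mount point
BIGFILE=somedir/bigfile.dat     # any 100Mbyte+ file under the mount

# NFSv4 mount; for the experimental client with NFSv3, use: -t newnfs -o nfsv3
mount -t nfs -o nfsv4 ${SERVER}:${EXPORTPATH} ${MNT} || exit 1

# read the big file once and report the transfer rate
dd if="${MNT}/${BIGFILE}" of=/dev/null bs=1m

umount ${MNT}
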
> (0:879) new-gate:/tmp# umount /foo
> (0:880) new-gate:/tmp# mount -t newnfs -o nfsv3 new-rz1:/data /foo
> (0:881) new-gate:/tmp# cd /foo/Backups/Suzanne\ VAIO/
> (0:882) new-gate:/foo/Backups/Suzanne VAIO# dd if=0cff3d7b_VOL.spf of=/dev/null bs=1m
> 6010+1 records in
> 6010+1 records out
> 6301945344 bytes transferred in 135.927222 secs (46362644 bytes/sec)
>
> Hmmm. Half the performance. The problem isn't the disk speed on the
> server:
>
Ok, good. You aren't seeing what the two guys reported (they were really
slow, at less than 2Mbytes/sec). If you would like to, you could try the
following, since the two clients use different default r/w sizes:

# mount -t newnfs -o nfsv3,rsize=32768,wsize=32768 new-rz1:/data /foo

and see how it changes the read rate. I don't know why there is a factor
of 2 difference (if it isn't the different r/w size), but it will probably
get resolved as I bring the experimental client up to date.

Thanks a lot for doing the test and giving me a data point, rick
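
ps: If you want to compare a few r/w sizes in one go, something like the
loop below would do it (a rough, untested sketch; it uses the same server,
mount point and file as your test). Since each pass does a fresh mount,
the client cache starts cold every time, so the numbers stay comparable.

#!/bin/sh
# Re-run the newnfs read test with a few different rsize/wsize values.
SERVER=new-rz1
EXPORTPATH=/data
MNT=/foo
BIGFILE="Backups/Suzanne VAIO/0cff3d7b_VOL.spf"

for SZ in 8192 16384 32768; do
    echo "=== rsize/wsize = ${SZ} ==="
    mount -t newnfs -o nfsv3,rsize=${SZ},wsize=${SZ} \
        ${SERVER}:${EXPORTPATH} ${MNT} || exit 1
    dd if="${MNT}/${BIGFILE}" of=/dev/null bs=1m
    umount ${MNT}
done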