From: Terry Kennedy <TERRY@tmk.com>
To: freebsd-fs@freebsd.org
Date: Sun, 12 Sep 2010 04:09:44 -0400 (EDT)
Subject: Re: Weird Linux - FreeBSD/ZFS NFSv4 interoperability problem

> A couple of people have reported very slow read rates for the NFSv4
> client (actually the experimental client, since they see it for NFSv3
> too). If you could easily do the following, using a FreeBSD 8.1 or
> newer client:
>   # mount -t nfs -o nfsv4 <server>:/path <mountpoint>
>   - cd to anywhere in the mount that has a 100Mbyte+ file
>   # dd if=<100Mbyte+ file> of=/dev/null bs=1m
>
> and then report what read rate you see, along with the client's
> machine arch / # of cores / RAM size / network driver used by the
> mount.
>
> rick
> ps: Btw, anyone else who can do this test, it would be appreciated.
>     If you aren't set up for NFSv4, you can do an NFSv3 mount using
>     the exp. client instead:
>     # mount -t newnfs -o nfsv3 <server>:/path <mountpoint>

This is on 8-STABLE (both client and server). First test is the
standard client; the NFSv4 mount fails because the server only offers
NFS versions 2 and 3, so I fell back to NFSv3:

(0:842) new-gate:~terry# mount -t nfs -o nfsv4 new-rz1:/data /foo
[tcp6] new-rz1:/data: NFSPROC_NULL: RPC: Program/version mismatch; low version = 2, high version = 3
[tcp] new-rz1:/data: NFSPROC_NULL: RPC: Program/version mismatch; low version = 2, high version = 3
^C
(1:843) new-gate:~terry# mount -t nfs -o nfsv3 new-rz1:/data /foo
[...]
(0:869) new-gate:/foo/Backups/Suzanne VAIO# dd if=0cff3d7b_VOL.spf of=/dev/null bs=1m
6010+1 records in
6010+1 records out
6301945344 bytes transferred in 69.730064 secs (90376302 bytes/sec)

Now let's try the newnfs client (the server's cache should have been
primed by the first run, so if anything we'd expect this to be faster):

(0:879) new-gate:/tmp# umount /foo
(0:880) new-gate:/tmp# mount -t newnfs -o nfsv3 new-rz1:/data /foo
(0:881) new-gate:/tmp# cd /foo/Backups/Suzanne\ VAIO/
(0:882) new-gate:/foo/Backups/Suzanne VAIO# dd if=0cff3d7b_VOL.spf of=/dev/null bs=1m
6010+1 records in
6010+1 records out
6301945344 bytes transferred in 135.927222 secs (46362644 bytes/sec)

Hmmm. Half the performance: roughly 46 Mbyte/sec on the experimental
client versus 90 Mbyte/sec on the standard one.
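(For anyone else who wants to repeat Rick's comparison, here is a rough
sketch of what I did above, as a script. SERVER, EXPORT, TESTFILE, and
MNT are placeholders to substitute for your own setup, and note that
the first pass will also warm the server's cache for the second one.)

    #!/bin/sh
    # Compare read throughput of the standard (nfs) and experimental
    # (newnfs) clients over NFSv3. All four values below are
    # placeholders; set them for your own site.
    SERVER=server.example.com
    EXPORT=/data
    TESTFILE=some/100mb-plus.file    # path relative to the export
    MNT=/mnt

    for FSTYPE in nfs newnfs; do
        mount -t ${FSTYPE} -o nfsv3 ${SERVER}:${EXPORT} ${MNT} || exit 1
        echo "=== ${FSTYPE} client ==="
        dd if="${MNT}/${TESTFILE}" of=/dev/null bs=1m
        umount ${MNT}
    done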
The problem isn't the disk speed on the server (at 4.8 Gbyte/sec this
is obviously being served from cache, but either way the server side
isn't the bottleneck):

(0:19) new-rz1:/data/Backups/Suzanne VAIO# dd if=0cff3d7b_VOL.spf of=/dev/null bs=1m
6010+1 records in
6010+1 records out
6301945344 bytes transferred in 1.307266 secs (4820706236 bytes/sec)

Client system (new-gate) specs:

CPU: Intel(R) Xeon(R) CPU X5470 @ 3.33GHz (3333.35-MHz K8-class CPU)
  Origin = "GenuineIntel"  Id = 0x1067a  Family = 6  Model = 17  Stepping = 10
real memory  = 8589934592 (8192 MB)
avail memory = 8256380928 (7873 MB)
FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs
FreeBSD/SMP: 1 package(s) x 4 core(s)
bce0: mem 0xdc000000-0xddffffff irq 16 at device 0.0 on pci8
miibus0: on bce0
brgphy0: PHY 1 on miibus0
brgphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT, 1000baseT-FDX, auto

(0:878) new-gate:/tmp# ifconfig bce0
bce0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
        options=c01bb

Server system (new-rz1) specs:

CPU: Intel(R) Xeon(R) CPU E5520 @ 2.27GHz (2275.83-MHz K8-class CPU)
  Origin = "GenuineIntel"  Id = 0x106a5  Family = 6  Model = 1a  Stepping = 5
real memory  = 51543801856 (49156 MB)
avail memory = 49691684864 (47389 MB)
FreeBSD/SMP: Multiprocessor System Detected: 16 CPUs
FreeBSD/SMP: 2 package(s) x 4 core(s) x 2 SMT threads
igb0: port 0xcf80-0xcf9f mem 0xface0000-0xfacfffff,0xfacc0000-0xfacdffff,0xfac9c000-0xfac9ffff irq 28 at device 0.0 on pci1
igb0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
        options=1bb

Let me know if there's any other testing you'd like me to do.

        Terry Kennedy             http://www.tmk.com
        terry@tmk.com             New York, NY USA
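P.S. In case it helps anyone else reporting results, here's a quick
sketch for collecting the client details Rick asked for (arch, cores,
RAM, and the NIC behind the mount). "new-rz1" is my server's name;
substitute your own.

    #!/bin/sh
    # Gather the client details Rick requested along with the read rate.
    sysctl -n hw.machine_arch                # architecture (e.g. amd64)
    sysctl -n hw.ncpu                        # number of CPUs/cores
    sysctl -n hw.physmem                     # physical memory, in bytes
    route get new-rz1 | grep interface       # NIC used to reach the server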