From owner-freebsd-stable@FreeBSD.ORG Sun Aug 29 03:49:34 2010
Date: Sat, 28 Aug 2010 22:22:52 -0500
From: "Rick C. Petty"
Reply-To: rick-freebsd2009@kiwi-computer.com
To: Rick Macklem
Cc: freebsd-stable@freebsd.org
Subject: Re: Why is NFSv4 so slow?
Message-ID: <20100829032252.GA81736@rix.kiwi-computer.com>
References: <20100627221607.GA31646@kay.kiwi-computer.com>

Hi.  I'm still having problems with NFSv4 being very laggy on one client.
When the NFSv4 server is at 50% idle CPU and the disks are < 1% busy, I am
getting horrible throughput on an idle client.  Using dd(1) with a 1 MB
block size, when I try to read a > 100 MB file from the client, I'm getting
around 300-500 KiB/s.  On another client, I see upwards of 20 MiB/s with
the same test (on a different file).
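The read test itself is just a sequential dd(1) read of the file, roughly
(the mount point and file name below are placeholders rather than the real
paths):

# dd if=/mnt/nfs4/bigfile of=/dev/null bs=1m   # placeholder path on the NFSv4 mount

FreeBSD's dd(1) accepts the lowercase size suffix, so bs=1m is the 1 MB
block size; dd(1) reports the transfer rate when it completes.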
On the broken client:

# uname -mv
FreeBSD 8.1-STABLE #5 r211534M: Sat Aug 28 15:53:10 CDT 2010     user@example.com:/usr/obj/usr/src/sys/GENERIC  i386
# ifconfig re0
re0: flags=8843 metric 0 mtu 1500
        options=389b
        ether 00:e0:4c:xx:yy:zz
        inet xx.yy.zz.3 netmask 0xffffff00 broadcast xx.yy.zz.255
        media: Ethernet autoselect (1000baseT )
        status: active
# netstat -m
267/768/1035 mbufs in use (current/cache/total)
263/389/652/25600 mbuf clusters in use (current/cache/total/max)
263/377 mbuf+clusters out of packet secondary zone in use (current/cache)
0/20/20/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
592K/1050K/1642K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/5/6656 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines
# netstat -idn
Name    Mtu Network        Address              Ipkts Ierrs Idrop  Opkts Oerrs  Coll Drop
re0    1500                00:e0:4c:xx:yy:zz   232135     0     0  68984     0     0    0
re0    1500 xx.yy.zz.0/24  xx.yy.zz.3          232127     -     -  68979     -     -    -
nfe0*  1500                00:22:15:xx:yy:zz        0     0     0      0     0     0    0
plip0  1500                                         0     0     0      0     0     0    0
lo0   16384                                        42     0     0     42     0     0    0
lo0   16384 fe80:4::1/64   fe80:4::1                0     -     -      0     -     -    -
lo0   16384 ::1/128        ::1                      0     -     -      0     -     -    -
lo0   16384 127.0.0.0/8    127.0.0.1               42     -     -     42     -     -    -
# sysctl kern.ipc.maxsockbuf
kern.ipc.maxsockbuf: 1048576
# sysctl net.inet.tcp.sendbuf_max
net.inet.tcp.sendbuf_max: 16777216
# sysctl net.inet.tcp.recvbuf_max
net.inet.tcp.recvbuf_max: 16777216
# sysctl net.inet.tcp.sendspace
net.inet.tcp.sendspace: 65536
# sysctl net.inet.tcp.recvspace
net.inet.tcp.recvspace: 131072
# sysctl hw.pci | grep msi
hw.pci.honor_msi_blacklist: 1
hw.pci.enable_msix: 1
hw.pci.enable_msi: 1
# vmstat -i
interrupt                          total       rate
irq14: ata0                           47          0
irq16: re0                        219278        191
irq21: ohci0+                       5939          5
irq22: vgapci0+                    77990         67
cpu0: timer                      2294451       1998
irq256: hdac0                      44069         38
cpu1: timer                      2293983       1998
Total                            4935757       4299

Any ideas?

-- 
Rick C. Petty