From owner-freebsd-hackers@FreeBSD.ORG Fri Sep 19 13:06:41 2003
Date: Fri, 19 Sep 2003 12:06:31 -0800 (GMT-08:00)
From: tlambert2@mindspring.com
To: John-Mark Gurney, Richard Sharpe
Cc: freebsd-hackers@freebsd.org
Subject: Re: Throughput problems with NFS between Linux and FreeBSD

John-Mark Gurney wrote:
> Richard Sharpe wrote this message on Fri, Sep 19, 2003 at 10:38 -0700:
> > The problem seems to be the following code
> >
> >     if (so->so_type == SOCK_STREAM)
> >         siz = NFS_MAXPACKET + sizeof (u_long);
> >     else
> >         siz = NFS_MAXPACKET;
> >     error = soreserve(so, siz, siz);
> >
> > in src/sys/nfs/nfs_syscalls.c.
> >
> > We added a sysctl to allow finer control over what is passed to
> > soreserve.
> >
> > With the fix in, it goes up to around wire speed when lots of data
> > is in the cache.
>
> What is the fix? You don't say what adjustments to soreserve's
> parameters are necessary to improve performance. Have you done
> testing against other clients to see how your changes will affect
> performance on those machines?

FWIW: I think he means that they changed the value of NFS_MAXPACKET.

This would actually make sense: you really want the value to be
NFS_MAXPACKET times the number of packets you want to allow, up to
the TCP window size...

...unless I'm seriously misunderstanding him.

-- Terry
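
[Editor's note: a minimal sketch of the sizing arithmetic Terry describes, written as a self-contained user-space C program rather than the actual kernel patch. The names nfs_soreserve_factor and tcp_window are hypothetical stand-ins for a sysctl-style tunable and the peer's TCP window; NFS_MAXPACKET is defined locally only so the program compiles outside the kernel (the real value lives in the kernel's NFS headers).]

    /*
     * Sketch of the buffer-reservation arithmetic discussed above:
     * instead of reserving room for a single NFS_MAXPACKET, reserve
     * NFS_MAXPACKET times the number of packets you want in flight,
     * capped at the TCP window size.
     *
     * Hypothetical names: nfs_soreserve_factor stands in for a
     * sysctl-style tunable; tcp_window stands in for the negotiated
     * TCP window.  NFS_MAXPACKET is a placeholder value here.
     */
    #include <stdio.h>

    #define NFS_MAXPACKET   (1024 * 1024)   /* placeholder; real value is in the NFS headers */

    static unsigned long
    nfs_reserve_size(int is_stream, unsigned int nfs_soreserve_factor,
        unsigned long tcp_window)
    {
        unsigned long siz;

        /* Same shape as the quoted code, but scaled by the tunable factor. */
        if (is_stream)
            siz = (unsigned long)nfs_soreserve_factor * NFS_MAXPACKET +
                sizeof(unsigned long);      /* room for the RPC record mark */
        else
            siz = (unsigned long)nfs_soreserve_factor * NFS_MAXPACKET;

        /* Never ask for more than the TCP window can carry at once. */
        if (is_stream && siz > tcp_window)
            siz = tcp_window;

        return (siz);
    }

    int
    main(void)
    {
        /* Example: allow 4 packets in flight against a 2 MB window. */
        printf("reserve %lu bytes\n",
            nfs_reserve_size(1, 4, 2UL * 1024 * 1024));
        return (0);
    }

In the kernel, the equivalent change would presumably expose nfs_soreserve_factor as the sysctl Richard mentions, rather than passing it as a function argument.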