From: Terry Lambert <tlambert2@mindspring.com>
Date: Wed, 11 Jun 2003 02:40:40 -0700
To: Sean Chittenden
Cc: freebsd-performance@freebsd.org, Eric Anderson
Subject: Re: Slow disk write speeds over network

Sean Chittenden wrote:
> > > ...and yet more sysctl's for this:
> > >
> > >   kern.polling.enable=1
> > >   kern.polling.user_frac=50   # 0..100; whatever works best
> > >
> > > If you've got a really terrible Gigabit Ethernet card, then
> > > you may be copying all your packets over again (e.g. m_pullup()),
> > > and that could be eating your bus, too.
> >
> > Ok, so the end result is that after playing around with sysctl's,
> > I've found that the tcp transfers are doing 20MB/s over FTP, but my
> > NFS is around 1-2MB/s - still slow..  So we've cleared up some tcp
> > issues, but yet still NFS is stinky..
> >
> > Any more ideas?
>
> I'm using UDP NFS over a 100Mbit/FD link with the following settings
> and get about 12-14Mbps:

Numbers taken in the context of the original poster... YMMV:

> net.inet.tcp.recvspace=65536

This is most important for writes.  The sendspace is pretty well not
going to help you out, unless you are starvation deadlocked; it didn't
look like you were, from your previous posting.

BTW: I believe this is the default.

> net.inet.tcp.sendspace=65536

Double the default.  Might not be a good idea, unless you have a ton of
memory.  You will potentially use 64K send + 64K receive times the
number of sockets.  Assuming 4G and near-perfect tuning, you will be
limited to 16384 simultaneous connections fully packed before memory
pressure causes your machine to crash.

I tend to like smaller buffers and more connections.  If you only have
512M, drop this number to 2048 simultaneous connections if all buffers
are full.

> kern.maxfiles=65536

Seems kind of overkill for the number of connections you can support
without overcommit, and the number of client machines you say you have.

> kern.ipc.maxsockbuf=2097152
> kern.ipc.somaxconn=8192

IPC numbers; not relevant.
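All of the above can be checked and tested at runtime with sysctl(8)
before being made permanent in /etc/sysctl.conf.  A minimal sketch,
using only values already quoted in this thread (illustrative starting
points, not recommendations):

    # Look at the current defaults before changing anything
    sysctl net.inet.tcp.recvspace net.inet.tcp.sendspace kern.maxfiles

    # Apply a value for testing; it does not survive a reboot
    sysctl -w net.inet.tcp.sendspace=65536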
> net.inet.tcp.delayed_ack=0

This will make it more responsive, at some cost in overhead.

> net.inet.udp.recvspace=65536
> net.inet.udp.maxdgram=57344

These are important for UDP NFS.  I do not recommend it.

> net.local.stream.sendspace=65536
> net.local.stream.recvspace=65536

IPC numbers; not relevant.

> vfs.nfs.async=1

This is very dangerous, if you care about your data.  It permits NFS to
ACK writes before they have been committed to stable storage.  With a
large enough window size, this should not be necessary.

> net.inet.udp.log_in_vain=1

This is just overhead; I recommend turning it off.

> net.inet.icmp.icmplim=20000

This is only useful for TCP; but it can be very useful.  Basically,
this is "connection rate limiting".  If you have a ton of clients, or
are trying to "netbench" the system, then set this number up.  For 100
NFS clients, it likely does not matter.

> I'm not taking into account jumbo frames or anything like that, so you
> may want to increase the size of some of these values where
> appropriate, but some of these may be a start.  -sc

In my experience, Intel GigE cards do not play nice with others when it
comes to jumbo frames or negotiation.  I much prefer the
Tigon/Alteon/Broadcom/whoever-they-are-this-week-still-no-firmware,
though I would obviously like the same firmware access to the Tigon
III's as they used to give us to the Tigon II's.

-- Terry
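Pulled together, a minimal /etc/sysctl.conf sketch of the settings
discussed above (the values are the ones quoted in the thread and are
illustrative starting points, not recommendations; vfs.nfs.async and
net.inet.udp.log_in_vain are left at their defaults per the caveats
above):

    # TCP socket buffers; sendspace is double the default, so watch memory use
    net.inet.tcp.recvspace=65536
    net.inet.tcp.sendspace=65536
    # Disabling delayed ACKs is more responsive, at some cost in overhead
    net.inet.tcp.delayed_ack=0
    # Only worth setting if you actually use UDP NFS
    net.inet.udp.recvspace=65536
    net.inet.udp.maxdgram=57344
    # "Connection rate limiting"; mostly matters with very many clients
    net.inet.icmp.icmplim=20000
    # vfs.nfs.async=1 omitted: it ACKs writes before they reach stable storage
    # net.inet.udp.log_in_vain=1 omitted: pure overhead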