From owner-freebsd-net@FreeBSD.ORG Wed Sep  6 05:58:15 2006
Date: Wed, 06 Sep 2006 08:58:11 +0300
From: Danny Braniss <danny@cs.huji.ac.il>
To: Thomas Herrlin
Cc: freebsd-net@freebsd.org, freebsd-stable@freebsd.org
Subject: Re: tcp/udp performance
In-reply-to: Your message of Tue, 05 Sep 2006 18:21:07 +0200
List-Id: Networking and TCP/IP with FreeBSD

> Jack Vogel wrote:
> > On 8/30/06, Danny Braniss wrote:
> >>
> >> ever since 6.1 I've seen fluctuations in the performance of
> >> the em (Intel(R) PRO/1000 Gigabit Ethernet).
> >>
> >>     motherboard               OBN (On Board NIC)
> >>     ----------------          ------------------
> >>     1- Intel SE7501WV2S       Intel 82546EB::2.1
> >>     2- Intel SE7320VP2D2      INTEL 82541
> >>     3- Sun Fire X4100 Server  Intel(R) PRO/1000
> >>
> >> test 1: writing to a NetApp filer via NFS/UDP
> >>                    FreeBSD    Linux
> >>                       MegaBytes/sec
> >>     1- Average:      18.48    32.61
> >>     2- Average:      15.69    35.72
> >>     3- Average:      16.61    29.69
> >> (interestingly, doing NFS/TCP instead of NFS/UDP shows an increase
> >> in speed of around 60% on FreeBSD, but none on Linux)
> >>
> >> test 2: iperf using 1 as the server:
> >>                    FreeBSD(*)  Linux
> >>                         Mbits/sec
> >>     1-               926       905   (this machine was busy)
> >>     2-               545       798
> >>     3-               910       912
> >>     *: did a 'sysctl net.inet.tcp.sendspace=65536'
> >>
> >> So, it seems to me something is not that good in the UDP department,
> >> but I can't find what to tweak.
> >>
> >> Any help?
> >>
> >> danny
> >
> > Have discussed this some internally; the best idea I've heard is that
> > UDP is not giving us the interrupt rate that TCP would, so we end up
> > not cleaning up as often, and thus descriptors might not be as quickly
> > available. It's just speculation at this point.
>
> If a high interrupt rate is a problem and your NIC+driver supports it,
> then try enabling polling(4) as well. This has helped me with bulk
> transfers on slower boxes, but I have noticed problems with ALTQ/dummynet
> and other highly realtime-dependent networking code. YMMV.
> More info in man 4 polling.
> I think recent Linux kernels/drivers have this implemented so that it is
> enabled dynamically under high load. However, I only skimmed the
> documentation, and I'm not a Linux expert, so I may be wrong on that.
> /Junics

As far as I know, polling only works on UP machines. Besides, TCP
performance is much better than UDP, which goes against basic instincts:
the packets arriving at the NIC get processed - interrupt - before you
can tell whether they are IP, TCP or UDP, so the interrupt latency should
be the same for all. (A sketch of the knobs discussed in this thread is
appended after the quoted text below.)

> > Try this: the default is only to have 256 descriptors, try going for
> > the MAX, which is 4K.
> >
> > Cheers,
> > Jack
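
For reference, the suggestions above translate roughly into the settings
below. This is only a sketch under a few assumptions: the hw.em.rxd and
hw.em.txd loader tunables exist only in em(4) drivers that expose them
(older drivers need the descriptor counts changed in the driver source
and a rebuild), and polling(4) requires a kernel built with
DEVICE_POLLING. Check em(4), polling(4) and tuning(7) on the machine in
question before trusting any of the names.

    # /boot/loader.conf: raise the em(4) descriptor rings from the
    # default of 256 toward the 4K maximum mentioned above
    # (tunable names assume a driver that provides them; see em(4))
    hw.em.rxd=4096
    hw.em.txd=4096

    # Kernel configuration needed before polling(4) can be used:
    #   options DEVICE_POLLING
    #   options HZ=1000
    # then enable it per interface (FreeBSD 6.x style):
    ifconfig em0 polling

    # larger TCP send buffer, as used for the iperf runs above:
    sysctl net.inet.tcp.sendspace=65536

Whether any of this helps the NFS/UDP case is exactly what is being
debated above: the descriptor and polling knobs address the theory that
descriptors are not cleaned up quickly enough, while the sendspace sysctl
only affects the TCP tests.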