Date:      Wed, 06 Sep 2006 08:58:11 +0300
From:      Danny Braniss <danny@cs.huji.ac.il>
To:        Thomas Herrlin <junics-fbsdstable@atlantis.maniacs.se>
Cc:        freebsd-net@freebsd.org, freebsd-stable@freebsd.org
Subject:   Re: tcp/udp performance 
Message-ID:  <E1GKqQN-000HHF-O2@cs1.cs.huji.ac.il>
In-Reply-To: Your message of Tue, 05 Sep 2006 18:21:07 +0200 .

> Jack Vogel wrote:
> > On 8/30/06, Danny Braniss <danny@cs.huji.ac.il> wrote:
> >>
> >> ever since 6.1 I've seen fluctuations in the performance of
> >> the em (Intel(R) PRO/1000 Gigabit Ethernet).
> >>
> >>             motherboard                 OBN (On Board NIC)
> >>             ----------------            ------------------
> >>         1- Intel SE7501WV2S             Intel 82546EB::2.1
> >>         2- Intel SE7320VP2D2            INTEL 82541
> >>         3- Sun Fire X4100 Server        Intel(R) PRO/1000
> >>
> >> test 1: writing to a NetApp filer via NFS/UDP
> >>            FreeBSD              Linux
> >>                       MegaBytes/sec
> >>         1- Average: 18.48       32.61
> >>         2- Average: 15.69       35.72
> >>         3- Average: 16.61       29.69
> >> (interestingly, doing NFS/TCP instead of NFS/UDP shows an increase in
> >> speed of around 60% on FreeBSD, but none on Linux)
> >>
> >> test2: iperf using 1 as server:
> >>                 FreeBSD(*)      Linux
> >>                      Mbits/sec
> >>         1-      926             905 (this machine was busy)
> >>         2-      545             798
> >>         3-      910             912
> >>  *: did a 'sysctl net.inet.tcp.sendspace=65536'
> >>
> >>
> >> So, it seems to me something is not that good in the UDP department, but
> >> I can't find what to tweak.
> >>
> >> Any help?
> >>
> >>         danny
> >
> > Have discussed this some internally; the best idea I've heard is that
> > UDP is not giving us the interrupt rate that TCP would, so we end up
> > not cleaning up as often, and thus descriptors might not be as quickly
> > available. It's just speculation at this point.
> If a high interrupt rate is a problem and your NIC+driver supports it,
> then try enabling polling(4) as well. This has helped me with bulk
> transfers on slower boxes, but I have noticed problems with ALTQ/dummynet
> and other timing-sensitive networking code. YMMV.
> More info in polling(4).
> I think recent Linux kernels/drivers have this implemented so that it is
> enabled dynamically under high load. However, I only skimmed the documents,
> and I'm not a Linux expert, so I may be wrong on that.
> /Junics

As far as I know, polling only works on UP machines. Besides, TCP performance
is much better than UDP - which goes against basic instincts.
The packets arriving at the NIC trigger an interrupt before you can tell
whether they are IP/TCP/UDP, so the interrupt latency should be the same for all.
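For anyone on an MP box who still wants to experiment, a sketch of how polling
is enabled on 6.x, assuming the kernel was built with DEVICE_POLLING (HZ=1000
is the usual companion option; see polling(4) for details):

```shell
# kernel config (rebuild required), per polling(4):
#   options DEVICE_POLLING
#   options HZ=1000

# then toggle it per interface at runtime:
ifconfig em0 polling     # enable polling on em0
ifconfig em0 -polling    # revert to interrupt mode
```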

> >
> > Try this: the default is only 256 descriptors; try going for the max,
> > which is 4K.
> >
> > Cheers,
> >
> > Jack
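
For the record, here is what I take Jack's suggestion to mean in practice,
assuming the em(4) driver version in use honours the hw.em.rxd/hw.em.txd
loader tunables (these are read at boot, so they go in /boot/loader.conf):

```shell
# /boot/loader.conf -- em(4) descriptor counts (default 256, max 4096)
hw.em.rxd="4096"    # receive descriptors per ring
hw.em.txd="4096"    # transmit descriptors per ring
```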
