From: Jack Vogel <jfvogel@gmail.com>
To: Gleb Smirnoff
Cc: freebsd-stable@freebsd.org
Date: Thu, 22 Dec 2005 09:41:47 -0800
Subject: Re: em bad performance

On 12/22/05, Gleb Smirnoff wrote:
> On Thu, Dec 22, 2005 at 12:37:53PM +0200, Danny Braniss wrote:
> D> > On Thu, Dec 22, 2005 at 12:24:42PM +0200, Danny Braniss wrote:
> D> > D> ------------------------------------------------------------
> D> > D> Server listening on TCP port 5001
> D> > D> TCP window size: 64.0 KByte (default)
> D> > D> ------------------------------------------------------------
> D> > D> [ 4] local 132.65.16.100 port 5001 connected with [6.0/SE7501WV2] port 58122
> D> > D> (intel westvill)
> D> > D> [ ID] Interval       Transfer     Bandwidth
> D> > D> [ 4]  0.0-10.0 sec   1.01 GBytes  867 Mbits/sec
> D> > D> [ 4] local 132.65.16.100 port 5001 connected with [5.4/SE7501WV2] port 55269
> D> > D> (intel westvill)
> D> > D> [ ID] Interval       Transfer     Bandwidth
> D> > D> [ 4]  0.0-10.0 sec   967 MBytes   811 Mbits/sec
> D> > D> [ 5] local 132.65.16.100 port 5001 connected with [6.0/SR1435VP2] port 58363
> D> > D> (intel dual xeon/emt64)
> D> > D> [ ID] Interval       Transfer     Bandwidth
> D> > D> [ 5]  0.0-10.0 sec   578 MBytes   485 Mbits/sec
> D> > D>
> D> > D> I've run this several times, and the results are very similar.
> D> > D> I also tried i386, with the same bad results.
> D> > D> All hosts are connected at 1 Gb to the same switch.
> D> >
> D> > So we see a big performance gap between the SE7501WV2 and the SR1435VP2.
> D> > Let's compare the NIC hardware. Can you please show
> D> > pciconf -lv | grep -A3 ^em on both motherboards?
> D>
> D> on a SE7501WV2:
> D> em0@pci3:7:0: class=0x020000 card=0x341a8086 chip=0x10108086 rev=0x01
> D>     hdr=0x00
> D>     vendor  = 'Intel Corporation'
> D>     device  = '82546EB Dual Port Gigabit Ethernet Controller (Copper)'
> D>     class   = network
> D>
> D> on a SR1435VP2:
> D> em0@pci4:3:0: class=0x020000 card=0x34668086 chip=0x10768086 rev=0x05
> D>     hdr=0x00
> D>     vendor  = 'Intel Corporation'
> D>     device  = '82547EI Gigabit Ethernet Controller'
> D>     class   = network
>
> The first one, the 82546EB, is attached to a fast PCI-X bus, while the
> 82547EI is on the CSA bus. The CSA bus is twice as fast as the old PCI
> bus: it can handle 266 MB/s. I'm not sure, but it may have the same ~50%
> overhead as the old PCI bus.
>
> Probably our em(4) driver is not optimized enough and makes too many
> accesses across the bus, using more bandwidth than is needed to handle
> the traffic. In that case a NIC on a slower bus (one still fast enough,
> on paper, to handle Gigabit) would be much slower than a NIC on a faster
> bus. (This paragraph is my own theory; it may be complete bullshit.)

CSA bus? I've never heard of it. For the best gigabit performance you
really want the NIC on PCI Express; there I see around 930 Mb/s. I'm not
really familiar with this motherboard/LOM.

You say you run iperf -s on the server side, but what parameters are you
using on the client end of the test?

Jack
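
For rough context on the bus theory quoted above, here is a
back-of-the-envelope calculation; the 266 MB/s figure and the ~50%
overhead guess come from Gleb's message, everything else is arithmetic,
not a measurement:

    gigabit payload, one direction:  1000 Mb/s / 8  = 125 MB/s
    CSA link bandwidth:                               266 MB/s
    usable if overhead is ~50%:      266 MB/s * 0.5 = 133 MB/s
    headroom over line rate:         133 / 125      = ~6%

If the overhead guess is anywhere near right, the CSA link barely covers
line rate before descriptor and register traffic is counted, which would
be consistent with the 82547EI falling well short of 1 Gb/s.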
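
On the client-parameter question: a typical iperf invocation that would
produce runs like the ones quoted (TCP, 10-second runs, 64 KB window)
might look like the following. The address is the server from the quoted
output; the exact flags are an assumption, since the original post does
not show them:

    # server side, as quoted above
    iperf -s

    # client side; 132.65.16.100 is the iperf server from the output.
    # -t 10 runs the test for 10 seconds, -w 64k matches the quoted
    # TCP window size.
    iperf -c 132.65.16.100 -t 10 -w 64k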