From: Danny Braniss <danny@cs.huji.ac.il>
To: Jack Vogel
Cc: Gleb Smirnoff, freebsd-stable@freebsd.org
Date: Fri, 23 Dec 2005 09:16:08 +0200
Subject: Re: em bad performance
In-Reply-To: Message from Jack Vogel of "Thu, 22 Dec 2005 09:41:47 PST."
	<2a41acea0512220941y61c9b5acs8053e6df8a96a1e4@mail.gmail.com>

> On 12/22/05, Gleb Smirnoff wrote:
> > On Thu, Dec 22, 2005 at 12:37:53PM +0200, Danny Braniss wrote:
> > D> > On Thu, Dec 22, 2005 at 12:24:42PM +0200, Danny Braniss wrote:
> > D> > D> ------------------------------------------------------------
> > D> > D> Server listening on TCP port 5001
> > D> > D> TCP window size: 64.0 KByte (default)
> > D> > D> ------------------------------------------------------------
> > D> > D> [  4] local 132.65.16.100 port 5001 connected with [6.0/SE7501WV2] port 58122
> > D> > D>       (intel westvill)
> > D> > D> [ ID] Interval       Transfer     Bandwidth
> > D> > D> [  4]  0.0-10.0 sec  1.01 GBytes   867 Mbits/sec
> > D> > D> [  4] local 132.65.16.100 port 5001 connected with [5.4/SE7501WV2] port 55269
> > D> > D>       (intel westvill)
> > D> > D> [ ID] Interval       Transfer     Bandwidth
> > D> > D> [  4]  0.0-10.0 sec   967 MBytes   811 Mbits/sec
> > D> > D> [  5] local 132.65.16.100 port 5001 connected with [6.0/SR1435VP2] port 58363
> > D> > D>       (intel dual xeon/emt64)
> > D> > D> [ ID] Interval       Transfer     Bandwidth
> > D> > D> [  5]  0.0-10.0 sec   578 MBytes   485 Mbits/sec
> > D> > D>
> > D> > D> I've run this several times, and the results are very similar.
> > D> > D> I also tried i386, with the same bad results.
> > D> > D> All hosts are connected at 1 Gb to the same switch.
> > D> >
> > D> > So we see a big difference between the SE7501WV2 and the SR1435VP2. Let's
> > D> > compare the NIC hardware. Can you please show pciconf -lv | grep -A3 ^em
> > D> > on both motherboards?
> > D>
> > D> on a SE7501WV2:
> > D> em0@pci3:7:0: class=0x020000 card=0x341a8086 chip=0x10108086 rev=0x01 hdr=0x00
> > D>     vendor   = 'Intel Corporation'
> > D>     device   = '82546EB Dual Port Gigabit Ethernet Controller (Copper)'
> > D>     class    = network
> > D>
> > D> on a SR1435VP2:
> > D> em0@pci4:3:0: class=0x020000 card=0x34668086 chip=0x10768086 rev=0x05 hdr=0x00
> > D>     vendor   = 'Intel Corporation'
> > D>     device   = '82547EI Gigabit Ethernet Controller'
> > D>     class    = network
> >
> > The first one, the 82546EB, is attached to a fast PCI-X bus, and the
> > 82547EI is on the CSA bus.
> > The CSA bus is twice as fast as the old PCI bus; it can handle 266 MB/s.
> > I'm not sure, but it may have the same ~50% overhead as the old PCI bus.
> >
> > Probably our em(4) driver is not optimized enough and does too many accesses
> > to the PCI bus, thus using more bus bandwidth than needed to handle the
> > traffic. In that case we see that the NIC on the slower bus (though in
> > theory fast enough for Gigabit) is much slower than the NIC on the faster
> > bus. (This paragraph is my own theory; it may be complete bullshit.)
>
> CSA bus? I've never heard of it.
>
> To get the best gig performance you really want to see it on PCI Express.
> I see 930ish Mb/s. I'm not really familiar with this motherboard/LOM.
>
> You say you run iperf -s on the server side, but what are you using as
> parameters on the client end of the test?
>

iperf -c host

I'm beginning to believe that the problem is elsewhere; I just put an
Ethernet NIC in a PCI-X/Express slot, and the performance is similarly bad.

	danny

> Jack
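
For reference, a minimal sketch of how a run like the one above can be
reproduced with iperf; the -w (socket buffer / TCP window) and -t (duration)
values are illustrative assumptions, not necessarily the parameters actually
used in this thread:

    # on the receiving host (the one printing the bandwidth numbers above)
    iperf -s -w 64k

    # on the sending host; "host" is the receiver's name or address
    iperf -c host -w 64k -t 10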
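
For what it's worth, the rough arithmetic behind the bus theory quoted above
(the ~50% overhead figure is Gleb's guess, not a measured number): wire-speed
Gigabit Ethernet is 1000 Mbit/s, i.e. about 125 MB/s per direction, so a
266 MB/s CSA link has headroom on paper; with ~50% overhead only about
133 MB/s would be usable, which leaves little margin once descriptor and
register accesses are added on top of the packet data itself.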