Date:      Wed, 2 Dec 2009 19:35:32 -0500
From:      "Robert N. M. Watson" <rwatson@freebsd.org>
To:        Elliot Finley <efinley.lists@gmail.com>, Lawrence Stewart <lstewart@freebsd.org>
Cc:        stable@freebsd.org, Jack Vogel <jfvogel@gmail.com>
Subject:   Re: em interface slow down on 8.0R
Message-ID:  <50BAEB20-4C91-43D6-B266-1081C684D19E@freebsd.org>
In-Reply-To: <54e63c320912010905u51ccbc92o56ebb71af2630166@mail.gmail.com>
References:  <20091130.170451.24460248.hrs@allbsd.org> <2a41acea0911301119j1449be58y183f2fe1d1112a68@mail.gmail.com> <20091201.102925.218343479.hrs@allbsd.org> <54e63c320912010905u51ccbc92o56ebb71af2630166@mail.gmail.com>


On 1 Dec 2009, at 12:05, Elliot Finley wrote:

> On Mon, Nov 30, 2009 at 6:29 PM, Hiroki Sato <hrs@freebsd.org> wrote:
> Jack Vogel <jfvogel@gmail.com> wrote
>  in <2a41acea0911301119j1449be58y183f2fe1d1112a68@mail.gmail.com>:
>
> jf> I will look into this Hiroki; as time goes on, the older hardware
> jf> does not always get test cycles like one might wish.
>
>
> Here's some more info to throw into the mix.  I have several new boxes
> running 8-Stable (a few hours after release).
>
> Leaving all sysctls at their defaults, I get around 400mbps testing
> with netperf or iperf.  If I set the following on the box running
> 'netserver' or 'iperf -s':
>
> kern.ipc.maxsockbuf=16777216
> net.inet.tcp.recvspace=1048576
>
> then I can get around 926mbps.  But if I then make those same changes
> on the box running the client side of netperf or iperf, performance
> drops back down to around 400mbps.
>
> All boxes have the same hardware: they each have two 4-port Intel NICs
> in them.
>
> em1@pci0:5:0:1: class=0x020000 card=0x10a48086 chip=0x10a48086 rev=0x06 hdr=0x00
>     vendor     = 'Intel Corporation'
>     device     = '82571EB Gigabit Ethernet Controller'
>     class      = network
>     subclass   = ethernet
>
> Any pointers on further network tuning to get bidirectional link
> saturation would be much appreciated.  These boxes are not in
> production yet, so anyone who would like access to troubleshoot,
> just ask.
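
A quick note on applying those two settings first: at runtime they can
be set with sysctl(8) as root, and they survive reboots if added to
/etc/sysctl.conf. A minimal sketch, using the exact values from the
mail above:

    # apply immediately (as root)
    sysctl kern.ipc.maxsockbuf=16777216
    sysctl net.inet.tcp.recvspace=1048576

    # to persist across reboots, put the same assignments in
    # /etc/sysctl.conf:
    #   kern.ipc.maxsockbuf=16777216
    #   net.inet.tcp.recvspace=1048576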

I've CC'd Lawrence Stewart in on this thread, as he's been doing work on
the TCP stack lately and may have insight into what you're running into.
Lawrence -- there's a bit of a back thread with configuration and problem
details in the stable@ archives.

Robert


