Date:      Thu, 20 Oct 2005 16:52:21 +0100 (BST)
From:      Robert Watson <rwatson@FreeBSD.org>
To:        Michael Vince <mv@roq.com>
Cc:        freebsd-net@freebsd.org, stable@freebsd.org, =?ISO-8859-1?Q?Sten_Daniel_S=F8rsdal?= <lists@wm-access.no>
Subject:   Re: Network performance 6.0 with netperf
Message-ID:  <20051020165029.C28249@fledge.watson.org>
In-Reply-To: <43579259.8060701@roq.com>
References:  <434FABCC.2060709@roq.com> <20051014205434.C66245@fledge.watson.org> <43564800.3010309@roq.com> <4356BBA1.3000103@wm-access.no> <43579259.8060701@roq.com>


On Thu, 20 Oct 2005, Michael Vince wrote:

>> Are you by any chance using PCI NICs?  The PCI bus is limited to 
>> somewhere around 1 Gbit/s, so consider: theoretical maximum = 
>> ( 1 Gbit/s - PCI overhead )
>> 
> The 4 ethernet ports on the Dell server are all built-in so I am 
> assuming they are on the best bus available.
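As a rough sanity check of the ceiling mentioned in the quote above, the peak rate of a classic shared 32-bit/33 MHz PCI bus is just bus width times clock; the overhead fraction below is an illustrative assumption, not a measured figure:

```python
# Back-of-the-envelope estimate of a shared 32-bit/33 MHz PCI bus ceiling.
def pci_peak_gbps(width_bits=32, clock_mhz=33):
    # Peak burst rate in Gbit/s: bus width (bits) times bus clock (Hz).
    return width_bits * clock_mhz * 1e6 / 1e9

def usable_gbps(peak_gbps, overhead_fraction=0.2):
    # Arbitration, retries, and protocol overhead eat into the burst
    # rate; 20% is a ballpark assumption for illustration only.
    return peak_gbps * (1 - overhead_fraction)

peak = pci_peak_gbps()
print("peak   ~ %.3f Gbit/s" % peak)                # ~ 1.056 Gbit/s
print("usable ~ %.3f Gbit/s" % usable_gbps(peak))
```

Note that all NICs sharing one such bus also share that budget, which is why bus placement matters at gigabit rates.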

At the performance levels you're interested in, it is worth spending a bit 
of time digging up the specs for the motherboard.  You may find, for 
example, that you can achieve higher packet rates using specific 
combinations of interfaces on the box, as it is often the case that a 
single PCI bus will run to a pair of on-board chips.  By forwarding on 
separate busses, you avoid contention, interrupt issues, etc.  We have a 
number of test systems in our netperf test cluster where you can measure 
20% or more differences on some tests depending on the combinations of 
interfaces used.
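On FreeBSD, pciconf(8) selectors ("dev@pciD:bus:slot:func") show which devices sit on which bus, so one way to pick non-contending interface pairs is to group NICs by bus number. A minimal sketch, using made-up sample selectors rather than output from a real machine:

```python
# Group NICs by PCI bus number from pciconf(8)-style selectors of the
# form "dev@pciD:bus:slot:func".  The sample data below is illustrative.
from collections import defaultdict

sample = [
    "em0@pci0:2:4:0",
    "em1@pci0:2:5:0",
    "em2@pci0:3:4:0",
    "em3@pci0:3:5:0",
]

def group_by_bus(selectors):
    buses = defaultdict(list)
    for sel in selectors:
        dev, loc = sel.split("@")
        bus = loc.split(":")[1]      # second field of the locator is the bus
        buses[bus].append(dev)
    return dict(buses)

print(group_by_bus(sample))
# With this sample, em0/em1 share bus 2 and em2/em3 share bus 3, so
# forwarding between, say, em0 and em2 would avoid bus contention.
```

In practice you would feed this the selector column of `pciconf -l` output and cross-check against the motherboard block diagram.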

Robert N M Watson


