Date:      Fri, 21 Aug 2009 01:13:33 +0200
From:      Ivan Voras <ivoras@freebsd.org>
To:        freebsd-performance@freebsd.org
Subject:   Re: Strange CPU distribution at very high level bandwidth
Message-ID:  <h6klb5$tv8$1@ger.gmane.org>
In-Reply-To: <36A93B31228D3B49B691AD31652BCAE9A456967AF4@GRFMBX702BA020.griffon.local>
References:  <36A93B31228D3B49B691AD31652BCAE9A456967AF4@GRFMBX702BA020.griffon.local>

Invernizzi Fabrizio wrote:
> Hi all
>
> I am going on with some performance tests on a 10GbE network card with FreeBSD.
>
> I am doing this test: I send UDP traffic to be forwarded to the other port of the card, on both of the card's ports.
> Using 1492-byte packets, I increase the number of packets per second I send in order to see which is the maximum bandwidth (or pps) the system can support without losses.
>
> The limit seems to be about 1890 Mbps per port (3870 Mbps total).
> Looking more closely at the CPU behaviour, I see this:
>   - increasing the sent pps increases the interrupt time (about 90%)
>   - when I am very close to the limit, interrupt time falls to about 10% and the CPU is mostly (85%) in system time (rx/tx driver code)
>
> Questions:
> - Isn't AIM intended to counter this behaviour by limiting the interrupts sent to the CPU? (Nothing changes if I disable it.)
> - Why does the system start losing packets in that condition?
> - Why does the system seem to perform better when it is managing more context switches?
>
> - FreeBSD 7.2-RELEASE (64 bit)

One idea for you, not directly tied to forwarding as such but to the
recent development of the multithreaded packet acceptance code: use
8.x (currently in BETA, so the usual precautions about debugging being
enabled apply) and then experiment with the netisr worker thread settings.

See the source here:

http://svn.freebsd.org/viewvc/base/head/sys/net/netisr.c?view=markup&pathrev=195078

and the comments starting at "Three direct dispatch policies are supported".

The code is experimental and thus disabled in 8.0 unless a combination
of the following loader tunables is set:

net.isr.direct_force
net.isr.direct
net.isr.maxthreads
net.isr.bindthreads

I think you can start simply by turning off net.isr.direct_force and
then increasing net.isr.maxthreads until the benefits (if any) go
away. Since it is experimental code, your benchmarks would be nice to have.



