Date:      Wed, 3 Feb 2016 12:50:34 -0500
From:      Allan Jude <allanjude@freebsd.org>
To:        freebsd-performance@freebsd.org
Subject:   Re: ixgbe: Network performance tuning (#TCP connections)
Message-ID:  <56B23DEA.1060307@freebsd.org>
In-Reply-To: <EC88118611AE564AB0B10C6A4569004D0137D57AEB@HOBEX11.hob.de>
References:  <EC88118611AE564AB0B10C6A4569004D0137D57AEB@HOBEX11.hob.de>

On 2016-02-03 08:37, Meyer, Wolfgang wrote:
> Hello,
>
> we are evaluating network performance on a Dell server (PowerEdge R930
> with 4 sockets, hw.model: Intel(R) Xeon(R) CPU E7-8891 v3 @ 2.80GHz)
> fitted with 10 GbE cards. We use programs in which the server side
> accepts connections on an IP address and port from the client side;
> after a connection is established, data is sent in turns between server
> and client in a predefined pattern (the server side sends more data
> than the client side), with sleeps between the send phases. The test
> set-up is chosen such that every client process initiates 500
> connections, each handled in a thread, and each server process,
> representing one IP/port pair, likewise handles its 500 connections in
> threads.
>
> The number of connections is then increased and the overall network
> throughput is observed using nload. With FreeBSD on the server side,
> errors begin to occur at roughly 50,000 connections and the overall
> throughput won't increase further with more connections. With Linux on
> the server side it is possible to establish more than 120,000
> connections, and at 50,000 connections the overall throughput is double
> that of FreeBSD with the same sending pattern. Furthermore, the system
> load on FreeBSD is much higher, with 50 % system usage on each core and
> 80 % interrupt usage on the 8 cores handling the interrupt queues for
> the NIC. In comparison, Linux shows <10 % system usage, <10 % user
> usage, and about 15 % interrupt usage on the 16 cores handling the
> network interrupts for 50,000 connections.
>
> Varying the number of NIC interrupt queues doesn't change the
> performance (it rather worsens the situation). Disabling
> Hyper-Threading (utilising 40 cores) degrades the performance.
> Increasing MAXCPU to utilise all 80 cores brings no improvement over 64
> cores; atkbd and uart had to be disabled to avoid kernel panics with
> the increased MAXCPU (thanks to Andre Oppermann for investigating
> this). Initially the tests were made on 10.2-RELEASE; later I switched
> to 10-STABLE (later with ixgbe driver version 3.1.0), but that didn't
> change the numbers.
>
> Some sysctl configurables were modified following the network
> performance tuning guidelines found on the net (e.g.
> https://calomel.org/freebsd_network_tuning.html,
> https://www.freebsd.org/doc/handbook/configtuning-kernel-limits.html,
> https://pleiades.ucsc.edu/hyades/FreeBSD_Network_Tuning), but most of
> them didn't have any measurable impact. The final sysctl.conf and
> loader.conf settings are given below. In fact, the only tunables that
> provided any improvement were hw.ix.txd and hw.ix.rxd, which were
> reduced (!) to the minimum value of 64, and hw.ix.tx_process_limit and
> hw.ix.rx_process_limit, which were set to -1.
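
(Those are boot-time tunables for ix(4), so as a sketch the relevant
/boot/loader.conf lines would be:)

    # descriptor ring sizes, reduced (!) to the minimum of 64
    hw.ix.txd="64"
    hw.ix.rxd="64"
    # -1 removes the per-pass packet processing limit in the driver
    hw.ix.tx_process_limit="-1"
    hw.ix.rx_process_limit="-1"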
>
> Any ideas what tunables might be changed to get a higher number of TCP
> connections? (It's not a question of the overall throughput, as
> changing the sending pattern allows me to fully utilise the 10 Gb
> bandwidth.) How can I determine where the kernel is spending its time,
> causing the high CPU load? Any pointers are highly appreciated; I can't
> believe that there is such a blatant difference in network performance
> compared to Linux.
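
(On the profiling question: one common way to see where the kernel
spends its time, assuming DTrace is available, is to sample on-CPU
kernel stacks and count the hottest ones:)

    # sample kernel stacks at 997 Hz for 30 s, then print them by count
    dtrace -x stackframes=100 \
      -n 'profile-997 /arg0/ { @[stack()] = count(); } tick-30s { exit(0); }'

(pmcstat(8) with hwpmc(4) is the usual alternative when DTrace is not an
option.)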
>
> Regards,
> Wolfgang
>

I wonder if this might be NUMA related. Specifically, it might help to
make sure that the 8 CPU cores that the NIC queues are pinned to are on
the same CPU that is backing the PCI-E slot that the NIC is in.
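
(A rough sketch of such pinning; the IRQ number and core below are
placeholders, the real IRQ assignments come from vmstat -i:)

    # list the interrupts belonging to the ix(4) queues
    vmstat -i | grep ix0
    # bind one queue's interrupt to a core on the package behind the slot
    cpuset -l 8 -x 264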


-- 
Allan Jude

