Date:      Thu, 4 Aug 2016 20:53:34 +0200
From:      Ben RUBSON <ben.rubson@gmail.com>
To:        freebsd-net <freebsd-net@freebsd.org>
Subject:   Re: Unstable local network throughput
Message-ID:  <27223F4B-6DE8-49D0-98C0-9F734C73C5EC@gmail.com>
In-Reply-To: <CAFMmRNz8WryZVVR-_OvB7Ad3tR1NqPpXpv_QEPkoffxdFzdUQw@mail.gmail.com>
References:  <3C0D892F-2BE8-4650-B9FC-93C8EE0443E1@gmail.com> <bed13ae3-0b8f-b1af-7418-7bf1b9fc74bc@selasky.org> <3B164B7B-CBFB-4518-B57D-A96EABB71647@gmail.com> <5D6DF8EA-D9AA-4617-8561-2D7E22A738C3@gmail.com> <06E414D5-9CDA-46D1-A26F-0B07E76FDB34@gmail.com> <0b14bf39-ed71-b9fb-1998-bd9676466df6@selasky.org> <E5BE8DAC-AB6A-491E-A901-4E513367278B@gmail.com> <CAFMmRNz8WryZVVR-_OvB7Ad3tR1NqPpXpv_QEPkoffxdFzdUQw@mail.gmail.com>


> On 04 Aug 2016, at 20:15, Ryan Stone <rysto32@gmail.com> wrote:
>
> On Thu, Aug 4, 2016 at 11:33 AM, Ben RUBSON <ben.rubson@gmail.com> wrote:
> But even without RSS, I should be able to go up to 2x40Gbps, don't you think so?
> Has nobody done this already?
>
> Try this patch, which should improve performance when multiple TCP streams are running in parallel over an mlx4_en port:
>
> https://people.freebsd.org/~rstone/patches/mlxen_counters.diff

Thank you very much, Ryan.
I just tried it, but it does not help :/

Below is the CPU load during bidirectional traffic.
The 4 CPUs allocated to the Mellanox IRQs stand out clearly; the others run the iPerf processes.
Spreading the IRQs over the 12 CPUs of the NUMA node brings no improvement, only slightly less throughput.
Note that I get the same results if I only use 2 CPUs for the IRQs.
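For reference, pinning NIC IRQs to a fixed set of CPUs on FreeBSD can be done with cpuset(1). The sketch below is hypothetical: the IRQ vector numbers (264-267) are placeholders, not the actual mlx4_en vectors on this machine (the real ones would come from `vmstat -ia`), and it defaults to a dry run that only prints the commands.

```shell
# Sketch only: pin four NIC IRQs to CPUs 8-11 with cpuset(1).
# The vectors 264-267 are hypothetical placeholders; find the real
# mlx4_en vectors with:  vmstat -ia | grep mlx4
DRY_RUN=1   # set to 0 to actually execute the cpuset commands

cpu=8
for irq in 264 265 266 267; do
    cmd="cpuset -x $irq -l $cpu"
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "$cmd"             # just show what would be executed
    else
        $cmd                    # bind interrupt $irq to CPU $cpu
    fi
    cpu=$((cpu + 1))
done
```

One IRQ per CPU, on CPUs local to the NIC's NUMA node, matches the layout visible in the top(1) output below.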

27 processes:  1 running, 26 sleeping
CPU 0:   1.1% user,  0.0% nice, 16.7% system,  0.0% interrupt, 82.2% idle
CPU 1:   1.1% user,  0.0% nice, 18.9% system,  0.0% interrupt, 80.0% idle
CPU 2:   1.9% user,  0.0% nice, 17.8% system,  0.0% interrupt, 80.4% idle
CPU 3:   1.1% user,  0.0% nice, 15.2% system,  0.0% interrupt, 83.7% idle
CPU 4:   0.4% user,  0.0% nice, 16.3% system,  0.0% interrupt, 83.3% idle
CPU 5:   1.1% user,  0.0% nice, 14.4% system,  0.0% interrupt, 84.4% idle
CPU 6:   2.6% user,  0.0% nice, 17.4% system,  0.0% interrupt, 80.0% idle
CPU 7:   2.2% user,  0.0% nice, 15.2% system,  0.0% interrupt, 82.6% idle
CPU 8:   1.1% user,  0.0% nice,  3.0% system, 15.9% interrupt, 80.0% idle
CPU 9:   0.0% user,  0.0% nice,  3.0% system, 32.2% interrupt, 64.8% idle
CPU 10:  0.0% user,  0.0% nice,  0.4% system, 58.9% interrupt, 40.7% idle
CPU 11:  0.0% user,  0.0% nice,  0.4% system, 77.4% interrupt, 22.2% idle
CPU 12:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 13:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 14:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 15:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 16:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 17:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 18:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 19:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 20:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 21:  0.0% user,  0.0% nice,  0.0% system,  0.4% interrupt, 99.6% idle
CPU 22:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 23:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
