Date:        Fri, 13 Sep 2024 08:36:59 +0100
From:        Sad Clouds <cryintothebluesky@gmail.com>
Cc:          Paul Procacci <pprocacci@gmail.com>, freebsd-net@freebsd.org
Subject:     Re: Performance issues with vnet jails + epair + bridge
Message-ID:  <20240913083659.443548a87559c3cbaba4e9d8@gmail.com>
In-Reply-To: <20240913080356.98ea2c352595ae0bbd9f0ce8@gmail.com>
References:  <20240912181618.7895d10ad5ff2ebae9883192@gmail.com> <CAFbbPujAEer3aO7VcZ1CtgUUCHsG9eXfn_4s6SJok83GFW4JPA@mail.gmail.com> <20240913080356.98ea2c352595ae0bbd9f0ce8@gmail.com>
On Fri, 13 Sep 2024 08:03:56 +0100
Sad Clouds <cryintothebluesky@gmail.com> wrote:
> I built a new kernel with "options RSS", however TCP throughput
> has now decreased from 128 MiB/sec to 106 MiB/sec.
>
> Looks like the problem has shifted from epair to netisr.
>
>   PID USERNAME    PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
>    12 root        -56    -     0B   272K CPU3     3   3:45 100.00% intr{swi1: netisr 0}
>    11 root        187 ki31     0B    64K RUN      0   9:00  62.41% idle{idle: cpu0}
>    11 root        187 ki31     0B    64K CPU2     2   9:36  61.23% idle{idle: cpu2}
>    11 root        187 ki31     0B    64K RUN      1   8:24  55.03% idle{idle: cpu1}
>     0 root        -64    -     0B   656K -        2   0:50  21.50% kernel{epair_task_2}
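
(As an aside, netisr work can in principle be spread across more CPUs
with loader tunables along the lines below -- just a sketch, I have not
verified that it actually helps with the saturation shown above:

# /boot/loader.conf
net.isr.maxthreads="4"    # one netisr thread per CPU instead of a single swi
net.isr.bindthreads="1"   # pin each netisr thread to its own CPU

and at runtime "sysctl net.isr.dispatch=deferred" pushes more of the
work into those threads rather than the caller's context.)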
I think the issue may be to do with the genet driver itself. The
hardware appears to be limited to servicing each send or receive
interrupt on a single CPU. On Linux the best I can do is set SMP
affinity for send on CPU0 and for receive on CPU1, but that still
leaves the other two CPUs idle.
$ cat /proc/interrupts
           CPU0       CPU1       CPU2       CPU3
...
 37:      74141          0          0          0   GICv2 189 Level   eth0
 38:      43174          0          0          0   GICv2 190 Level   eth0
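
For reference, the pinning above was set with the usual smp_affinity
masks, something like this (assuming IRQ 37 carries one direction and
IRQ 38 the other, as per the listing; the values are hex CPU bitmaps):

# echo 1 > /proc/irq/37/smp_affinity    # mask 0x1 -> CPU0
# echo 2 > /proc/irq/38/smp_affinity    # mask 0x2 -> CPU1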
