Date:      Fri, 04 Aug 2023 18:56:30 +0000
From:      bugzilla-noreply@freebsd.org
To:        bugs@FreeBSD.org
Subject:   [Bug 272944] Vnet performance issues
Message-ID:  <bug-272944-227@https.bugs.freebsd.org/bugzilla/>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=272944

            Bug ID: 272944
           Summary: Vnet performance issues
           Product: Base System
           Version: 13.2-RELEASE
          Hardware: Any
                OS: Any
            Status: New
          Severity: Affects Only Me
          Priority: ---
         Component: kern
          Assignee: bugs@FreeBSD.org
          Reporter: jSML4ThWwBID69YC@protonmail.com

Hello,

During testing, it was noted that switching to Vnet jails causes a significant
reduction in network performance. I tested using iperf3 from the jail to
another node on the local network. Here are the results.

This is the performance on a shared network interface. The test is run from
inside a freshly created jail with no services running.

Command: iperf3 -c 192.168.1.24 -4 -P 10
[SUM]   0.00-10.00  sec  42.6 GBytes  36.6 Gbits/sec   21             sender
[SUM]   0.00-10.00  sec  42.6 GBytes  36.6 Gbits/sec                  receiver

These are the results from the same jail using Vnet.
[SUM]   0.00-10.00  sec  17.6 GBytes  15.1 Gbits/sec  363             sender
[SUM]   0.00-10.00  sec  17.5 GBytes  15.0 Gbits/sec                  receiver
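
For reference, the runs above were started from inside the jail itself; an
equivalent host-side invocation would look roughly like this (the jail name
"testjail" is only a placeholder):

# On the remote node (192.168.1.24)
iperf3 -s
# On the jail host, running the client inside the jail
jexec testjail iperf3 -c 192.168.1.24 -4 -P 10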

Here's the relevant jail configuration for the shared network vs Vnet.
# Shared network configuration
     interface = "lagg0";
     ip4.addr = 192.168.1.140;

# Vnet configuration
    $id     = "140";
    $ipaddr = "192.168.1.${id}";
    $mask   = "255.255.255.0";
    $gw     = "192.168.1.1";
    vnet;
    vnet.interface = "epair${id}b";
    exec.prestart   = "ifconfig epair${id} create up";
    exec.prestart  += "ifconfig epair${id}a up descr vnet-${name}";
    exec.prestart  += "ifconfig epair${id}a mtu 9000";
    exec.prestart  += "ifconfig epair${id}b mtu 9000";
    exec.prestart  += "ifconfig bridge0 addm epair${id}a up";
    exec.start      = "/sbin/ifconfig lo0 127.0.0.1 up";
    exec.start     += "/sbin/ifconfig epair${id}b ${ipaddr} netmask ${mask} up";
    exec.start     += "/sbin/route add default ${gw}";
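
For what it's worth, a quick way to confirm the jumbo MTU is actually
consistent across the whole bridged path (interface names taken from the
configuration above, jail name is a placeholder) would be something like:

# Confirm every segment of the path carries the same MTU (9000 here)
ifconfig bridge0   | grep mtu
ifconfig lagg0     | grep mtu
ifconfig epair140a | grep mtu
jexec testjail ifconfig epair140b | grep mtu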

Other data.
The underlying network is a 40Gb LACP aggregate (lagg0) with VLANs, configured
as follows on the base system, with IP addresses removed. Note that the VLANs
are not used in the jail at all.

ifconfig_mlxen0="up mtu 9000"
ifconfig_mlxen1="up mtu 9000"
cloned_interfaces="lagg0 vlan0 vlan1 vlan2 bridge0"
ifconfig_lagg0="laggproto lacp laggport mlxen0 laggport mlxen1 IP-ADDR/24"
ifconfig_bridge0="addm lagg0 up"
ifconfig_vlan0="inet IP-ADDR/24 vlan 3 vlandev lagg0"
ifconfig_vlan1="inet IP-ADDR/24 vlan 4 vlandev lagg0"
ifconfig_vlan2="inet IP-ADDR/24 vlan 5 vlandev lagg0"
defaultrouter="192.168.1.1"
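
For completeness, the aggregate and the bridge membership can be sanity-checked
with plain ifconfig; these commands only read state and are listed here as a
sketch:

# Show the LACP protocol and member ports of the aggregate
ifconfig lagg0 | grep -E 'laggproto|laggport'
# Show which interfaces are currently members of the bridge
ifconfig bridge0 | grep member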

Epair interfaces:
For some reason the epair0(A|B) interfaces report 10Gb even though they are on
a 40Gb bridge. Even though they show 10Gb, the test sends data faster than the
reported interface speed, e.g. the 15.1 Gbits/sec above.
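
As far as I can tell, the 10Gb media that epair reports is a fixed, cosmetic
value rather than an actual rate limit, which would explain why throughput can
exceed it. The advertised media types and offload capabilities can be listed
with ifconfig -m, e.g.:

# List supported media and capabilities on the host-side epair end
ifconfig -m epair140a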

My question is: why the huge performance difference?
Is my configuration wrong?
Is the Vnet overhead simply that high?
Are there network interface flags I should be using for Vnet (txcsum/rxcsum,
lro, tso, etc.)?
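
To make that last question concrete, the kind of toggling I have in mind is
sketched below; whether epair honors these flags at all is exactly what I am
unsure about (the capabilities reported by ifconfig -m would be authoritative,
and the jail name is a placeholder):

# Host side of the pair
ifconfig epair140a txcsum rxcsum tso lro
# Jail side of the pair
jexec testjail ifconfig epair140b txcsum rxcsum tso lro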

I'm reporting this as a bug because I'm guessing a 50%+ reduction in
performance is not intended.

-- 
You are receiving this mail because:
You are the assignee for the bug.


