Date:      Sat, 19 Oct 2019 20:23:25 +0300
From:      Paul <devgs@ukr.net>
To:        Michael Tuexen <michael.tuexen@lurchi.franken.de>
Cc:        freebsd-net@freebsd.org, freebsd-stable@freebsd.org
Subject:   Re[2]: Network anomalies after update from 11.2 STABLE to 12.1 STABLE
Message-ID:  <1571505335.800858000.sqrselsr@frv39.fwdcdn.com>
In-Reply-To: <F80C784B-1653-4CEB-B131-E7FAC5F55675@lurchi.franken.de>
References:  <1571499556.409350000.a1ewtyar@frv39.fwdcdn.com> <F80C784B-1653-4CEB-B131-E7FAC5F55675@lurchi.franken.de>



19 October 2019, 19:35:24, by "Michael Tuexen" <michael.tuexen@lurchi.franken.de>:

> > On 19. Oct 2019, at 18:09, Paul <devgs@ukr.net> wrote:
> > 
> > Hi Michael,
> > 
> > Thank you for taking the time!
> > 
> > We use physical machines. We do not have any special `pf` rules.
> > Both sides ran `pfctl -d` before testing.
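> > 
> > A quick sketch of how pf status can be double-checked on both sides,
> > using the standard pfctl(8) status query (it should report
> > "Status: Disabled" here):
> > 
> >     pfctl -s info | head -1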
> Hi Paul,
> 
> OK. How are the physical machines connected to each other?

We have tested different connections: the old copper Ethernet cable
as well as an optical link, with an identical outcome. The machines
are connected through a Juniper QFX5100.


> 
> What happens when you don't use a lagg interface, but the physical ones?
> 
> (Trying to localise the problem...)

Same thing: lagg does not change anything. Originally, the problem was
observed on a regular interface.
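
For reference, dropping back to a bare physical interface on the test
box is roughly just (names and address as in the configs quoted below):

    ifconfig lagg0 destroy
    ifconfig ixl0 inet 10.10.10.92/24 up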


We have also tested on different hardware. Results are consistently
stable on 11.2-STABLE and consistently unstable on 12.1-STABLE.
The only constant is the network card vendor: Intel.

> 
> Best regards
> Michael
> > 
> > 
> > `nginx` config is primitive, no secrets there:
> > 
> > -------------------------------------------------------------------
> > user  www;
> > worker_processes  auto;
> > 
> > error_log  /var/log/nginx/error.log warn;
> > 
> > events {
> >     worker_connections  81920;
> >     kqueue_changes  4096;
> >     use kqueue;
> > }
> > 
> > http {
> >     include                     mime.types;
> >     default_type                application/octet-stream;
> > 
> >     sendfile                    off;
> >     keepalive_timeout           65;
> >     tcp_nopush                  on;
> >     tcp_nodelay                 on;
> > 
> >     # Logging
> >     log_format  main            '$remote_addr - $remote_user [$time_local] "$request" '
> >                                 '$status $request_length $body_bytes_sent "$http_referer" '
> >                                 '"$http_user_agent" "$http_x_real_ip" "$realip_remote_addr" "$request_completion" "$request_time" '
> >                                 '"$request_body"';
> > 
> >     access_log                  /var/log/nginx/access.log  main;
> > 
> >     server {
> >         listen                  80 default;
> > 
> >         server_name             localhost _;
> > 
> >         location / {
> >             return 404;
> >         }
> >     }
> > }
> > -------------------------------------------------------------------
> > 
> > 
> > `wrk` is compiled with a default configuration. We test like this:
> > 
> > `wrk -c 10 --header "Connection: close" -d 10 -t 1 --latency http://10.10.10.92:80/missing`
> > 
> > 
> > Also, it seems that our issue and the one described in this thread are identical:
> > 
> >    https://lists.freebsd.org/pipermail/freebsd-net/2019-June/053667.html
> > 
> > We both have Intel network cards, BTW. Our network cards are these:
> > 
> > em0 at pci0:10:0:0:        class=0x020000 card=0x000015d9 chip=0x10d38086 rev=0x00 hdr=0x00
> >     vendor     = 'Intel Corporation'
> >     device     = '82574L Gigabit Network Connection'
> > 
> > ixl0 at pci0:4:0:0:        class=0x020000 card=0x00078086 chip=0x15728086 rev=0x01 hdr=0x00
> >     vendor     = 'Intel Corporation'
> >     device     = 'Ethernet Controller X710 for 10GbE SFP+'
> > 
> > 
> > ==============================
> > 
> > Additional info:
> > 
> > During the tests, we have bonded two interfaces into a lagg:
> > 
> > ixl0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
> >         options=c500b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWFILTER,VLAN_HWTSO,TXCSUM_IPV6>
> >         ether 3c:fd:fe:aa:60:20
> >         media: Ethernet autoselect (10Gbase-SR <full-duplex>)
> >         status: active
> >         nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
> > ixl1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
> >         options=c500b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWFILTER,VLAN_HWTSO,TXCSUM_IPV6>
> >         ether 3c:fd:fe:aa:60:20
> >         hwaddr 3c:fd:fe:aa:60:21
> >         media: Ethernet autoselect (10Gbase-SR <full-duplex>)
> >         status: active
> >         nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
> > 
> > 
> > lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
> >         options=c500b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWFILTER,VLAN_HWTSO,TXCSUM_IPV6>
> >         ether 3c:fd:fe:aa:60:20
> >         inet 10.10.10.92 netmask 0xffff0000 broadcast 10.10.255.255
> >         laggproto failover lagghash l2,l3,l4
> >         laggport: ixl0 flags=5<MASTER,ACTIVE>
> >         laggport: ixl1 flags=0<>
> >         groups: lagg
> >         media: Ethernet autoselect
> >         status: active
> >         nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
> > 
> > using this config:
> > 
> >     ifconfig_ixl0="up -lro -tso -rxcsum -txcsum"  (tried different options - got the same outcome)
> >     ifconfig_ixl1="up -lro -tso -rxcsum -txcsum"
> >     ifconfig_lagg0="laggproto failover laggport ixl0 laggport ixl1 10.10.10.92/24"
> > 
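> > For reference, the runtime equivalent of those rc.conf lines is
> > roughly the following, using plain ifconfig(8) with no reboot:
> > 
> >     ifconfig ixl0 up -lro -tso -rxcsum -txcsum
> >     ifconfig ixl1 up -lro -tso -rxcsum -txcsum
> >     ifconfig lagg0 create
> >     ifconfig lagg0 laggproto failover laggport ixl0 laggport ixl1 10.10.10.92/24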
> > 
> > We have randomly picked `ixl0` and restricted its number of RX/TX queues to 1:
> >     /boot/loader.conf :
> >     dev.ixl.0.iflib.override_ntxqs=1
> >     dev.ixl.0.iflib.override_nrxqs=1
> > 
> > leaving `ixl1` with the default number, matching the number of cores (6).
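> > 
> > A quick sketch of how to confirm what actually got applied after boot
> > (the same numbers show up in the probe messages below):
> > 
> >     dmesg | grep -E 'ixl[01]: Using .* queues'
> >     sysctl dev.ixl.0.iflib.override_nrxqs dev.ixl.0.iflib.override_ntxqs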
> > 
> > 
> >     ixl0: <Intel(R) Ethernet Controller X710 for 10GbE SFP+ - 2.1.0-k> mem 0xf8800000-0xf8ffffff,0xf9808000-0xf980ffff irq 40 at device 0.0 on pci4
> >     ixl0: fw 5.0.40043 api 1.5 nvm 5.05 etid 80002927 oem 1.261.0
> >     ixl0: PF-ID[0]: VFs 64, MSI-X 129, VF MSI-X 5, QPs 768, I2C
> >     ixl0: Using 1024 TX descriptors and 1024 RX descriptors
> >     ixl0: Using 1 RX queues 1 TX queues
> >     ixl0: Using MSI-X interrupts with 2 vectors
> >     ixl0: Ethernet address: 3c:fd:fe:aa:60:20
> >     ixl0: Allocating 1 queues for PF LAN VSI; 1 queues active
> >     ixl0: PCI Express Bus: Speed 8.0GT/s Width x4
> >     ixl0: SR-IOV ready
> >     ixl0: netmap queues/slots: TX 1/1024, RX 1/1024
> >     ixl1: <Intel(R) Ethernet Controller X710 for 10GbE SFP+ - 2.1.0-k> mem 0xf8000000-0xf87fffff,0xf9800000-0xf9807fff irq 40 at device 0.1 on pci4
> >     ixl1: fw 5.0.40043 api 1.5 nvm 5.05 etid 80002927 oem 1.261.0
> >     ixl1: PF-ID[1]: VFs 64, MSI-X 129, VF MSI-X 5, QPs 768, I2C
> >     ixl1: Using 1024 TX descriptors and 1024 RX descriptors
> >     ixl1: Using 6 RX queues 6 TX queues
> >     ixl1: Using MSI-X interrupts with 7 vectors
> >     ixl1: Ethernet address: 3c:fd:fe:aa:60:21
> >     ixl1: Allocating 8 queues for PF LAN VSI; 6 queues active
> >     ixl1: PCI Express Bus: Speed 8.0GT/s Width x4
> >     ixl1: SR-IOV ready
> >     ixl1: netmap queues/slots: TX 6/1024, RX 6/1024
> > 
> > 
> > This allowed us to easily switch between the different configurations
> > without the need to reboot, by simply shutting down one interface or the other:
> > 
> >     `ifconfig XXX down`
> > 
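> > For example, since the lagg is failover, traffic simply moves to
> > whichever port is still up:
> > 
> >     ifconfig ixl1 down                     # test the single-queue ixl0 path
> >     ifconfig ixl1 up; ifconfig ixl0 down   # test the 6-queue ixl1 path
> > 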
> > When testing `ixl0` that runs only a single queue:
> >     ixl0: Using 1 RX queues 1 TX queues
> >     ixl0: netmap queues/slots: TX 1/1024, RX 1/1024
> > 
> > we got these results:
> > 
> > `wrk -c 10 --header "Connection: close" -d 10 -t 1 --latency http://10.10.10.92:80/missing`
> > Running 10s test @ http://10.10.10.92:80/missing
> >   1 threads and 10 connections
> >   Thread Stats   Avg      Stdev     Max   +/- Stdev
> >     Latency   281.31us  297.74us  22.66ms   99.70%
> >     Req/Sec    19.91k     2.79k   21.25k    97.59%
> >   Latency Distribution
> >      50%  266.00us
> >      75%  309.00us
> >      90%  374.00us
> >      99%  490.00us
> >   164440 requests in 10.02s, 47.52MB read
> >   Socket errors: read 0, write 0, timeout 0
> >   Non-2xx or 3xx responses: 164440
> > Requests/sec:  16412.09
> > Transfer/sec:      4.74MB
> > 
> > 
> > When testing `ixl1` that runs 6 queues:
> >     ixl1: Using 6 RX queues 6 TX queues
> >     ixl1: netmap queues/slots: TX 6/1024, RX 6/1024
> > 
> > we got these results:
> > 
> > `wrk -c 10 --header "Connection: close" -d 10 -t 1 --latency http://10.10.10.92:80/missing`
> > Running 10s test @ http://10.10.10.92:80/missing
> >   1 threads and 10 connections
> >   Thread Stats   Avg      Stdev     Max   +/- Stdev
> >     Latency   216.16us   71.97us 511.00us   47.56%
> >     Req/Sec     4.34k     2.76k   15.44k    83.17%
> >   Latency Distribution
> >      50%  216.00us
> >      75%  276.00us
> >      90%  312.00us
> >      99%  365.00us
> >   43616 requests in 10.10s, 12.60MB read
> >   Socket errors: connect 0, read 24, write 8, timeout 0
> >   Non-2xx or 3xx responses: 43616
> > Requests/sec:   4318.26
> > Transfer/sec:      1.25MB
> > 
> > Do note that not only do multiple queues cause issues, they also
> > dramatically decrease network performance.
> > 
> > Using `sysctl -w net.inet.tcp.ts_offset_per_conn=0` didn't help at all.
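> > 
> > For completeness, a sketch of how the related knobs can be inspected
> > (rfc1323 is the stock toggle for TCP timestamps/window scaling):
> > 
> >     sysctl net.inet.tcp.ts_offset_per_conn net.inet.tcp.rfc1323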
> > 
> > Best regards,
> > -Paul
> > 
> > 
> 
> 



