Date: Tue, 2 Dec 2025 11:10:53 -0500
From: mike tancsa <mike@sentex.net>
To: Sad Clouds <cryintothebluesky@gmail.com>
Cc: FreeBSD Questions <freebsd-questions@freebsd.org>
Subject: Re: Strange Chelsio performance (T6225-CR)
Message-ID: <1e31a376-6b00-451a-ba1c-b7cf88bac999@sentex.net>
In-Reply-To: <20251202154743.41939bf415766de73ecb558e@gmail.com>
References: <4503cca3-dd1d-4bd0-a82c-f9c2866f468a@sentex.net> <20251202154743.41939bf415766de73ecb558e@gmail.com>
On 12/2/2025 10:47 AM, Sad Clouds wrote:
> On Mon, 1 Dec 2025 13:01:28 -0500
> mike tancsa <mike@sentex.net> wrote:
>
>> TL;DR. I can get 10G coming in and out on port 0. Port 1 can only take
>> traffic at ~2.5 Gb/s incoming, but can send out a full 10G.
>
> I've seen asymmetric throughput performance similar to what you report
> with Intel X520-SR2 and when using device passthrough in Bhyve VMs
> running FreeBSD-14.3.
>
> Configuring the switch and network interfaces to use MTU of 9000 bytes
> resolved the issue for me. After this, the send and receive throughput
> performance has remained stable at around 9 Gb/sec.

Thanks for the response. On the one NIC I do indeed have an MTU of 9000,
but the other segment is 1500. What's odd is that I took the exact same
motherboard and Chelsio NICs, set up a test lab, and could not reproduce
the issue with or without the MTU settings; I was able to get a full 10G
on both ports. We thought maybe something was busted about the production
motherboard, but when we swapped out the motherboards, the same thing
happened :( Going to let things settle for a day and try a Mellanox
ConnectX-3 next.

I don't think it's a TCP thing. I just tested /usr/src/tools/tools/netsend

/netsend 192.168.13.254 500 1400 730000 12
Sending packet of payload size 1400 every 0.000001369s for 12 seconds
calling time every 100 cycles
start:             1764691849.000000000
finish:            1764691861.000107338
send calls:        8765600
send errors:       0
approx send rate:  730466 pps
time/packet:       1369 ns
approx error rate: 0
waited:            152052418
approx waits/sec:  12671034
approx wait rate:  17

Which is just shy of 9Gb/s ... So it can handle a 12 second UDP blast,
but it does indeed overflow some. Before and after:

sysctl -a dev.cc.1.stats | egrep "flo|tru"
dev.cc.1.stats.rx_trunc3: 0
dev.cc.1.stats.rx_trunc2: 69707
dev.cc.1.stats.rx_trunc1: 0
dev.cc.1.stats.rx_trunc0: 0
dev.cc.1.stats.rx_ovflow3: 0
dev.cc.1.stats.rx_ovflow2: 8255902
dev.cc.1.stats.rx_ovflow1: 0
dev.cc.1.stats.rx_ovflow0: 0

dev.cc.1.stats.rx_trunc3: 0
dev.cc.1.stats.rx_trunc2: 101592
dev.cc.1.stats.rx_trunc1: 0
dev.cc.1.stats.rx_trunc0: 0
dev.cc.1.stats.rx_ovflow3: 0
dev.cc.1.stats.rx_ovflow2: 12901089
dev.cc.1.stats.rx_ovflow1: 0
dev.cc.1.stats.rx_ovflow0: 0

The same test across cc0, the "good" port:

sysctl -a dev.cc.0.stats | egrep "flo|tru"
dev.cc.0.stats.rx_trunc3: 0
dev.cc.0.stats.rx_trunc2: 0
dev.cc.0.stats.rx_trunc1: 0
dev.cc.0.stats.rx_trunc0: 0
dev.cc.0.stats.rx_ovflow3: 0
dev.cc.0.stats.rx_ovflow2: 0
dev.cc.0.stats.rx_ovflow1: 0
dev.cc.0.stats.rx_ovflow0: 0

    ---Mike
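[Editor's note: a minimal sketch, not part of the original mail, of how the before/after counter check above could be watched continuously during a netsend blast. It assumes the dev.cc.1.stats.rx_ovflow2 sysctl shown in the mail exists on the box under test; adjust the device index for the port being exercised.]

#!/bin/sh
# Sketch: print the per-second growth of the cc1 rx_ovflow2 counter while
# a UDP blast is running, to see when the queue starts overflowing.
prev=$(sysctl -n dev.cc.1.stats.rx_ovflow2)
while true; do
    sleep 1
    cur=$(sysctl -n dev.cc.1.stats.rx_ovflow2)
    echo "rx_ovflow2: +$((cur - prev)) this second (total $cur)"
    prev=$cur
done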
