Date:      Wed, 24 Aug 2022 20:36:35 -0400
From:      mike tancsa <mike@sentex.net>
To:        Tomoaki AOKI <junchoon@dec.sakura.ne.jp>
Cc:        <mike.jakubik@swiftsmsgateway.com>, "pieper, jeffrey e" <jeffrey.e.pieper@intel.com>, jim king <jim@jimking.net>, <stable@freebsd.org>, <kbowling@freebsd.org>
Subject:   Re: igc problems with heavy traffic
Message-ID:  <182d26dea38.27dc.e68d32c7521a042b3773fe36a0156dc7@sentex.net>
In-Reply-To: <20220825093024.60cf0c6d026644bb83036665@dec.sakura.ne.jp>
References:  <fc256428-3ff1-68ba-cfcc-a00ca427e85b@jimking.net> <59b9cec0-d8c2-ce72-b5e9-99d1a1e807f8@sentex.net> <e714cd76-0aaa-3ea0-3c31-5e61badffa18@sentex.net> <86995d10-af63-d053-972e-dd233029f3bf@jimking.net> <3d874f65-8ce2-8f06-f19a-14cd550166e3@sentex.net> <a8192d60-2970-edb5-ce1a-c17ea875bf07@jimking.net> <fd1e825b-c306-64b1-f9ef-fec0344a9c95@sentex.net> <a4ddc96a-3dd5-4fee-8003-05f228d10858@jimking.net> <MW4PR11MB5890493674ADD1757BB47075D0659@MW4PR11MB5890.namprd11.prod.outlook.com> <a9935ba0-9cb2-5a41-ca73-b6962fef5e4d@sentex.net> <879b9239-2b9a-f0ae-4173-4a226c84cd85@sentex.net> <182d22a6c6d.1119560c11283607.2998737705092721009@swiftsmsgateway.com> <2fa9c9d7-1eb9-e7ad-1c19-a6202ac7082b@sentex.net> <20220825093024.60cf0c6d026644bb83036665@dec.sakura.ne.jp>

Thanks, it was a straight-through Cat6. It works (with the reported
occasional link drops) at autoneg, but if I specify 1G it fails to pass traffic.
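For context, forcing the link speed on FreeBSD is typically done with ifconfig media settings. A minimal sketch, assuming the interface is named igc0 (the thread does not give the interface name):

```shell
# Sketch: forcing 1G on an igc interface (interface name is illustrative).
# Both ends of the link must be forced identically, since a fixed media
# setting disables autonegotiation.
ifconfig igc0 media 1000baseT mediaopt full-duplex

# Return to autonegotiation:
ifconfig igc0 media autoselect
```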

On August 24, 2022 8:31:41 p.m. Tomoaki AOKI <junchoon@dec.sakura.ne.jp> wrote:

> On Wed, 24 Aug 2022 19:37:43 -0400
> mike tancsa <mike@sentex.net> wrote:
>
>> On 8/24/2022 7:22 PM, Mike Jakubik wrote:
>> > What kind of HW are you running on? I'm assuming some sort of fairly
>> > modern x86 CPU with at least 4 cores. Is it multiple CPUs with NUMA
>> > nodes perhaps? In any case, if you are testing with iperf3, try using
>> > cpuset on iperf3 to bind it to specific cores. I had a performance
>> > issue on a modern Epyc server with a Mellanox 25Gb card. It turns out
>> > the issue was with the scheduler and how it was bouncing the processes
>> > around different cores/CPU caches. See "Poor performance with stable/13 and
>> > Mellanox ConnectX-6 (mlx5)" on the freebsd-net mailing list for details.
>> >
>> > P.S. I also use a number of igc (Intel i225 @ 2.5Gb) cards at home and
>> > have had no issues with them.
>> >
>> >
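The cpuset pinning suggested above might look like the following on FreeBSD; the core numbers and target address are illustrative, not from the thread:

```shell
# Sketch: binding iperf3 to specific cores with cpuset(1).
cpuset -l 2 iperf3 -s                  # server, pinned to CPU 2
cpuset -l 3 iperf3 -c 192.0.2.1 -t 60  # client, pinned to CPU 3
```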
>> Hi,
>>
>>      Performance is excellent. It's just the random link drops that are at
>> issue.  With default settings, running iperf3 on back-to-back NICs via
>> xover takes a good 20-45 min before the link drop. If anything, I am
>> surprised at how much traffic these small devices can forward.  IPSEC
>> especially is super fast on RELENG_13. The link drops seem to always be
>> on the sender.  With fc disabled, reducing the link speed to 1G seems to
>> make the issue go away, or at least it's not happening in overnight
>> testing. It's a Celeron N5105:
>> https://www.aliexpress.com/item/1005003990581434.html
>>
>> Also, if you hook a couple back to back via xover cable, are you able to
>> manually set the speed to 1G and pass traffic? It doesn't work for me.
>>
>>      ---Mike
>
> FYI:
>  https://en.wikipedia.org/wiki/Medium-dependent_interface
>
> Maybe you should use a straight-through cable for 1G or faster.
>
>
> --
> Tomoaki AOKI    <junchoon@dec.sakura.ne.jp>
