Date:      Wed, 24 Aug 2022 19:37:43 -0400
From:      Mike Tancsa <mike@sentex.net>
To:        mike.jakubik@swiftsmsgateway.com
Cc:        "Pieper, Jeffrey E" <jeffrey.e.pieper@intel.com>, Jim King <jim@jimking.net>, "stable@freebsd.org" <stable@freebsd.org>, "kbowling@freebsd.org" <kbowling@freebsd.org>
Subject:   Re: igc problems with heavy traffic
Message-ID:  <2fa9c9d7-1eb9-e7ad-1c19-a6202ac7082b@sentex.net>
In-Reply-To: <182d22a6c6d.1119560c11283607.2998737705092721009@swiftsmsgateway.com>
References:  <fc256428-3ff1-68ba-cfcc-a00ca427e85b@jimking.net> <59b9cec0-d8c2-ce72-b5e9-99d1a1e807f8@sentex.net> <e714cd76-0aaa-3ea0-3c31-5e61badffa18@sentex.net> <86995d10-af63-d053-972e-dd233029f3bf@jimking.net> <3d874f65-8ce2-8f06-f19a-14cd550166e3@sentex.net> <a8192d60-2970-edb5-ce1a-c17ea875bf07@jimking.net> <fd1e825b-c306-64b1-f9ef-fec0344a9c95@sentex.net> <a4ddc96a-3dd5-4fee-8003-05f228d10858@jimking.net> <MW4PR11MB5890493674ADD1757BB47075D0659@MW4PR11MB5890.namprd11.prod.outlook.com> <a9935ba0-9cb2-5a41-ca73-b6962fef5e4d@sentex.net> <879b9239-2b9a-f0ae-4173-4a226c84cd85@sentex.net> <182d22a6c6d.1119560c11283607.2998737705092721009@swiftsmsgateway.com>

On 8/24/2022 7:22 PM, Mike Jakubik wrote:
> What kind of HW are you running on? I'm assuming some sort of fairly
> modern x86 CPU with at least 4 cores. Is it multiple CPUs with NUMA
> nodes perhaps? In any case, if you are testing with iperf3, try using
> cpuset on iperf3 to bind it to specific cores. I had a performance
> issue on a modern EPYC server with a Mellanox 25Gb card. It turned out
> the issue was with the scheduler and how it was bouncing the processes
> around different cores/CPU caches. See "Poor performance with
> stable/13 and Mellanox ConnectX-6 (mlx5)" on the freebsd-net mailing
> list for details.
>
> P.S. I also use a number of igc (Intel I225 @ 2.5Gb) cards at home and
> have had no issues with them.
>
>
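
For reference, the cpuset binding suggested above would look something
like this; the core list and target address are only illustrative:

    # run the iperf3 client on CPUs 2-3 only
    cpuset -l 2-3 iperf3 -c 192.168.1.2 -t 3600
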
Hi,

     Performance is excellent. It's just the random link drops that are
the issue.  With default settings, running iperf3 on back-to-back NICs
via a crossover cable takes a good 20-45 minutes before the link drops.
If anything, I am surprised at how much traffic these small devices can
forward.  IPsec especially is super fast on RELENG_13. The link drops
always seem to be on the sender's side.  With flow control disabled,
reducing the link speed to 1G seems to make the issue go away, or at
least it isn't happening in overnight testing. It's a Celeron N5105:
https://www.aliexpress.com/item/1005003990581434.html
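
In case it helps anyone reproduce this, turning flow control off looks
something like the command below -- this assumes igc exposes the same
fc sysctl as em(4), which is worth verifying first with
"sysctl -d dev.igc.0.fc":

    # 0 = no flow control, 1 = rx pause, 2 = tx pause, 3 = full
    sysctl dev.igc.0.fc=0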

Also, if you hook a couple back to back via a crossover cable, are you
able to manually set the speed to 1G and pass traffic? It doesn't work
for me.
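
For reference, manually forcing the speed would be something like the
following (the interface name is just an example; both ends of the
crossover link need the same fixed media setting):

    # force 1G full duplex instead of autoselect
    ifconfig igc0 media 1000baseT mediaopt full-duplex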

     ---Mike
