Date:      Fri, 12 Aug 2022 07:08:15 +0000
From:      Benoit Chesneau <benoitc@enki-multimedia.eu>
To:        Eric Joyner <ricera10@gmail.com>, Santiago Martinez <sm@codenetworks.net>
Cc:        "freebsd-net@FreeBSD.org" <freebsd-net@freebsd.org>, Michael Dexter <editor@callfortesting.org>
Subject:   Re: 25/100 G performance on freebsd
Message-ID:  <Lcxl7PREc5yC_vCilxG1-hrCPwiIcNtSIvxi2CV6YyqU-GNaa0NH_WV38CfxFBVYYvS3MbhPpsYx7E0FpkdK9iz93DtoPydE95iwQU95F_I=@enki-multimedia.eu>
In-Reply-To: <CA+b0zg8n5n1tc3Uqqj+wFrd8rd4ye3SMvayocxAxwFrXuxCeew@mail.gmail.com>
References:  <PK-t3XGZbrHHDgmV_l5kcpPk_2vXVFRijVzpcBtEJd3UWc3iFs7ygJKiHXFAVTaWg5botdaiI85UJdmjxKV268xTH-xf89igEf7axDGqYmc=@enki-multimedia.eu> <2f362689-0feb-bd41-93b2-afb46b4a4a08@codenetworks.net> <CA+b0zg8n5n1tc3Uqqj+wFrd8rd4ye3SMvayocxAxwFrXuxCeew@mail.gmail.com>


Hi,

Yes, indeed these 2x25G NICs are QLogic. I would possibly like to remove the 10G cards, though I can try the new driver. The thing is that without SR-IOV I doubt I am able to get the full capacity of the card in a VM :/ .
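For context, FreeBSD creates SR-IOV virtual functions with iovctl(8); a minimal sketch of what the configuration would look like if the qlnxe driver supported it (the device name ql0 and the VF count are illustrative, not taken from the thread):

```shell
# /etc/iov/ql0.conf -- hypothetical; this is the path that currently
# fails with the ioctl error on qlnxe (see bug 265207 below)
PF {
        device : "ql0";
        num_vfs : 2;
}

DEFAULT {
        passthrough : true;   # hand each VF to a guest via PCI passthrough
}
```

The VFs would then be created with `iovctl -C -f /etc/iov/ql0.conf` and handed to bhyve guests as passthru devices.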

I didn't test VALE yet. I thought it wasn't fully stable. But maybe it is?
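For reference, a VALE switch is attached to a bhyve guest roughly like this (switch name, port name, and VM parameters are illustrative):

```shell
# Ports on a VALE switch (vale0 here) are created on first use; attaching
# two guests to vale0 puts them on the same in-kernel netmap switch.
bhyve -c 2 -m 2G -A -H \
  -s 0,hostbridge \
  -s 3,virtio-net,vale0:vm1 \
  -s 31,lpc -l com1,stdio \
  vm1
```

An iperf3 run between two guests attached to the same vale0 switch is the kind of test Santiago describes below.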

Benoît

Sent from Proton Mail for iOS

On Fri, Aug 12, 2022 at 02:48, Eric Joyner <ricera10@gmail.com> wrote:

> I think Benoit may be referring to this bug? https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=265207
>
> From that, it looks like his cards are QLogic ones that use the qlnxe driver. As for who would know what to do about that ioctl issue, I don't know. I'm not sure if the driver has an active maintainer.
>
> - Eric
>
> On Wed, Aug 10, 2022 at 2:32 PM Santiago Martinez <sm@codenetworks.net> wrote:
>
>> Hi Benoit, sorry to hear that SR-IOV is still not working on your HW.
>>
>> Have you tested the latest patch from Intel?
>>
>> Regarding Bhyve, you can use Vale switches (based on netmap).
>> On my machines, I get around ~33 Gbps between VMs (on the same local machine), sometimes approaching 40 Gbps... (These are basic tests with iperf3 and TSO/LRO enabled.)
>>
>> @Michael Dexter is working on a document that contains configuration examples and test results for the different network backends available in bhyve.
>>
>> If you need help, let me know and we can set up a call.
>> Take care.
>> Santi
>>
>> On 8/8/22 08:57, Benoit Chesneau wrote:
>>
>>> For some reason, I can't use SR-IOV on my FreeBSD machines (HPE DL160 Gen10) with the latest 25G HPE-branded cards. I opened tickets for that, but nothing has moved since then.
>>>
>>> So I wonder if there is a good setup to use these cards with virtualization. What kind of performance should I expect using if_bridge? What if I do L3 routing instead, using epair or tap (for bhyve)? Would that work better?
>>>
>>> Any hint is welcome,
>>>
>>> Benoît
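For the if_bridge and epair setups asked about above, the usual plumbing looks roughly like this (interface names are illustrative; ql0 stands in for the 25G NIC):

```shell
# if_bridge + tap: L2-bridge the guest onto the physical NIC
ifconfig tap0 create
ifconfig bridge0 create
ifconfig bridge0 addm ql0 addm tap0 up
# then give tap0 to the guest:  bhyve ... -s 3,virtio-net,tap0 ... vm1

# epair for L3 routing (e.g. one end inside a vnet jail, host routes)
ifconfig epair0 create            # yields epair0a and epair0b
ifconfig epair0a inet 10.0.0.1/30 up
```

Both paths traverse the host network stack, which is part of why netmap-based VALE setups can post higher VM-to-VM numbers.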
