Date: Mon, 22 Aug 2022 21:04:48 +0000
From: Benoit Chesneau <benoitc@enki-multimedia.eu>
To: Santiago Martinez <sm@codenetworks.net>
Cc: "freebsd-net@FreeBSD.org" <freebsd-net@FreeBSD.org>, Michael Dexter <editor@callfortesting.org>
Subject: Re: 25/100 G performance on freebsd
Message-ID: <qP2TbUmG9Qs4AOAXz9r3nKTjzwkwWSsQcMHGFgS29kCiIRhnWcw5g4-ASMX-VBA_YXoaUD1mbpC37hdJFnt4xzkYC2ksTA-aSN3f1kedsMY=@enki-multimedia.eu>
In-Reply-To: <f8c11d10-c15c-9c1e-e9fd-eea922250391@codenetworks.net>
References: <PK-t3XGZbrHHDgmV_l5kcpPk_2vXVFRijVzpcBtEJd3UWc3iFs7ygJKiHXFAVTaWg5botdaiI85UJdmjxKV268xTH-xf89igEf7axDGqYmc=@enki-multimedia.eu>
 <2f362689-0feb-bd41-93b2-afb46b4a4a08@codenetworks.net>
 <AkeC1lFtRXZ0jmpDIH9sku4ziVstLtRr5TYObntyoPNwPwTowG2o62GaEnjB0Ytkkpx0pYyBDJmBilwgwG6LtL4mZcworq00TEswBr8i9uE=@enki-multimedia.eu>
 <f8c11d10-c15c-9c1e-e9fd-eea922250391@codenetworks.net>
For now I haven't decided how to use them, but I was thinking of using them as separate conduits instead of bonding them, since the connection comes in via an FO 12. The cards are qlnxe or mlxen cards, so unfortunately I am not sure SR-IOV will work. Are you using the ports of your cards separately?

Note that I also have HPE-branded Intel X722 cards in these machines, and as you know they behave badly with SR-IOV. For now they are not plugged into the network, and I was thinking of pulling them from the machines to reduce power usage. Maybe the latest driver update fixed it, but I'm not sure about that; I would need to try the latest driver from Intel, but it is not yet ported and my attempt to port it failed :)

About VALE: are you connecting the switch to the network using an epair or a vether interface?
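For concreteness, a minimal sketch of the direct-attach variant, with no epair or vether involved: the NIC is plugged straight into the VALE switch and the guest gets a port on the same switch. This assumes FreeBSD 13 or newer (where the tool is named valectl); the interface name mlxen0, the VM name, the disk image and the firmware path are all made up for illustration.

    # Plug the physical NIC straight into VALE switch vale0.  While
    # attached, the NIC is taken out of the host network stack.
    valectl -a vale0:mlxen0

    # Boot a guest with a port on the same switch.  The port "vm1" is
    # created on demand by bhyve's netmap backend.  Disk image and UEFI
    # firmware paths (from the uefi-edk2-bhyve package) are assumptions.
    bhyve -c 2 -m 2G -A -H -P \
        -s 0:0,hostbridge \
        -s 3:0,virtio-blk,/vm/vm1/disk.img \
        -s 4:0,virtio-net,vale0:vm1 \
        -s 31:0,lpc -l com1,stdio \
        -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
        vm1

valectl -d vale0:mlxen0 detaches the NIC again, and valectl -l lists the ports netmap currently knows about.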
Benoît Chesneau, Enki Multimedia
—
t. +33608655490

Sent with [Proton Mail](https://proton.me/) secure email.

------- Original Message -------
On Monday, August 15th, 2022 at 12:52, Santiago Martinez <sm@codenetworks.net> wrote:

> Hi Benoit,
>
> I'm not sure what the environment is. Is this to host VNFs? Will those 2x25G links both be forwarding, or active/standby?
>
> In my case I use:
>
> * VALE for inter-VM traffic inside the same host.
>
> * VALE to connect to the external network (hence a physical interface). In my case, Intel 40G NICs.
>
> * SR-IOV for some specific use cases (for example, BNG stress-test tools running on Linux).
>
> For jails:
>
> * I tend to use just VNET. I can't get more than 7.2 Gbps (>1400 B) from an epair without a bridge in the middle.
>
> * Right now I'm doing some tests with RSS enabled, but it is not looking good; actually, no traffic is passing...
>
> If your NICs start to play nice with SR-IOV, you can pass a VF to the jail; some NICs allow creating L2 "high speed" switches in the card (I have never used one).
>
> Regarding L3 (in-kernel), the overhead will be bigger than with VALE, but then you can leverage multipath, VXLAN termination, IPFW, PF, dummynet, etc.
>
> Hope it makes sense.
>
> Santi
>
> On 8/13/22 11:20, Benoit Chesneau wrote:
>
>> Santiago, thanks for the help.
>>
>> I am curious about your VALE setup. Do you have only internal bridges? Do you bridge the NIC interface, or are you doing L3?
>>
>> I am trying to find the most efficient way to use the 25G interfaces while isolating the services on them. I am very hesitant about the approach and unsure whether FreeBSD these days can fit the bill:
>>
>> * run isolated services over the 2x25G links. Would jails limit the bandwidth?
>> * possibly run bhyve VMs when Linux or something else is needed.
>>
>> Would using only L3 routing solve some performance issues?
>>
>> benoit
>>
>> On Wed, Aug 10, 2022 at 23:31, Santiago Martinez <sm@codenetworks.net> wrote:
>>
>>> Hi Benoit, sorry to hear that SR-IOV is still not working on your hardware.
>>>
>>> Have you tested the latest patch from Intel?
>>>
>>> Regarding bhyve, you can use VALE switches (based on netmap).
>>> On my machines, I get around ~33 Gbps between VMs on the same local machine, sometimes approaching 40 Gbps... (These are basic tests with iperf3 and TSO/LRO enabled.)
>>>
>>> @Michael Dexter is working on a document that contains configuration examples and test results for the different network backends available in bhyve.
>>>
>>> If you need help, let me know and we can set up a call.
>>> Take care.
>>> Santi
>>>
>>> On 8/8/22 08:57, Benoit Chesneau wrote:
>>>
>>>> For some reason, I can't use SR-IOV on my FreeBSD machines (HPE DL160 Gen10) with the latest 25G HPE-branded cards. I opened tickets for that, but nothing has moved since.
>>>>
>>>> So I wonder if there is a good setup to use these cards with virtualization. What kind of performance should I expect using if_bridge? What if I do L3 routing instead, using epair or tap (for bhyve)? Would it work better?
>>>>
>>>> Any hint is welcome,
>>>>
>>>> Benoît
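On the VNET point above, a minimal jail.conf sketch of the epair-without-a-bridge setup Santiago describes; every name, path and address here is made up. The b side of the epair moves into the jail, the a side stays on the host for plain L3 routing.

    # /etc/jail.conf -- hypothetical VNET jail wired to one end of an epair
    svc1 {
        host.hostname = "svc1.example.net";
        path = "/jails/svc1";

        vnet;
        vnet.interface = "epair0b";    # this end moves into the jail

        # Create the pair before the jail starts; the a side stays on
        # the host and is addressed for routing.
        exec.prestart = "ifconfig epair0 create && ifconfig epair0a 10.0.0.1/30 up";
        exec.start    = "/bin/sh /etc/rc";
        exec.stop     = "/bin/sh /etc/rc.shutdown";
        # Destroying one end of an epair removes both ends.
        exec.poststop = "ifconfig epair0a destroy";
    }

Inside the jail, epair0b would then get 10.0.0.2/30 from the jail's own rc.conf, and the host routes the service prefix at it.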
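If the X722 SR-IOV bug does get fixed, or the qlnxe/mlxen PF drivers turn out to support it, the FreeBSD side is only a small iovctl(8) config. A sketch, assuming the PF attaches as ixl0; the device name and VF count are assumptions.

    # /etc/iovctl.conf -- hypothetical VF layout on an ixl(4) PF
    PF {
        device : "ixl0";
        num_vfs : 4;
    }

    # By default make the VFs available for bhyve PCI passthrough (ppt).
    DEFAULT {
        passthrough : true;
    }

    # Keep one VF attached to the host, e.g. to hand to a VNET jail.
    VF-0 {
        passthrough : false;
    }

    # Create the VFs:    iovctl -C -f /etc/iovctl.conf
    # Tear them down:    iovctl -D -d ixl0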
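Finally, on the original if_bridge question at the bottom of the thread, the classic bridge-plus-tap backend looks like this for comparison (again with a made-up NIC name); it keeps the NIC in the host stack, but the thread's own numbers suggest it will not match VALE on throughput.

    # Classic if_bridge + tap backend for bhyve; mlxen0 is made up.
    ifconfig bridge0 create
    ifconfig tap0 create
    ifconfig bridge0 addm mlxen0 addm tap0 up
    ifconfig tap0 up

    # The guest then points virtio-net at the tap instead of a VALE port:
    #   bhyve ... -s 4:0,virtio-net,tap0 ...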
