Date: Mon, 15 Aug 2022 12:52:38 +0200
From: Santiago Martinez <sm@codenetworks.net>
To: Benoit Chesneau <benoitc@enki-multimedia.eu>, "freebsd-net@FreeBSD.org" <freebsd-net@FreeBSD.org>, Michael Dexter <editor@callfortesting.org>
Subject: Re: 25/100 G performance on freebsd
Message-ID: <f8c11d10-c15c-9c1e-e9fd-eea922250391@codenetworks.net>
In-Reply-To: <AkeC1lFtRXZ0jmpDIH9sku4ziVstLtRr5TYObntyoPNwPwTowG2o62GaEnjB0Ytkkpx0pYyBDJmBilwgwG6LtL4mZcworq00TEswBr8i9uE=@enki-multimedia.eu>
References: <PK-t3XGZbrHHDgmV_l5kcpPk_2vXVFRijVzpcBtEJd3UWc3iFs7ygJKiHXFAVTaWg5botdaiI85UJdmjxKV268xTH-xf89igEf7axDGqYmc=@enki-multimedia.eu> <2f362689-0feb-bd41-93b2-afb46b4a4a08@codenetworks.net> <AkeC1lFtRXZ0jmpDIH9sku4ziVstLtRr5TYObntyoPNwPwTowG2o62GaEnjB0Ytkkpx0pYyBDJmBilwgwG6LtL4mZcworq00TEswBr8i9uE=@enki-multimedia.eu>
Hi Benoit,
I'm not sure what the environment is. Is this to host VNFs? Will both
of those 2x25G links be forwarding, or are they active/standby?
In my case I use:

  * VALE for inter-VM traffic inside the same host (see the sketch
    after this list).
  * VALE to connect to the external network (hence attached to a
    physical interface), in my case Intel 40G NICs.
  * SR-IOV for some specific use cases (for example, BNG stress-test
    tools running on Linux).
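
For reference, the VALE plumbing is only a couple of commands. This is
a minimal sketch, not my exact setup: the switch name (vale0), port
names (ixl0, vm1) and guest paths are made-up examples, and on
releases before 13.0 the tool ships as vale-ctl instead of valectl(8):

    # attach the physical port (ixl0 here) to the software switch vale0
    valectl -a vale0:ixl0

    # start a bhyve guest with a virtio-net port on the same switch
    bhyve -c 2 -m 2G -H \
        -s 0,hostbridge -s 1,lpc \
        -s 2,virtio-net,vale0:vm1 \
        -s 3,virtio-blk,/vm/vm1/disk.img \
        -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
        -l com1,stdio \
        vm1

Any two ports on the same vale0 switch (VM to VM, or VM to ixl0) get
switched in netmap without going through if_bridge.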
For jails:

  * I tend to use just VNET (plumbing sketched below). I can't get
    more than 7.2 Gbps (with >1400-byte packets) from an epair without
    a bridge in the middle.
  * Right now I'm doing some tests with RSS enabled, but it's not
    looking good; actually, no traffic is passing at all...
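
In case the VNET plumbing isn't familiar, it is roughly the following
(jail name, paths and addresses are placeholders):

    # host: create the virtual back-to-back pair
    ifconfig epair0 create

    # /etc/jail.conf fragment: give the jail its own network stack
    # and move epair0b into it
    myjail {
        path = "/jails/myjail";
        vnet;
        vnet.interface = "epair0b";
        exec.start = "/bin/sh /etc/rc";
    }

    # host: either route via epair0a or bridge it to the NIC
    ifconfig bridge0 create
    ifconfig bridge0 addm epair0a addm ixl0 up

    # inside the jail
    ifconfig epair0b inet 192.0.2.2/24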
If your NICs start to play nice with SR-IOV, you can pass a VF to the
jail (a minimal iovctl sketch follows below); some NICs also allow
creating L2 "high speed" switches in the card (I've never used one).
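
The VFs themselves are created with iovctl(8). A minimal sketch,
assuming an ixl(4) port; the device name and VF count are examples:

    # /etc/iov/ixl0.conf
    PF {
        device : "ixl0";
        num_vfs : 4;
    }

    # passthrough : false keeps the VFs as normal host interfaces,
    # so they can be handed to a VNET jail via vnet.interface
    DEFAULT {
        passthrough : false;
    }

    # create the VFs
    iovctl -C -f /etc/iov/ixl0.conf

If I remember right, on recent ixl hardware each VF then attaches as
an iavf(4) interface that you can put straight into a jail's
vnet.interface.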
Regarding L3 (in-kernel) routing, the overhead will be bigger than
with VALE, but then you can leverage multipath, VXLAN termination,
IPFW, PF, dummynet, etc. Enabling it is only a couple of commands
(see below).
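
By "in-kernel L3" I just mean the stock forwarding path, nothing
exotic; interface names and addresses below are placeholders:

    # enable IPv4 forwarding now, and persist it across reboots
    sysctl net.inet.ip.forwarding=1
    sysrc gateway_enable="YES"

    # number each side and let the kernel route between them
    ifconfig ixl0 inet 10.0.0.1/24
    ifconfig ixl1 inet 10.0.1.1/24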
Hope it makes sense.
Santi
On 8/13/22 11:20, Benoit Chesneau wrote:
>
> Santiago, thanks for the help.
>
> I am curious about your VALE setup. Do you have only internal bridges?
> Do you bridge the NIC interface, or are you doing L3?
>
> I am trying to find what would be the most efficient way to use the
> 25G interfaces while isolating the services on them. I am very
> hesitant about the approach and unsure whether FreeBSD these days can
> fit the bill:
>
> * run isolated services over the 2x25G. Would jails limit the bandwidth?
> * possibly run services in bhyve when Linux or something else is needed.
>
> Would using only L3 routing solve some performance issues?
>
>
> benoit
>
>
> On Wed, Aug 10, 2022 at 23:31, Santiago Martinez <sm@codenetworks.net>
> wrote:
>> Hi Benoit, sorry to hear that SR-IOV is still not working on your HW.
>>
>> Have you tested the latest patch from Intel?
>>
>> Regarding bhyve, you can use VALE switches (based on netmap).
>> On my machines, I get around 33 Gbps between VMs (on the same local
>> machine), sometimes approaching 40 Gbps... (These are basic tests
>> with iperf3 and TSO/LRO enabled.)
>>
>> @Michael Dexter is working on a document that contains configuration
>> examples and test results for the different network backends
>> available in bhyve.
>>
>> If you need help, let me know and we can set up a call.
>> Take care.
>> Santi
>>
>> On 8/8/22 08:57, Benoit Chesneau wrote:
>>> For some reason, I can't use SR-IOV on my FreeBSD machines (HPE
>>> DL160 Gen10) with the latest 25G HPE-branded cards. I opened tickets
>>> for that, but nothing has moved since then.
>>>
>>> So I wonder if there is a good setup to use these cards with
>>> virtualization. What kind of performance should I expect using
>>> if_bridge? What if I do L3 routing instead, using epair or tap (for
>>> bhyve)? Would that work better?
>>>
>>> Any hint is welcome,
>>>
>>> Benoît
>>>