Date:      Wed, 22 Apr 2020 01:06:48 +0800
From:      k simon <moremore2@outlook.com>
To:        freebsd-current@freebsd.org
Subject:   Re: CFT: if_bridge performance improvements
Message-ID:  <HK0PR03MB30265A0238713E3F83684897EED50@HK0PR03MB3026.apcprd03.prod.outlook.com>
In-Reply-To: <c6fdbfc0-21b7-726c-e7e8-0fdfe843d5d9@rlwinm.de>
References:  <5377E42E-4C01-4BCC-B934-011AC3448B54@FreeBSD.org> <CAAoTqfvKcgX8nMMZh3V3g_KUy3iwAmgBt%2BMFKfq_HOkYXMiFhw@mail.gmail.com> <c6fdbfc0-21b7-726c-e7e8-0fdfe843d5d9@rlwinm.de>

Hi,
   Interesting, maybe ng_eiface + if_bridge is a good idea.
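
The ng_eiface + if_bridge combination mentioned above could be set up roughly like this (a sketch only; commands follow ngctl(8) and ng_eiface(4), and the ngeth0/bridge0 names are illustrative, not taken from this thread):

```shell
# Load the netgraph Ethernet-interface module
kldload ng_eiface

# Create an ng_eiface node; the kernel names its interface ngeth0.
# mkpeer syntax: mkpeer <type> <ourhook> <peerhook>
ngctl mkpeer eiface dummy ether

# Create a bridge and add the netgraph interface as a member
ifconfig bridge0 create
ifconfig bridge0 addm ngeth0 up
ifconfig ngeth0 up
```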

Simon
20200422

On 4/21/20 8:51 PM, Jan Bramkamp wrote:
> On 16.04.20 08:34, Pavel Timofeev wrote:
> 
>> Tue, Apr 14, 2020, 12:51 Kristof Provost <kp@freebsd.org>:
>>
>>> Hi,
>>>
>>> Thanks to support from The FreeBSD Foundation I’ve been able to work
>>> on improving the throughput of if_bridge.
>>> It changes the (data path) locking to use the NET_EPOCH infrastructure.
>>> Benchmarking shows substantial improvements (x5 in test setups).
>>>
>>> This work is ready for wider testing now.
>>>
>>> It’s under review here: https://reviews.freebsd.org/D24250
>>>
>>> Patch for CURRENT: https://reviews.freebsd.org/D24250?download=true
>>> Patches for stable/12:
>>> https://people.freebsd.org/~kp/if_bridge/stable_12/
>>>
>>> I’m not currently aware of any panics or issues resulting from these
>>> patches.
>>>
>>> Do note that if you run a Bhyve + tap on bridges setup the tap code
>>> suffers from a similar bottleneck and you will likely not see major
>>> improvements in single VM to host throughput. I would expect, but have
>>> not tested, improvements in overall throughput (i.e. when multiple VMs
>>> send traffic at the same time).
>>>
>>> Best regards,
>>> Kristof
>>>
>> Hi!
>> Thank you for your work!
>> Do you know if epair suffers from the same issue as tap?
> 
> As Kristof Provost said, if_epair has per-CPU locks, but a problem 
> exists a layer above the epair driver. At least on FreeBSD 12.0 and 12.1 
> all the packet processing happens in a single netisr thread that becomes 
> CPU bound and limits how fast useful traffic can move through epair 
> interfaces. AFAIK TCP doesn't benefit from multiple netisr threads, but 
> unordered protocols (e.g. UDP) could profit from multiple threads.
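
The netisr threading behavior described above can be inspected and tuned via sysctl; these knob names come from netisr(9), and the values shown are common suggestions, not settings verified in this thread:

```shell
# Show the current netisr configuration and per-protocol dispatch policy
sysctl net.isr

# Loader tunables (set in /boot/loader.conf, take effect at boot):
#   net.isr.maxthreads=-1    # one netisr thread per CPU instead of one total
#   net.isr.bindthreads=1    # pin each netisr thread to a CPU

# Runtime knob: deferred dispatch queues packets to netisr threads
# rather than processing them in the caller's context
sysctl net.isr.dispatch=deferred
```

Note that, as the quoted message says, a single TCP stream will still be handled by one thread; extra netisr threads mainly help aggregate throughput across flows.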
> 
> 
> I have only tested with iperf (using multiple connections) between the 
> FreeBSD 12.x host and a vnet-enabled jail connected via an epair 
> interface, and maxed out at about 1-2 Gb/s depending on the CPU's 
> single-threaded throughput.
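
A benchmark of the kind described above could be reproduced roughly as follows (a sketch; the jail name, addresses, and iperf3 usage are illustrative assumptions, not details from the thread):

```shell
# Create an epair; the kernel reports the "a" end, e.g. epair0a/epair0b
ifconfig epair create

# Start a minimal vnet jail and move the "b" end into it
jail -c name=bench vnet persist
ifconfig epair0b vnet bench

# Address both ends
ifconfig epair0a 10.99.0.1/24 up
jexec bench ifconfig epair0b 10.99.0.2/24 up

# Run the server in the jail, then drive multiple parallel streams
jexec bench iperf3 -s -D
iperf3 -c 10.99.0.2 -P 4
```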
> 
> _______________________________________________
> freebsd-current@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-current
> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"


