Date:      Tue, 21 Apr 2020 14:51:07 +0200
From:      Jan Bramkamp <crest@rlwinm.de>
To:        freebsd-current@freebsd.org
Subject:   Re: CFT: if_bridge performance improvements
Message-ID:  <c6fdbfc0-21b7-726c-e7e8-0fdfe843d5d9@rlwinm.de>
In-Reply-To: <CAAoTqfvKcgX8nMMZh3V3g_KUy3iwAmgBt+MFKfq_HOkYXMiFhw@mail.gmail.com>
References:  <5377E42E-4C01-4BCC-B934-011AC3448B54@FreeBSD.org> <CAAoTqfvKcgX8nMMZh3V3g_KUy3iwAmgBt+MFKfq_HOkYXMiFhw@mail.gmail.com>

On 16.04.20 08:34, Pavel Timofeev wrote:

> Tue, 14 Apr 2020 at 12:51, Kristof Provost <kp@freebsd.org>:
>
>> Hi,
>>
>> Thanks to support from The FreeBSD Foundation I’ve been able to work
>> on improving the throughput of if_bridge.
>> It changes the (data path) locking to use the NET_EPOCH infrastructure.
>> Benchmarking shows substantial improvements (x5 in test setups).
>>
>> This work is ready for wider testing now.
>>
>> It’s under review here: https://reviews.freebsd.org/D24250
>>
>> Patch for CURRENT: https://reviews.freebsd.org/D24250?download=true
>> Patches for stable/12:
>> https://people.freebsd.org/~kp/if_bridge/stable_12/
>>
>> I’m not currently aware of any panics or issues resulting from these
>> patches.
>>
>> Do note that if you run a Bhyve + tap on bridges setup the tap code
>> suffers from a similar bottleneck and you will likely not see major
>> improvements in single VM to host throughput. I would expect, but have
>> not tested, improvements in overall throughput (i.e. when multiple VMs
>> send traffic at the same time).
>>
>> Best regards,
>> Kristof
>>
> Hi!
> Thank you for your work!
> Do you know if epair suffers from the same issue as tap?

As Kristof Provost said, if_epair has per-CPU locks, but a problem 
exists a layer above the epair driver. At least on FreeBSD 12.0 and 12.1 
all the packet processing happens in a single netisr thread that becomes 
CPU bound and limits how fast useful traffic can move through epair 
interfaces. Afaik TCP doesn't benefit from multiple netisr threads, but 
unordered protocols (e.g. UDP) could profit from multiple threads.
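
If you want to check whether a single netisr thread is the limit on your 
own box, the standard netisr knobs can be inspected with netstat/sysctl 
and tuned from loader.conf. A minimal sketch (the values below are only 
examples, not recommendations, and I haven't verified that more threads 
actually help epair on 12.x):

    # show the netisr workstreams and per-protocol queue/drop counters
    netstat -Q

    # current thread count and dispatch policy
    sysctl net.isr.maxthreads net.isr.dispatch

    # /boot/loader.conf (needs a reboot): allow more netisr threads
    net.isr.maxthreads="4"
    net.isr.bindthreads="1"

    # dispatch policy (direct/hybrid/deferred) can be changed at runtime
    sysctl net.isr.dispatch=deferred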


I have only tested with iperf (using multiple connections) between the 
FreeBSD 12.x host and a vnet-enabled jail connected via an epair 
interface and maxed out at about 1-2 Gb/s depending on the CPU's single 
threaded throughput.
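
A minimal sketch of that kind of test setup, in case anyone wants to 
reproduce it (the jail name and addresses are just placeholders, and 
iperf3 is used here as the example client/server):

    # create an epair and move one end into a vnet jail
    ifconfig epair0 create
    ifconfig epair0a 192.0.2.1/24 up
    jail -c name=testjail path=/ vnet=new persist
    ifconfig epair0b vnet testjail
    jexec testjail ifconfig epair0b 192.0.2.2/24 up
    jexec testjail ifconfig lo0 up

    # server in the jail, multiple parallel streams from the host
    jexec testjail iperf3 -s -D
    iperf3 -c 192.0.2.2 -P 8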



