Date:      Thu, 18 Apr 2019 07:02:07 +0700
From:      Eugene Grosbein <eugen@grosbein.net>
To:        Wojciech Puchar <wojtek@puchar.net>, freebsd-hackers@freebsd.org
Subject:   Re: openvpn and system overhead
Message-ID:  <2a380079-1f72-5f1e-30ff-ddd808d2862b@grosbein.net>
In-Reply-To: <0cc6e0ac-a9a6-a462-3a1e-bfccfd41e138@grosbein.net>
References:  <alpine.BSF.2.20.1904171707030.87502@puchar.net> <0cc6e0ac-a9a6-a462-3a1e-bfccfd41e138@grosbein.net>

18.04.2019 6:11, Eugene Grosbein wrote:

> 17.04.2019 22:08, Wojciech Puchar wrote:
> 
>> I'm running an openvpn server on a Xeon E5 2620 server.
>>
>> When receiving 100Mbit/s of traffic over the VPN it uses 20% of a single core,
>> at least 75% of which is system time.
>>
>> It seems like 500Mbit/s is the maximum for a single openvpn process.
>>
>> Can anything be done about that to improve performance?
> 
> Anyone concerned about performance should stop using solutions that process payload traffic
> in a userland daemon while still using the common system network interfaces,
> because of the unavoidable and large overhead of constant context switching
> from user land to kernel land and back, be it openvpn or any other userland daemon.
> 
> You need either some netmap-based solution or a kernel-side VPN such as IPsec (maybe with L2TP).
> For me, an IKE daemon plus net/mpd5 works just fine. mpd5 is a userland daemon too,
> but it processes only signalling traffic such as session-establishment packets,
> and then it sets up kernel structures (netgraph nodes) so that payload traffic is processed in-kernel only.
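
To make the quoted point concrete, here is a minimal sketch (not OpenVPN's
actual code) of the per-packet path of a tun-based userland VPN daemon;
the device path and peer address are hypothetical, and crypto/framing are
omitted. Every forwarded packet costs at least one read() plus one sendto(),
i.e. two user/kernel crossings, and the same again in the reverse direction:

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <fcntl.h>
#include <unistd.h>
#include <err.h>

int
main(void)
{
	unsigned char pkt[2048];
	ssize_t n;

	int tun = open("/dev/tun0", O_RDWR);	/* hypothetical tun device */
	if (tun < 0)
		err(1, "open");

	int udp = socket(AF_INET, SOCK_DGRAM, 0);
	if (udp < 0)
		err(1, "socket");

	struct sockaddr_in peer = {
		.sin_len = sizeof(peer),
		.sin_family = AF_INET,
		.sin_port = htons(1194),	/* OpenVPN's default port */
	};
	peer.sin_addr.s_addr = htonl(0x0a000001);	/* hypothetical peer 10.0.0.1 */

	for (;;) {
		/* syscall #1: copy the packet from kernel to userland */
		n = read(tun, pkt, sizeof(pkt));
		if (n <= 0)
			break;
		/* ... encrypt/encapsulate in userland ... */
		/* syscall #2: copy it back into the kernel for transmission */
		if (sendto(udp, pkt, n, 0,
		    (struct sockaddr *)&peer, sizeof(peer)) < 0)
			err(1, "sendto");
	}
	return (0);
}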

Just to clarify: mpd5 still uses the common networking stack and the system interfaces ng0, ng1 etc. for p2p tunnels,
but it does not process the tunneled traffic itself, leaving that job to the kernel.
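
For readers unfamiliar with netgraph, here is roughly what "setting up kernel
structures (netgraph nodes)" means at the API level. This sketch uses
libnetgraph and the trivial ng_echo(4) node type instead of the PPP-related
types mpd5 actually creates (ng_ppp, ng_pppoe and friends), so it only
illustrates the mechanism; compile with -lnetgraph and kldload ng_echo first:

#include <sys/types.h>
#include <netgraph.h>
#include <netgraph/ng_message.h>
#include <stdio.h>
#include <string.h>
#include <err.h>

int
main(void)
{
	int cs, ds;
	struct ngm_mkpeer mkp;
	char buf[64], hook[NG_HOOKSIZ];

	/*
	 * Create a socket-type netgraph node: cs carries control
	 * messages, ds carries data to/from our hooks.
	 */
	if (NgMkSockNode("demo", &cs, &ds) < 0)
		err(1, "NgMkSockNode");

	/*
	 * Attach an ng_echo(4) node to a hook named "data"; from here
	 * on, traffic through the node graph is handled by the kernel.
	 */
	memset(&mkp, 0, sizeof(mkp));
	strlcpy(mkp.type, "echo", sizeof(mkp.type));
	strlcpy(mkp.ourhook, "data", sizeof(mkp.ourhook));
	strlcpy(mkp.peerhook, "echo", sizeof(mkp.peerhook));
	if (NgSendMsg(cs, ".", NGM_GENERIC_COOKIE, NGM_MKPEER,
	    &mkp, sizeof(mkp)) < 0)
		err(1, "NGM_MKPEER");

	/* Push a test payload through the graph and read it back. */
	if (NgSendData(ds, "data", (const u_char *)"ping", 5) < 0)
		err(1, "NgSendData");
	if (NgRecvData(ds, (u_char *)buf, sizeof(buf), hook) < 0)
		err(1, "NgRecvData");
	printf("echoed back \"%s\" via hook %s\n", buf, hook);
	return (0);
}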

Back in 2011 I did some measurements on my production mpd5 installation serving real PPPoE users:
FreeBSD 8 and mpd5 on a SuperMicro SuperServer 5016T-MTFB with an Intel Xeon E5507 CPU @ 2.27GHz (4 cores)
and four 1Gbit Intel NICs (two on-board 82574L, em0/em1, plus a single two-port 82576 card, igb0/igb1).

It processed 1812 simultaneous sessions while each CPU core had about 90% load (roughly 2% user time and 88% system time),
forwarding 1184.9Mbit/s to users plus 830.9Mbit/s from users (2015.8Mbit/s in total)
and dealing with 136.2Kpps in + 120.3Kpps out on lagg0 (em0+em1, IP-only uplink) and
139.0Kpps in + 154.3Kpps out on lagg1 (igb0+igb1, downlink with PPPoE/vlans and no IP).
The system was handling about 102K interrupts per second at the time.

There was no encryption involved, so these numbers basically describe the packet forwarding/filtering/shaping
ability of the kernel in those days.
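
As for the netmap-based alternative mentioned above: netmap avoids the
per-packet copies and syscalls by mapping the NIC rings into the process's
address space, so a single poll() wakeup can drain a whole batch of packets.
A minimal receive loop with the classic nm_open() API (the interface name
em0 is just an example) might look like this:

#include <poll.h>

#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>

int
main(void)
{
	struct nm_desc *d;
	struct nm_pkthdr h;
	const u_char *buf;
	struct pollfd pfd;

	/*
	 * Attach to the NIC in netmap mode; the rings are mmap()ed
	 * into our address space, so no copy happens per packet.
	 */
	d = nm_open("netmap:em0", NULL, 0, NULL);
	if (d == NULL)
		return (1);

	pfd.fd = NETMAP_FD(d);
	pfd.events = POLLIN;

	for (;;) {
		/* one syscall wakes us up for a whole batch of packets */
		poll(&pfd, 1, -1);
		while ((buf = nm_nextpkt(d, &h)) != NULL) {
			/* process h.len bytes at buf, no syscall per packet */
		}
	}
	/* not reached */
	nm_close(d);
	return (0);
}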



