Subject: Re: openvpn and system overhead
To: Wojciech Puchar, freebsd-hackers@freebsd.org
From: Eugene Grosbein <eugen@grosbein.net>
Date: Thu, 18 Apr 2019 07:02:07 +0700

18.04.2019 6:11, Eugene Grosbein wrote:
> 17.04.2019 22:08, Wojciech Puchar wrote:
>
>> I'm running an openvpn server on a Xeon E5-2620 server.
>>
>> When receiving 100Mbit/s of traffic over the VPN it uses 20% of a
>> single core, and at least 75% of that is system time.
>>
>> It seems like 500Mbit/s is the maximum for a single openvpn process.
>>
>> Can anything be done about that to improve performance?
>
> Anyone concerned about performance should stop using solutions that
> process payload traffic with a userland daemon over the common system
> network interfaces: the constant context switching between userland
> and kernel adds large, unavoidable overhead, be it openvpn or any
> other userland daemon.
>
> You need either a netmap-based solution or a kernel-side VPN such as
> IPsec (maybe with L2TP). For me, an IKE daemon plus net/mpd5 work
> just fine.
> mpd5 is a userland daemon too, but it processes only signalling
> traffic, such as session establishment packets, and then sets up
> kernel structures (netgraph nodes) so that payload traffic is
> processed in-kernel only.

Just to clarify: mpd5 still uses the common networking stack and the
system interfaces ng0, ng1 etc. for its p2p tunnels, but it does not
process the tunneled traffic itself, leaving that job to the kernel
(see the sketches at the end of this message).

Back in 2011 I did some measurements of my production mpd5 installation
serving real PPPoE users: FreeBSD 8 and mpd5 on a SuperMicro SuperServer
5016T-MTFB with an Intel Xeon E5507 CPU @ 2.27GHz (4 cores) and four
1Gbit Intel NICs (two on-board 82574L, em0/em1, and one dual-port 82576
card, igb0/igb1).

It handled 1812 simultaneous sessions with each CPU core at about 90%
load (roughly 2% user time and 88% system time), forwarding 1184.9Mbit/s
to users plus 830.9Mbit/s from users (2015.8Mbit/s in total), and dealing
with 136.2Kpps in + 120.3Kpps out on lagg0 (em0+em1, IP-only uplink) and
139.0Kpps in + 154.3Kpps out on lagg1 (igb0+igb1, downlink carrying
PPPoE/vlans and no IP). It processed about 102K interrupts per second at
the time. There was no encryption involved, so the numbers basically
describe the packet forwarding/filtering/shaping ability of the kernel
in those days.
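
To make the context-switching point concrete, here is a minimal sketch
(not openvpn's actual code; the device path, port and peer address are
placeholders) of the per-packet loop any tun-based userland VPN daemon
ends up running. Every payload packet is copied out of the kernel with
read(2) and pushed back in with sendto(2), and the return direction
needs the same pair again, so the syscall/context-switch cost grows with
packet rate no matter how fast the crypto is:

/*
 * Sketch only: the per-packet work of a tun-based userland VPN daemon.
 * Each packet crosses the user/kernel boundary at least twice here
 * (read() from the tun device, sendto() on the UDP socket).
 */
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	char buf[2048];
	ssize_t n;

	/* Placeholder device: a real daemon opens/clones its own tun unit. */
	int tun = open("/dev/tun0", O_RDWR);
	if (tun < 0) {
		perror("open /dev/tun0");
		return (1);
	}

	int udp = socket(AF_INET, SOCK_DGRAM, 0);
	struct sockaddr_in peer;
	memset(&peer, 0, sizeof(peer));
	peer.sin_family = AF_INET;
	peer.sin_port = htons(1194);                   /* placeholder port */
	peer.sin_addr.s_addr = inet_addr("192.0.2.1"); /* placeholder peer */

	for (;;) {
		/* Syscall #1: copy the packet from kernel to userland. */
		n = read(tun, buf, sizeof(buf));
		if (n <= 0)
			break;

		/* Encrypt/encapsulate here, per packet, in userland. */

		/* Syscall #2: copy it back into the kernel to transmit. */
		sendto(udp, buf, (size_t)n, 0,
		    (struct sockaddr *)&peer, sizeof(peer));
	}
	return (0);
}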
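
And here is a minimal sketch of the other side: the netgraph(3) control
path that an mpd5-style daemon uses. The node and hook names below are
made up for illustration, but the idea is that the userland process only
sends control messages to build a graph of kernel nodes; once the hooks
are wired up, payload traffic moves between those nodes without ever
visiting the daemon again. Build with 'cc -o ng_sketch ng_sketch.c
-lnetgraph'.

/*
 * Sketch only: how a userland control daemon creates kernel netgraph
 * nodes via netgraph(3).  The daemon handles signalling; the nodes it
 * creates handle the payload packets entirely in-kernel.
 */
#include <sys/types.h>
#include <netgraph.h>
#include <netgraph/ng_message.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	int cs, ds;
	struct ngm_mkpeer mkp;

	/* Create a control socket node for sending netgraph messages. */
	if (NgMkSockNode("ctl_sketch", &cs, &ds) < 0) {
		perror("NgMkSockNode");
		return (1);
	}

	/* Ask the kernel to attach an ng_iface node (it shows up as ngN). */
	memset(&mkp, 0, sizeof(mkp));
	strlcpy(mkp.type, "iface", sizeof(mkp.type));
	strlcpy(mkp.ourhook, "hook0", sizeof(mkp.ourhook));  /* our side */
	strlcpy(mkp.peerhook, "inet", sizeof(mkp.peerhook)); /* ng_iface */
	if (NgSendMsg(cs, ".:", NGM_GENERIC_COOKIE, NGM_MKPEER,
	    &mkp, sizeof(mkp)) < 0) {
		perror("NgSendMsg(NGM_MKPEER)");
		return (1);
	}

	/*
	 * A real daemon would go on to connect the ng_iface node to e.g.
	 * ng_ppp/ng_l2tp nodes; after that, tunneled packets never have
	 * to enter this process again.
	 */
	return (0);
}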