Date: Fri, 24 Apr 2015 02:10:25 +0300
From: "Andrey V. Elsukov" <bu7cher@yandex.ru>
To: Sydney Meyer <meyer.sydney@googlemail.com>, freebsd-net@freebsd.org
Cc: "Robert N. M. Watson" <rwatson@FreeBSD.org>, George Neville-Neil <gnn@FreeBSD.org>
Subject: Re: IPSec Performance under Xen
Message-ID: <55397BE1.7090403@yandex.ru>
In-Reply-To: <CF189888-FD6B-4407-8360-56206D49DD6D@gmail.com>
References: <CF189888-FD6B-4407-8360-56206D49DD6D@gmail.com>
On 24.04.2015 01:00, Sydney Meyer wrote:
> Hello,
>
> I have set up two VMs under Xen, each running one IPSec endpoint.
> Everything seems to work fine, but (measured with benchmarks/iperf)
> the performance drops from ~10 Gb/s on a non-IPSec kernel to ~200
> Mb/s with IPSec compiled in, regardless of whether IPSec is actually
> used or not.
>
> I have read about the reasoning why IPSec isn't enabled in GENERIC,
> but wanted to ask if this is the kind of performance hit one has to
> expect.

Hi,

I have a guess. Since you use iperf, I think the main bottleneck is the
fact that the socket has a PCB. When IPSEC is compiled into the kernel,
it enables the code that initializes the PCB's security policy, inp_sp,
via ipsec_init_policy(). Then, on output, every packet that has an
associated PCB (iperf uses sockets, so it has a PCB) goes through a
bunch of checks, including several lookups that take exclusive locks.
Even if you don't use any security policies, ALL packets that have an
associated PCB will go through such UNNEEDED checks.

I am not very familiar with this code, but maybe George or Robert can
answer why we do this for every PCB. Why not initialize inp_sp only
when the application requests this configuration via
setsockopt(IP_IPSEC_POLICY)?

--
WBR, Andrey V. Elsukov
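[Editor's note: a minimal userland sketch, not from the original mail, of the setsockopt(IP_IPSEC_POLICY) path Andrey refers to, i.e. the point at which an application explicitly asks for a per-socket policy. It uses the documented libipsec interface (ipsec_set_policy(3)); the policy string and the choice of a UDP socket are illustrative assumptions, and it requires a kernel built with "options IPSEC" plus linking against -lipsec.]

/*
 * Sketch: attach a per-socket IPsec policy via setsockopt(IP_IPSEC_POLICY).
 * Assumes FreeBSD with "options IPSEC"; build with: cc sketch.c -lipsec
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netipsec/ipsec.h>	/* ipsec_set_policy(3), ipsec_get_policylen(3) */
#include <err.h>
#include <stdlib.h>
#include <string.h>

int
main(void)
{
	/* Illustrative policy text; see ipsec_set_policy(3) for the syntax. */
	const char *spec = "out ipsec esp/transport//require";
	char *buf;
	int s;

	s = socket(AF_INET, SOCK_DGRAM, 0);
	if (s < 0)
		err(1, "socket");

	/* Compile the textual policy into the kernel's binary format. */
	buf = ipsec_set_policy(__DECONST(char *, spec), strlen(spec));
	if (buf == NULL)
		errx(1, "ipsec_set_policy: %s", ipsec_strerror());

	/* Attach the policy to this socket's PCB. */
	if (setsockopt(s, IPPROTO_IP, IP_IPSEC_POLICY, buf,
	    ipsec_get_policylen(buf)) < 0)
		err(1, "setsockopt(IP_IPSEC_POLICY)");

	free(buf);
	/* ... traffic sent on this socket is now subject to the policy. */
	return (0);
}

In the lazy-initialization scheme Andrey suggests, this setsockopt() call would be the first point at which inp_sp gets allocated, so sockets that never request a policy would skip the per-packet policy lookups entirely.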