From: "Bjoern A. Zeeb" <bzeeb-lists@lists.zabbadoz.net>
To: "Rodney W.
Grimes" Cc: "Kristof Provost" , "Ernie Luzar" , "FreeBSD current" Subject: Re: 12.0-BETA1 vnet with pf firewall Date: Tue, 30 Oct 2018 14:59:25 +0000 X-Mailer: MailMate (2.0BETAr6125) Message-ID: <9D50D781-73BA-45B0-ADBB-CF01DE587BC5@lists.zabbadoz.net> In-Reply-To: <201810301414.w9UEEK9v061805@pdx.rh.CN85.dnsmgr.net> References: <201810301414.w9UEEK9v061805@pdx.rh.CN85.dnsmgr.net> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8; format=flowed Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-current@freebsd.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Discussions about the use of FreeBSD-current List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 30 Oct 2018 14:59:35 -0000 On 30 Oct 2018, at 14:14, Rodney W. Grimes wrote: >> On 30 Oct 2018, at 14:29, Bjoern A. Zeeb wrote: >>> On 30 Oct 2018, at 12:23, Kristof Provost wrote: >>>> I?m not too familiar with this part of the vnet code, but it looks >>>> to me like we?ve got more per-vnet variables that was originally >>>> anticipated, so we may need to just increase the allocated space. >>> >>> Can you elfdump -a the two modules and see how big their set_vnet >>> section sizes are? I see: >>> >>> pf.ko: sh_size: 6664 >>> ipl.ko: sh_size: 2992 >>> >> I see exactly the same numbers. >> >>> VNET_MODMIN is two pages (8k). So yes, that would exceed the module >>> space. >>> Having 6.6k global variable space is a bit excessive? Where does >>> that >>> come from? multicast used to have a similar problem in the past >>> that >>> it could not be loaded as a module as it had a massive array there >>> and >>> we changed it to be malloced and that reduced it to a pointer. >>> >>> 0000000000000f38 l O set_vnet 0000000000000428 >>> vnet_entry_pfr_nulltable >> That?s a default table. It?s large because it uses MAXPATHLEN for >> the pfrt_anchor string. >> >>> 0000000000000b10 l O set_vnet 00000000000003d0 >>> vnet_entry_pf_default_rule >> Default rule. 
Rules potentially contain names, tag names, interface >> names, ? so it?s a large structure. >> >>> 0000000000001370 l O set_vnet 0000000000000690 >>> vnet_entry_pf_main_anchor >> Anchors use MAXPATHLEN for the anchor path, so that?s 1024 bytes >> right >> away. >> >>> 0000000000000000 l O set_vnet 0000000000000120 >>> vnet_entry_pf_status >>> >> pf status. Mostly counters. >> >> I?ll see about putting moving those into the heap on my todo list. > > Though that removes the current situation, it is a partial fix, > doesnt this static sized 2 page VNET_MODMIN needs to be fixed in the > longer term? I think about it the other way round: we might want to bump it to 4 pages in short term for 12.0 maybe? The problem is that whether or not you use modules these 2/4 pages will be allocated per-vnet, so if you run 50 vnet jails that’s 100/200 pages. And while people might say memory is cheap, I’ve run 10.000 vnet jails before on a single machine … it adds up. I wonder if we could make it a tunable though.. Let me quickly think about it and come up with a patch. I’ll also go and see to get better error reporting into the link_elf*.c files for this case. /bz