From owner-freebsd-pf@FreeBSD.ORG Mon Oct 14 00:59:34 2013
From: Darren Pilgrim <list_freebsd@bluerosetech.com>
Date: Sun, 13 Oct 2013 17:59:22 -0700
To: Uroš Gruber, freebsd-pf@freebsd.org
Subject: Re: PF rule question

On 10/9/2013 3:54 PM, Uroš Gruber wrote:
> Hi,
>
> I'm struggling to complete my pf firewall configuration with a bit more
> optimized rules.
>
> I have a few hundred jails set up on networks from 172.16.1.0 to 172.16.10.0.
>
> My goal is to deny access between jails, but allow a few exceptions; for
> example, all jails can connect to jails from 172.16.1.0 to 172.16.1.64.
>
> I've accomplished this with rules like
>
> pass on lo0 from $jailnet to 172.16.1.0/26
> pass on lo0 from 172.16.1.1 to 172.16.1.1
>
> I would like to know if there is a better way to write such rules, mostly
> because all these jails are very dynamic in terms of
> running, stopping/destroying etc., and IP aliases are also removed and
> added back continuously.

Use an anchor for the "pass on lo0 from X to X" rules and a table for the
jailnet.  Then have your jail provisioning scripts manipulate the table and
anchor as jails come up and down.

In /etc/pf.conf:

table <jailnet> persist
pass on lo0 from <jailnet> to 172.16.1.0/26
anchor "jails"

When bringing up a jail:

# pfctl -t jailnet -T add 192.0.2.65
# pfctl -a jails -f - <<<"pass on lo0 from 192.0.2.65 to 192.0.2.65"

When taking down a jail:

# pfctl -t jailnet -T delete 192.0.2.65
# pfctl -a jails -f - <<<"block on lo0 from 192.0.2.65 to 192.0.2.65"
# pfctl -k 192.0.2.65

You'll need to reload the table and anchor rules on a system restart.
You can do that with rules in /etc/pf.conf:

table <jailnet> persist file "/path/to/jailnet_address_list"
load anchor jails from "/path/to/jails_rules_list"

or directly using pfctl:

# pfctl -t jailnet -Ta -f /path/to/jailnet_address_list
# pfctl -a jails -f /path/to/jails_rules_list
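
As an aside, a minimal sketch of what such a provisioning hook could look
like, wrapping the pfctl calls above in one script (the script name,
argument convention, and example address are illustrative, not from the
thread):

#!/bin/sh
# jail-pf.sh -- add or remove one jail's address in the <jailnet> table
# and its self-pass rule in the "jails" anchor.  Note that
# "pfctl -a jails -f -" replaces the anchor's current contents, which is
# exactly the limitation discussed later in this thread.
action=$1
ip=$2
case $action in
up)
    pfctl -t jailnet -T add "$ip"
    echo "pass on lo0 from $ip to $ip" | pfctl -a jails -f -
    ;;
down)
    pfctl -t jailnet -T delete "$ip"
    echo "block on lo0 from $ip to $ip" | pfctl -a jails -f -
    pfctl -k "$ip"
    ;;
*)
    echo "usage: $0 up|down address" >&2
    exit 1
    ;;
esac

Invoked as, e.g., "jail-pf.sh up 192.0.2.65" from whatever starts the jail;
the echo pipeline is a portable equivalent of the <<< here-strings above.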

From owner-freebsd-pf@FreeBSD.ORG Mon Oct 14 01:02:53 2013
From: Rob Fraser <rob@logicalhosting.ca>
Date: Sun, 13 Oct 2013 19:02:47 -0600
To: Darren Pilgrim
Cc: freebsd-pf@freebsd.org
Subject: Re: PF rule question

Would this work?

block in on lo0 from lo0 to lo0
block out on lo0 from lo0 to lo0

--
Rob Fraser
rob@logicalhosting.ca
www.logicalhosting.ca

From owner-freebsd-pf@FreeBSD.ORG Mon Oct 14 01:04:33 2013
From: Darren Pilgrim <list_freebsd@bluerosetech.com>
Date: Sun, 13 Oct 2013 18:04:11 -0700
To: Rob Fraser
Cc: freebsd-pf@freebsd.org
Subject: Re: PF rule question

On 10/13/2013 6:02 PM, Rob Fraser wrote:
> Would this work?
>
> block in on lo0 from lo0 to lo0
> block out on lo0 from lo0 to lo0

That reduces to "block on lo0", which you almost certainly do not want on a
running system. :)
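
If it helps to see why, pfctl can parse a ruleset without loading it and
print back what it would install; a purely illustrative check (not from the
thread) is:

# printf 'block in on lo0 from lo0 to lo0\nblock out on lo0 from lo0 to lo0\n' | pfctl -nvf -

where -n only parses the rules and -v prints them back.  Per pf.conf(5), an
interface name used as an address stands for every address configured on
that interface, which on lo0 is effectively all loopback traffic.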

From owner-freebsd-pf@FreeBSD.ORG Mon Oct 14 11:06:53 2013
From: FreeBSD bugmaster
Date: Mon, 14 Oct 2013 11:06:53 GMT
To: freebsd-pf@FreeBSD.org
Subject: Current problem reports assigned to freebsd-pf@FreeBSD.org

Note: to view an individual PR, use:
  http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).

The following is a listing of current problems submitted by FreeBSD users.
These represent problem reports covering all versions including
experimental development code and obsolete releases.

S Tracker      Resp. Description
--------------------------------------------------------------------------------
o kern/182401  pf    [pf] pf state for some IPs reaches 4294967295 suspicou
o kern/182350  pf    [pf] core dump with packet filter -- pf_overlad_task
o kern/179392  pf    [pf] [ip6] Incorrect TCP checksums in rdr return packe
o kern/177810  pf    [pf] traffic dropped by accepting rules is not counted
o kern/177808  pf    [pf] [patch] route-to rule forwarding traffic inspite
o kern/176763  pf    [pf] [patch] Removing pf Source entries locks kernel.
o kern/176268  pf    [pf] [patch] synproxy not working with route-to
o kern/173659  pf    [pf] PF fatal trap on 9.1 (taskq fatal trap on pf_test
o bin/172888   pf    [patch] authpf(8) feature enhancement
o kern/172648  pf    [pf] [ip6]: 'scrub reassemble tcp' breaks IPv6 packet
o kern/171733  pf    [pf] PF problem with modulate state in [regression]
o kern/169630  pf    [pf] [patch] pf fragment reassembly of padded (undersi
o kern/168952  pf    [pf] direction scrub rules don't work
o kern/168190  pf    [pf] panic when using pf and route-to (maybe: bad frag
o kern/166336  pf    [pf] kern.securelevel 3 +pf reload
o kern/165315  pf    [pf] States never cleared in PF with DEVICE_POLLING
o kern/164402  pf    [pf] pf crashes with a particular set of rules when fi
o kern/164271  pf    [pf] not working pf nat on FreeBSD 9.0 [regression]
o kern/163208  pf    [pf] PF state key linking mismatch
o kern/160370  pf    [pf] Incorrect pfctl check of pf.conf
o kern/155736  pf    [pf] [altq] borrow from parent queue does not work wit
o kern/153307  pf    [pf] Bug with PF firewall
o kern/148290  pf    [pf] "sticky-address" option of Packet Filter (PF) blo
o kern/148260  pf    [pf] [patch] pf rdr incompatible with dummynet
o kern/147789  pf    [pf] Firewall PF no longer drops connections by sendin
o kern/143543  pf    [pf] [panic] PF route-to causes kernel panic
o bin/143504   pf    [patch] outgoing states are not killed by authpf(8)
o conf/142961  pf    [pf] No way to adjust pidfile in pflogd
o conf/142817  pf    [patch] etc/rc.d/pf: silence pfctl
o kern/141905  pf    [pf] [panic] pf kernel panic on 7.2-RELEASE with empty
o kern/140697  pf    [pf] pf behaviour changes - must be documented
o kern/137982  pf    [pf] when pf can hit state limits, random IP failures
o kern/136781  pf    [pf] Packets appear to drop with pf scrub and if_bridg
o kern/135948  pf    [pf] [gre] pf not natting gre protocol
o kern/134996  pf    [pf] Anchor tables not included when pfctl(8) is run w
o kern/133732  pf    [pf] max-src-conn issue
o conf/130381  pf    [rc.d] [pf] [ip6] ipv6 not fully configured when pf st
o kern/127920  pf    [pf] ipv6 and synproxy don't play well together
o conf/127814  pf    [pf] The flush in pf_reload in /etc/rc.d/pf does not w
o kern/127121  pf    [pf] [patch] pf incorrect log priority
o kern/127042  pf    [pf] [patch] pf recursion panic if interface group is
o kern/125467  pf    [pf] pf keep state bug while handling sessions between
s kern/124933  pf    [pf] [ip6] pf does not support (drops) IPv6 fragmented
o kern/122773  pf    [pf] pf doesn't log uid or pid when configured to
o kern/122014  pf    [pf] [panic] FreeBSD 6.2 panic in pf
o kern/120281  pf    [pf] [request] lost returning packets to PF for a rdr
o kern/120057  pf    [pf] [patch] Allow proper settings of ALTQ_HFSC. The c
o bin/118355   pf    [pf] [patch] pfctl(8) help message options order false
o kern/114567  pf    [pf] [lor] pf_ioctl.c + if.c
o kern/103283  pf    pfsync fails to sucessfully transfer some sessions
o kern/93825   pf    [pf] pf reply-to doesn't work
o sparc/93530  pf    [pf] Incorrect checksums when using pf's route-to on s
o kern/92949   pf    [pf] PF + ALTQ problems with latency
o kern/87074   pf    [pf] pf does not log dropped packets when max-* statef
a kern/86752   pf    [pf] pf does not use default timeouts when reloading c
o bin/86635    pf    [patch] pfctl(8): allow new page character (^L) in pf.
o kern/82271   pf    [pf] cbq scheduler cause bad latency

57 problems total.

From owner-freebsd-pf@FreeBSD.ORG Mon Oct 14 11:09:29 2013
From: mm <mm@FreeBSD.org>
Date: Mon, 14 Oct 2013 04:09:28 -0700 (PDT)
To: freebsd-pf@freebsd.org
Subject: Re: De-virtualize V_pf_mtag_z to eliminate kernel panics.

I confirm the panic - it reduces my FreeBSD 10-BETA1 test system to an
uptime of at most 5 minutes and renders the system unusable.  I consider
this a serious bug for 10.

Here is an updated version of Craig's patch to devirtualize V_pf_mtag_z:

http://people.freebsd.org/~mm/patches/pf_mtag.patch
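
For anyone wanting to try the patch on a test box, a rough sequence (the
download location, patch strip level, and kernel config name are
assumptions, not from the message) would be:

# fetch -o /tmp/pf_mtag.patch http://people.freebsd.org/~mm/patches/pf_mtag.patch
# cd /usr/src
# patch -C < /tmp/pf_mtag.patch
# patch < /tmp/pf_mtag.patch
# make buildkernel installkernel KERNCONF=VIMAGE

The patch -C invocation is only a dry run to confirm the diff applies
(adjust the -p strip level if the paths do not line up), and KERNCONF=VIMAGE
is just an example name for a VIMAGE-enabled kernel config; reboot into the
new kernel afterwards.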

From owner-freebsd-pf@FreeBSD.ORG Mon Oct 14 20:20:54 2013
From: Uroš Gruber <uros.gruber@gmail.com>
Date: Mon, 14 Oct 2013 22:20:53 +0200
To: Darren Pilgrim
Cc: freebsd-pf@freebsd.org
Subject: Re: PF rule question

Hi Darren,

I thought about anchors and also did some tests with them.  But the problem
I'm seeing is that I need to get a list of all rules for all active jails
when starting or stopping a jail.  At least I don't see a way to add or
remove a rule from an anchor except by replacing all of the anchor's rules.

Am I missing something here, or was that your idea?

Regards

Uros

From owner-freebsd-pf@FreeBSD.ORG Mon Oct 14 20:30:07 2013
From: Uroš Gruber <uros.gruber@gmail.com>
Date: Mon, 14 Oct 2013 22:30:06 +0200
To: Darren Pilgrim
Cc: freebsd-pf@freebsd.org
Subject: Re: PF rule question

Ok, one way of doing it is something like this:

( pfctl -a jails -sr ; echo "pass on lo0 from 192.0.2.65 to 192.0.2.65" ) | pfctl -a jails -f -

But still, that's only for adding the rule to the anchor.  I need to work on
something for deleting the rule :)

Regards

Uros
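
A similarly rough counterpart for removal, in the same spirit as the append
trick above (the address is the example one used earlier in the thread; this
is only a sketch, not something posted by anyone):

# pfctl -a jails -sr | grep -vw 192.0.2.65 | pfctl -a jails -f -
# pfctl -k 192.0.2.65

i.e. dump the anchor, drop every rule mentioning the departing jail's
address, reload the anchor, and kill that address's states.  It is fragile
if the same address can appear in unrelated rules.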

From owner-freebsd-pf@FreeBSD.ORG Mon Oct 14 20:50:15 2013
From: Kimmo Paasiala <kpaasial@gmail.com>
Date: Mon, 14 Oct 2013 23:50:14 +0300
To: Uroš Gruber
Cc: Darren Pilgrim, freebsd-pf@freebsd.org
Subject: Re: PF rule question

On Mon, Oct 14, 2013 at 11:30 PM, Uroš Gruber wrote:
> But still, that's only for adding the rule to the anchor.  I need to work
> on something for deleting the rule :)

You flush rules under an anchor like this:

pfctl -a anchor -F rules

-Kimmo
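
Another option that avoids rewriting or flushing a shared anchor at all is
to give each jail its own sub-anchor below "jails" and let pf.conf evaluate
them with a wildcard.  This is only a sketch on top of the setup discussed
above, and the sub-anchor name "jail65" is made up:

In /etc/pf.conf, instead of a single anchor:

anchor "jails/*"

Jail start then loads only that jail's sub-anchor, and jail stop flushes
only it:

# echo "pass on lo0 from 192.0.2.65 to 192.0.2.65" | pfctl -a "jails/jail65" -f -
# pfctl -a "jails/jail65" -F rules
# pfctl -k 192.0.2.65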

From owner-freebsd-pf@FreeBSD.ORG Tue Oct 15 01:54:23 2013
From: J David
Date: Mon, 14 Oct 2013 21:54:22 -0400
To: Daniel Ballenger
Cc: freebsd-pf@freebsd.org
Subject: Re: pf deadly slow

On Fri, Oct 4, 2013 at 6:20 PM, Daniel Ballenger wrote:
> For what it's worth, I'm running FreeBSD 9.2-RELEASE on top of Proxmox with
> the virtio network driver and don't have this issue (easily pushes over
> 100Mbps, doing over 60Mbps at the moment)

Have a look at your /var/log/kern.log, /var/log/messages, and
/var/log/syslog on the Proxmox host.  If you enable TSO and PF on a FreeBSD
KVM virtual machine with virtio network drivers, PF will silently disable
checksum offloading, and the Linux side will complain about improper
checksums with complete kernel stack traces *on* *every* *packet*, to each
file.  Your network throughput then becomes a function of your disk I/O
performance. :-/

The Proxmox guys were going to patch out the log message, but even if they
have now done that, it only masks the problem; it doesn't solve it.

It does sound like you may have this issue, because before a local kernel
hack we were also seeing about 100Mbps with TSO enabled.  Post-hack
performance is approaching 700Mbps.  Not bad for a virtual machine.
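
A less invasive workaround than a kernel hack, if the log storm rather than
TSO itself is the concern, is simply to turn TSO off on the guest's virtio
interface (vtnet0 here is an assumption, not from the message):

# ifconfig vtnet0 -tso

and, to make it persistent, something like ifconfig_vtnet0="DHCP -tso" in
/etc/rc.conf.  That gives up whatever throughput TSO was buying, but should
avoid the per-packet checksum complaints on the host.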

From owner-freebsd-pf@FreeBSD.ORG Wed Oct 16 07:16:45 2013
From: Martin Matuska <mm@FreeBSD.org>
Date: Wed, 16 Oct 2013 09:16:43 +0200
To: Gleb Smirnoff, Marco Zec
Cc: Adrian Chadd, freebsd-pf@FreeBSD.org
Subject: VIMAGE + PF crashes - possible solutions

Hi,

I have encountered the same mtag panic Craig had with VIMAGE + PF and have
reported it in PR 182964:
http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/182964

Here are two possible solutions I would like to discuss; both make the
panic go away:

1.) de-virtualize the variable as Marco suggested; this solution is a more
intrusive change to pf.c:
http://people.freebsd.org/~mm/patches/pf_mtag.patch

2.) add vnet context to struct m_tag; this is less intrusive to pf.c and
the uma zone remains virtualized:
http://people.freebsd.org/~mm/patches/pf_mtag.2.patch

Which of the approaches should we take, or is this to be solved in a
completely different way?

Anyway, after patching I triggered another panic, this time caused by a
missing vnet context in the pf overload task queue.  I have discussed a
solution for this one with Gleb and he committed it in r256587:
http://svnweb.freebsd.org/base?view=revision&revision=256587

With both patches applied my VIMAGE + PF system runs stable.

Thanks,
mm
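
For anyone trying to reproduce reports like this, a rough way to exercise
pf inside a vnet on a VIMAGE kernel is to start a vnet jail and enable pf
in it; the jail name, address, and trivial ruleset below are made up for
illustration, and whether this alone reproduces the panic depends on the
setup described in the PR:

# jail -c name=pftest vnet persist
# jexec pftest ifconfig lo0 127.0.0.1/8 up
# echo "pass all" | jexec pftest pfctl -e -f -

Here pfctl -e enables pf in that jail's vnet and -f - loads a trivial
ruleset from stdin.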

From owner-freebsd-pf@FreeBSD.ORG Wed Oct 16 07:28:40 2013
From: Gleb Smirnoff <glebius@FreeBSD.org>
Date: Wed, 16 Oct 2013 11:28:28 +0400
To: Martin Matuska
Cc: Adrian Chadd, Marco Zec, freebsd-pf@FreeBSD.org
Subject: Re: VIMAGE + PF crashes - possible solutions

On Wed, Oct 16, 2013 at 09:16:43AM +0200, Martin Matuska wrote:
M> Which of the approaches should we take, or is this to be solved in a
M> completely different way?

As I already said, both patches look incorrect.

The correct way is to separate the things we want to be global from the
things we want to be virtualized, and then set up the latter from a
VNET_SYSINIT().  There is work in progress on that by Nikos.
Unfortunately, he is now very busy with real life and his hacking is on
hiatus.  You can find his WIP patches in the attachments.  Although they
aren't finished, the approach is correct.

I also checked in some of his code into projects/pf:

http://svnweb.freebsd.org/base?view=revision&revision=251993

--
Totus tuus, Glebius.
--bFsKbPszpzYNtEU6 Content-Type: text/x-diff; charset=us-ascii Content-Disposition: attachment; filename="pf.diff" Index: sys/net/pfvar.h =================================================================== --- sys/net/pfvar.h (revision 251294) +++ sys/net/pfvar.h (working copy) @@ -901,7 +901,6 @@ struct pf_ruleset *, struct pf_pdesc *, int); extern pflog_packet_t *pflog_packet_ptr; -#define V_pf_end_threads VNET(pf_end_threads) #endif /* _KERNEL */ #define PFSYNC_FLAG_SRCNODE 0x04 Index: sys/netpfil/pf/pf.c =================================================================== --- sys/netpfil/pf/pf.c (revision 251294) +++ sys/netpfil/pf/pf.c (working copy) @@ -300,8 +300,6 @@ int in4_cksum(struct mbuf *m, u_int8_t nxt, int off, int len); -VNET_DECLARE(int, pf_end_threads); - VNET_DEFINE(struct pf_limit, pf_limits[PF_LIMIT_MAX]); #define PACKET_LOOPED(pd) ((pd)->pf_mtag && \ @@ -359,15 +357,13 @@ SYSCTL_NODE(_net, OID_AUTO, pf, CTLFLAG_RW, 0, "pf(4)"); -VNET_DEFINE(u_long, pf_hashsize); -#define V_pf_hashsize VNET(pf_hashsize) -SYSCTL_VNET_UINT(_net_pf, OID_AUTO, states_hashsize, CTLFLAG_RDTUN, - &VNET_NAME(pf_hashsize), 0, "Size of pf(4) states hashtable"); +u_long pf_hashsize; +SYSCTL_UINT(_net_pf, OID_AUTO, states_hashsize, CTLFLAG_RDTUN, + &pf_hashsize, 0, "Size of pf(4) states hashtable"); -VNET_DEFINE(u_long, pf_srchashsize); -#define V_pf_srchashsize VNET(pf_srchashsize) -SYSCTL_VNET_UINT(_net_pf, OID_AUTO, source_nodes_hashsize, CTLFLAG_RDTUN, - &VNET_NAME(pf_srchashsize), 0, "Size of pf(4) source nodes hashtable"); +u_long pf_srchashsize; +SYSCTL_UINT(_net_pf, OID_AUTO, source_nodes_hashsize, CTLFLAG_RDTUN, + &pf_srchashsize, 0, "Size of pf(4) source nodes hashtable"); VNET_DEFINE(void *, pf_swi_cookie); @@ -698,12 +694,12 @@ struct pf_srchash *sh; u_int i; - TUNABLE_ULONG_FETCH("net.pf.states_hashsize", &V_pf_hashsize); - if (V_pf_hashsize == 0 || !powerof2(V_pf_hashsize)) - V_pf_hashsize = PF_HASHSIZ; - TUNABLE_ULONG_FETCH("net.pf.source_nodes_hashsize", &V_pf_srchashsize); - if (V_pf_srchashsize == 0 || !powerof2(V_pf_srchashsize)) - V_pf_srchashsize = PF_HASHSIZ / 4; + TUNABLE_ULONG_FETCH("net.pf.states_hashsize", &pf_hashsize); + if (pf_hashsize == 0 || !powerof2(pf_hashsize)) + pf_hashsize = PF_HASHSIZ; + TUNABLE_ULONG_FETCH("net.pf.source_nodes_hashsize", &pf_srchashsize); + if (pf_srchashsize == 0 || !powerof2(pf_srchashsize)) + pf_srchashsize = PF_HASHSIZ / 4; V_pf_hashseed = arc4random(); @@ -717,11 +713,11 @@ V_pf_state_key_z = uma_zcreate("pf state keys", sizeof(struct pf_state_key), pf_state_key_ctor, NULL, NULL, NULL, UMA_ALIGN_PTR, 0); - V_pf_keyhash = malloc(V_pf_hashsize * sizeof(struct pf_keyhash), + V_pf_keyhash = malloc(pf_hashsize * sizeof(struct pf_keyhash), M_PFHASH, M_WAITOK | M_ZERO); - V_pf_idhash = malloc(V_pf_hashsize * sizeof(struct pf_idhash), + V_pf_idhash = malloc(pf_hashsize * sizeof(struct pf_idhash), M_PFHASH, M_WAITOK | M_ZERO); - V_pf_hashmask = V_pf_hashsize - 1; + V_pf_hashmask = pf_hashsize - 1; for (i = 0, kh = V_pf_keyhash, ih = V_pf_idhash; i <= V_pf_hashmask; i++, kh++, ih++) { mtx_init(&kh->lock, "pf_keyhash", NULL, MTX_DEF); @@ -735,9 +731,9 @@ V_pf_limits[PF_LIMIT_SRC_NODES].zone = V_pf_sources_z; uma_zone_set_max(V_pf_sources_z, PFSNODE_HIWAT); uma_zone_set_warning(V_pf_sources_z, "PF source nodes limit reached"); - V_pf_srchash = malloc(V_pf_srchashsize * sizeof(struct pf_srchash), + V_pf_srchash = malloc(pf_srchashsize * sizeof(struct pf_srchash), M_PFHASH, M_WAITOK|M_ZERO); - V_pf_srchashmask = V_pf_srchashsize - 1; + 
V_pf_srchashmask = pf_srchashsize - 1; for (i = 0, sh = V_pf_srchash; i <= V_pf_srchashmask; i++, sh++) mtx_init(&sh->lock, "pf_srchash", NULL, MTX_DEF); @@ -757,13 +753,17 @@ STAILQ_INIT(&V_pf_sendqueue); SLIST_INIT(&V_pf_overloadqueue); TASK_INIT(&V_pf_overloadtask, 0, pf_overload_task, &V_pf_overloadqueue); - mtx_init(&pf_sendqueue_mtx, "pf send queue", NULL, MTX_DEF); - mtx_init(&pf_overloadqueue_mtx, "pf overload/flush queue", NULL, - MTX_DEF); + if (IS_DEFAULT_VNET(curvnet)) { + mtx_init(&pf_sendqueue_mtx, "pf send queue", NULL, MTX_DEF); + mtx_init(&pf_overloadqueue_mtx, "pf overload/flush queue", NULL, + MTX_DEF); + } /* Unlinked, but may be referenced rules. */ TAILQ_INIT(&V_pf_unlinked_rules); - mtx_init(&pf_unlnkdrules_mtx, "pf unlinked rules", NULL, MTX_DEF); + if (IS_DEFAULT_VNET(curvnet)) + mtx_init(&pf_unlnkdrules_mtx, "pf unlinked rules", NULL, MTX_DEF); + } void @@ -1309,68 +1309,35 @@ pf_purge_thread(void *v) { u_int idx = 0; + VNET_ITERATOR_DECL(vnet_iter); - CURVNET_SET((struct vnet *)v); - for (;;) { - PF_RULES_RLOCK(); - rw_sleep(pf_purge_thread, &pf_rules_lock, 0, "pftm", hz / 10); + tsleep(pf_purge_thread, PWAIT, "pftm", hz / 10); + VNET_LIST_RLOCK(); + VNET_FOREACH(vnet_iter) { + CURVNET_SET(vnet_iter); - if (V_pf_end_threads) { - /* - * To cleanse up all kifs and rules we need - * two runs: first one clears reference flags, - * then pf_purge_expired_states() doesn't - * raise them, and then second run frees. - */ - PF_RULES_RUNLOCK(); - pf_purge_unlinked_rules(); - pfi_kif_purge(); - - /* - * Now purge everything. - */ - pf_purge_expired_states(0, V_pf_hashmask); - pf_purge_expired_fragments(); - pf_purge_expired_src_nodes(); - - /* - * Now all kifs & rules should be unreferenced, - * thus should be successfully freed. - */ - pf_purge_unlinked_rules(); - pfi_kif_purge(); - - /* - * Announce success and exit. - */ - PF_RULES_RLOCK(); - V_pf_end_threads++; - PF_RULES_RUNLOCK(); - wakeup(pf_purge_thread); - kproc_exit(0); - } - PF_RULES_RUNLOCK(); - /* Process 1/interval fraction of the state table every run. */ idx = pf_purge_expired_states(idx, V_pf_hashmask / - (V_pf_default_rule.timeout[PFTM_INTERVAL] * 10)); + (V_pf_default_rule.timeout[PFTM_INTERVAL] * 10)); /* Purge other expired types every PFTM_INTERVAL seconds. 
*/ if (idx == 0) { - /* - * Order is important: - * - states and src nodes reference rules - * - states and rules reference kifs - */ - pf_purge_expired_fragments(); - pf_purge_expired_src_nodes(); - pf_purge_unlinked_rules(); - pfi_kif_purge(); + /* + * Order is important: + * - states and src nodes reference rules + * - states and rules reference kifs + */ + pf_purge_expired_fragments(); + pf_purge_expired_src_nodes(); + pf_purge_unlinked_rules(); + pfi_kif_purge(); } + CURVNET_RESTORE(); + } + VNET_LIST_RUNLOCK(); } /* not reached */ - CURVNET_RESTORE(); } u_int32_t Index: sys/netpfil/pf/pf_if.c =================================================================== --- sys/netpfil/pf/pf_if.c (revision 251294) +++ sys/netpfil/pf/pf_if.c (working copy) @@ -110,7 +110,8 @@ V_pfi_buffer = malloc(V_pfi_buffer_max * sizeof(*V_pfi_buffer), PFI_MTYPE, M_WAITOK); - mtx_init(&pfi_unlnkdkifs_mtx, "pf unlinked interfaces", NULL, MTX_DEF); + if (IS_DEFAULT_VNET(curvnet)) + mtx_init(&pfi_unlnkdkifs_mtx, "pf unlinked interfaces", NULL, MTX_DEF); kif = malloc(sizeof(*kif), PFI_MTYPE, M_WAITOK); PF_RULES_WLOCK(); @@ -124,18 +125,20 @@ pfi_attach_ifnet(ifp); IFNET_RUNLOCK(); - pfi_attach_cookie = EVENTHANDLER_REGISTER(ifnet_arrival_event, - pfi_attach_ifnet_event, NULL, EVENTHANDLER_PRI_ANY); - pfi_detach_cookie = EVENTHANDLER_REGISTER(ifnet_departure_event, - pfi_detach_ifnet_event, NULL, EVENTHANDLER_PRI_ANY); - pfi_attach_group_cookie = EVENTHANDLER_REGISTER(group_attach_event, - pfi_attach_group_event, curvnet, EVENTHANDLER_PRI_ANY); - pfi_change_group_cookie = EVENTHANDLER_REGISTER(group_change_event, - pfi_change_group_event, curvnet, EVENTHANDLER_PRI_ANY); - pfi_detach_group_cookie = EVENTHANDLER_REGISTER(group_detach_event, - pfi_detach_group_event, curvnet, EVENTHANDLER_PRI_ANY); - pfi_ifaddr_event_cookie = EVENTHANDLER_REGISTER(ifaddr_event, - pfi_ifaddr_event, NULL, EVENTHANDLER_PRI_ANY); + if (IS_DEFAULT_VNET(curvnet)) { + pfi_attach_cookie = EVENTHANDLER_REGISTER(ifnet_arrival_event, + pfi_attach_ifnet_event, NULL, EVENTHANDLER_PRI_ANY); + pfi_detach_cookie = EVENTHANDLER_REGISTER(ifnet_departure_event, + pfi_detach_ifnet_event, NULL, EVENTHANDLER_PRI_ANY); + pfi_attach_group_cookie = EVENTHANDLER_REGISTER(group_attach_event, + pfi_attach_group_event, curvnet, EVENTHANDLER_PRI_ANY); + pfi_change_group_cookie = EVENTHANDLER_REGISTER(group_change_event, + pfi_change_group_event, curvnet, EVENTHANDLER_PRI_ANY); + pfi_detach_group_cookie = EVENTHANDLER_REGISTER(group_detach_event, + pfi_detach_group_event, curvnet, EVENTHANDLER_PRI_ANY); + pfi_ifaddr_event_cookie = EVENTHANDLER_REGISTER(ifaddr_event, + pfi_ifaddr_event, NULL, EVENTHANDLER_PRI_ANY); + } } void Index: sys/netpfil/pf/pf_ioctl.c =================================================================== --- sys/netpfil/pf/pf_ioctl.c (revision 251294) +++ sys/netpfil/pf/pf_ioctl.c (working copy) @@ -183,7 +183,6 @@ static volatile VNET_DEFINE(int, pf_pfil_hooked); #define V_pf_pfil_hooked VNET(pf_pfil_hooked) -VNET_DEFINE(int, pf_end_threads); struct rwlock pf_rules_lock; @@ -254,10 +253,13 @@ /* XXX do our best to avoid a conflict */ V_pf_status.hostid = arc4random(); - if ((error = kproc_create(pf_purge_thread, curvnet, NULL, 0, 0, - "pf purge")) != 0) - /* XXXGL: leaked all above. */ - return (error); + if (IS_DEFAULT_VNET(curvnet)) { + if ((error = kproc_create(pf_purge_thread, curvnet, NULL, 0, 0, + "pf purge")) != 0) { + /* XXXGL: leaked all above. 
*/ + return (error); + } + } if ((error = swi_add(NULL, "pf send", pf_intr, curvnet, SWI_NET, INTR_MPSAFE, &V_pf_swi_cookie)) != 0) /* XXXGL: leaked all above. */ @@ -3631,24 +3633,22 @@ static int pf_load(void) { - int error; - VNET_ITERATOR_DECL(vnet_iter); + rw_init(&pf_rules_lock, "pf rulesets"); + pf_dev = make_dev(&pf_cdevsw, 0, 0, 0, 0600, PF_NAME); - VNET_LIST_RLOCK(); - VNET_FOREACH(vnet_iter) { - CURVNET_SET(vnet_iter); - V_pf_pfil_hooked = 0; - V_pf_end_threads = 0; - TAILQ_INIT(&V_pf_tags); - TAILQ_INIT(&V_pf_qids); - CURVNET_RESTORE(); - } - VNET_LIST_RUNLOCK(); + return (0); +} - rw_init(&pf_rules_lock, "pf rulesets"); +static int +vnet_pf_init(void) +{ + int error; - pf_dev = make_dev(&pf_cdevsw, 0, 0, 0, 0600, PF_NAME); + V_pf_pfil_hooked = 0; + TAILQ_INIT(&V_pf_tags); + TAILQ_INIT(&V_pf_qids); + if ((error = pfattach()) != 0) return (error); @@ -3676,11 +3676,6 @@ } PF_RULES_WLOCK(); shutdown_pf(); - V_pf_end_threads = 1; - while (V_pf_end_threads < 2) { - wakeup_one(pf_purge_thread); - rw_sleep(pf_purge_thread, &pf_rules_lock, 0, "pftmo", 0); - } pf_normalize_cleanup(); pfi_cleanup(); pfr_cleanup(); @@ -3727,3 +3722,6 @@ DECLARE_MODULE(pf, pf_mod, SI_SUB_PSEUDO, SI_ORDER_FIRST); MODULE_VERSION(pf, PF_MODVER); + +VNET_SYSINIT(vnet_pf_init, SI_SUB_PROTO_IFATTACHDOMAIN, SI_ORDER_ANY - 255, + vnet_pf_init, NULL); Index: sys/netpfil/pf/pf_norm.c =================================================================== --- sys/netpfil/pf/pf_norm.c (revision 251294) +++ sys/netpfil/pf/pf_norm.c (working copy) @@ -163,7 +163,8 @@ uma_zone_set_max(V_pf_frent_z, PFFRAG_FRENT_HIWAT); uma_zone_set_warning(V_pf_frent_z, "PF frag entries limit reached"); - mtx_init(&pf_frag_mtx, "pf fragments", NULL, MTX_DEF); + if (IS_DEFAULT_VNET(curvnet)) + mtx_init(&pf_frag_mtx, "pf fragments", NULL, MTX_DEF); TAILQ_INIT(&V_pf_fragqueue); TAILQ_INIT(&V_pf_cachequeue); Index: sys/netpfil/pf/pf_table.c =================================================================== --- sys/netpfil/pf/pf_table.c (revision 251294) +++ sys/netpfil/pf/pf_table.c (working copy) @@ -183,10 +183,14 @@ static RB_PROTOTYPE(pfr_ktablehead, pfr_ktable, pfrkt_tree, pfr_ktable_compare); static RB_GENERATE(pfr_ktablehead, pfr_ktable, pfrkt_tree, pfr_ktable_compare); -struct pfr_ktablehead pfr_ktables; +VNET_DEFINE(struct pfr_ktablehead, pfr_ktables); +#define V_pfr_ktables VNET(pfr_ktables) + struct pfr_table pfr_nulltable; -int pfr_ktable_cnt; +VNET_DEFINE(int, pfr_ktable_cnt); +#define V_pfr_ktable_cnt VNET(pfr_ktable_cnt) + void pfr_initialize(void) { @@ -1082,7 +1086,7 @@ return (ENOENT); SLIST_INIT(&workq); - RB_FOREACH(p, pfr_ktablehead, &pfr_ktables) { + RB_FOREACH(p, pfr_ktablehead, &V_pfr_ktables) { if (pfr_skip_table(filter, p, flags)) continue; if (!strcmp(p->pfrkt_anchor, PF_RESERVED_ANCHOR)) @@ -1117,7 +1121,7 @@ flags & PFR_FLAG_USERIOCTL)) senderr(EINVAL); key.pfrkt_flags |= PFR_TFLAG_ACTIVE; - p = RB_FIND(pfr_ktablehead, &pfr_ktables, &key); + p = RB_FIND(pfr_ktablehead, &V_pfr_ktables, &key); if (p == NULL) { p = pfr_create_ktable(&key.pfrkt_t, tzero, 1); if (p == NULL) @@ -1133,7 +1137,7 @@ /* find or create root table */ bzero(key.pfrkt_anchor, sizeof(key.pfrkt_anchor)); - r = RB_FIND(pfr_ktablehead, &pfr_ktables, &key); + r = RB_FIND(pfr_ktablehead, &V_pfr_ktables, &key); if (r != NULL) { p->pfrkt_root = r; goto _skip; @@ -1189,7 +1193,7 @@ if (pfr_validate_table(&key.pfrkt_t, 0, flags & PFR_FLAG_USERIOCTL)) return (EINVAL); - p = RB_FIND(pfr_ktablehead, &pfr_ktables, &key); + p = RB_FIND(pfr_ktablehead, 
&V_pfr_ktables, &key); if (p != NULL && (p->pfrkt_flags & PFR_TFLAG_ACTIVE)) { SLIST_FOREACH(q, &workq, pfrkt_workq) if (!pfr_ktable_compare(p, q)) @@ -1228,7 +1232,7 @@ *size = n; return (0); } - RB_FOREACH(p, pfr_ktablehead, &pfr_ktables) { + RB_FOREACH(p, pfr_ktablehead, &V_pfr_ktables) { if (pfr_skip_table(filter, p, flags)) continue; if (n-- <= 0) @@ -1263,7 +1267,7 @@ return (0); } SLIST_INIT(&workq); - RB_FOREACH(p, pfr_ktablehead, &pfr_ktables) { + RB_FOREACH(p, pfr_ktablehead, &V_pfr_ktables) { if (pfr_skip_table(filter, p, flags)) continue; if (n-- <= 0) @@ -1295,7 +1299,7 @@ bcopy(tbl + i, &key.pfrkt_t, sizeof(key.pfrkt_t)); if (pfr_validate_table(&key.pfrkt_t, 0, 0)) return (EINVAL); - p = RB_FIND(pfr_ktablehead, &pfr_ktables, &key); + p = RB_FIND(pfr_ktablehead, &V_pfr_ktables, &key); if (p != NULL) { SLIST_INSERT_HEAD(&workq, p, pfrkt_workq); xzero++; @@ -1327,7 +1331,7 @@ if (pfr_validate_table(&key.pfrkt_t, 0, flags & PFR_FLAG_USERIOCTL)) return (EINVAL); - p = RB_FIND(pfr_ktablehead, &pfr_ktables, &key); + p = RB_FIND(pfr_ktablehead, &V_pfr_ktables, &key); if (p != NULL && (p->pfrkt_flags & PFR_TFLAG_ACTIVE)) { p->pfrkt_nflags = (p->pfrkt_flags | setflag) & ~clrflag; @@ -1369,7 +1373,7 @@ if (rs == NULL) return (ENOMEM); SLIST_INIT(&workq); - RB_FOREACH(p, pfr_ktablehead, &pfr_ktables) { + RB_FOREACH(p, pfr_ktablehead, &V_pfr_ktables) { if (!(p->pfrkt_flags & PFR_TFLAG_INACTIVE) || pfr_skip_table(trs, p, 0)) continue; @@ -1414,7 +1418,7 @@ return (EBUSY); tbl->pfrt_flags |= PFR_TFLAG_INACTIVE; SLIST_INIT(&tableq); - kt = RB_FIND(pfr_ktablehead, &pfr_ktables, (struct pfr_ktable *)tbl); + kt = RB_FIND(pfr_ktablehead, &V_pfr_ktables, (struct pfr_ktable *)tbl); if (kt == NULL) { kt = pfr_create_ktable(tbl, 0, 1); if (kt == NULL) @@ -1427,7 +1431,7 @@ /* find or create root table */ bzero(&key, sizeof(key)); strlcpy(key.pfrkt_name, tbl->pfrt_name, sizeof(key.pfrkt_name)); - rt = RB_FIND(pfr_ktablehead, &pfr_ktables, &key); + rt = RB_FIND(pfr_ktablehead, &V_pfr_ktables, &key); if (rt != NULL) { kt->pfrkt_root = rt; goto _skip; @@ -1504,7 +1508,7 @@ if (rs == NULL || !rs->topen || ticket != rs->tticket) return (0); SLIST_INIT(&workq); - RB_FOREACH(p, pfr_ktablehead, &pfr_ktables) { + RB_FOREACH(p, pfr_ktablehead, &V_pfr_ktables) { if (!(p->pfrkt_flags & PFR_TFLAG_INACTIVE) || pfr_skip_table(trs, p, 0)) continue; @@ -1540,7 +1544,7 @@ return (EBUSY); SLIST_INIT(&workq); - RB_FOREACH(p, pfr_ktablehead, &pfr_ktables) { + RB_FOREACH(p, pfr_ktablehead, &V_pfr_ktables) { if (!(p->pfrkt_flags & PFR_TFLAG_INACTIVE) || pfr_skip_table(trs, p, 0)) continue; @@ -1686,7 +1690,7 @@ PF_RULES_ASSERT(); if (flags & PFR_FLAG_ALLRSETS) - return (pfr_ktable_cnt); + return (V_pfr_ktable_cnt); if (filter->pfrt_anchor[0]) { rs = pf_find_ruleset(filter->pfrt_anchor); return ((rs != NULL) ? 
rs->tables : -1); @@ -1719,8 +1723,8 @@ PF_RULES_WASSERT(); - RB_INSERT(pfr_ktablehead, &pfr_ktables, kt); - pfr_ktable_cnt++; + RB_INSERT(pfr_ktablehead, &V_pfr_ktables, kt); + V_pfr_ktable_cnt++; if (kt->pfrkt_root != NULL) if (!kt->pfrkt_root->pfrkt_refcnt[PFR_REFCNT_ANCHOR]++) pfr_setflags_ktable(kt->pfrkt_root, @@ -1751,14 +1755,14 @@ if (!(newf & PFR_TFLAG_ACTIVE)) newf &= ~PFR_TFLAG_USRMASK; if (!(newf & PFR_TFLAG_SETMASK)) { - RB_REMOVE(pfr_ktablehead, &pfr_ktables, kt); + RB_REMOVE(pfr_ktablehead, &V_pfr_ktables, kt); if (kt->pfrkt_root != NULL) if (!--kt->pfrkt_root->pfrkt_refcnt[PFR_REFCNT_ANCHOR]) pfr_setflags_ktable(kt->pfrkt_root, kt->pfrkt_root->pfrkt_flags & ~PFR_TFLAG_REFDANCHOR); pfr_destroy_ktable(kt, 1); - pfr_ktable_cnt--; + V_pfr_ktable_cnt--; return; } if (!(newf & PFR_TFLAG_ACTIVE) && kt->pfrkt_cnt) { @@ -1883,7 +1887,7 @@ pfr_lookup_table(struct pfr_table *tbl) { /* struct pfr_ktable start like a struct pfr_table */ - return (RB_FIND(pfr_ktablehead, &pfr_ktables, + return (RB_FIND(pfr_ktablehead, &V_pfr_ktables, (struct pfr_ktable *)tbl)); } --bFsKbPszpzYNtEU6 Content-Type: text/plain; charset=us-ascii Content-Disposition: attachment; filename="pf.diff.txt" Index: sys/net/pfvar.h =================================================================== --- sys/net/pfvar.h (revision 252856) +++ sys/net/pfvar.h (working copy) @@ -1701,6 +1701,11 @@ VNET_DECLARE(struct pf_rulequeue, pf_unlinked_rule void pf_initialize(void); void pf_cleanup(void); +void pf_overloadqueue_mtx_init(void); +void pf_sendqueue_mtx_init(void); +void pfi_unlnkdkifs_mtx_init(void); +void pf_unlnkdrules_mtx_init(void); +void pf_frag_mtx_init(void); struct pf_mtag *pf_get_mtag(struct mbuf *); Index: sys/netpfil/pf/pf.c =================================================================== --- sys/netpfil/pf/pf.c (revision 252856) +++ sys/netpfil/pf/pf.c (working copy) @@ -755,16 +755,34 @@ pf_initialize() STAILQ_INIT(&V_pf_sendqueue); SLIST_INIT(&V_pf_overloadqueue); TASK_INIT(&V_pf_overloadtask, 0, pf_overload_task, &V_pf_overloadqueue); - mtx_init(&pf_sendqueue_mtx, "pf send queue", NULL, MTX_DEF); - mtx_init(&pf_overloadqueue_mtx, "pf overload/flush queue", NULL, - MTX_DEF); /* Unlinked, but may be referenced rules. 
*/ TAILQ_INIT(&V_pf_unlinked_rules); +} + +void +pf_unlnkdrules_mtx_init() +{ + mtx_init(&pf_unlnkdrules_mtx, "pf unlinked rules", NULL, MTX_DEF); } void +pf_overloadqueue_mtx_init() +{ + + mtx_init(&pf_overloadqueue_mtx, "pf overload/flush queue", NULL, + MTX_DEF); +} + +void +pf_sendqueue_mtx_init() +{ + + mtx_init(&pf_sendqueue_mtx, "pf send queue", NULL, MTX_DEF); +} + +void pf_cleanup() { struct pf_keyhash *kh; Index: sys/netpfil/pf/pf_if.c =================================================================== --- sys/netpfil/pf/pf_if.c (revision 252856) +++ sys/netpfil/pf/pf_if.c (working copy) @@ -100,6 +100,13 @@ static VNET_DEFINE(struct pfi_list, pfi_unlinked_k static struct mtx pfi_unlnkdkifs_mtx; void +pfi_unlnkdkifs_mtx_init() +{ + + mtx_init(&pfi_unlnkdkifs_mtx, "pf unlinked interfaces", NULL, MTX_DEF); +} + +void pfi_initialize(void) { struct ifg_group *ifg; @@ -110,8 +117,6 @@ pfi_initialize(void) V_pfi_buffer = malloc(V_pfi_buffer_max * sizeof(*V_pfi_buffer), PFI_MTYPE, M_WAITOK); - mtx_init(&pfi_unlnkdkifs_mtx, "pf unlinked interfaces", NULL, MTX_DEF); - kif = malloc(sizeof(*kif), PFI_MTYPE, M_WAITOK); PF_RULES_WLOCK(); V_pfi_all = pfi_kif_attach(kif, IFG_ALL); Index: sys/netpfil/pf/pf_ioctl.c =================================================================== --- sys/netpfil/pf/pf_ioctl.c (revision 252856) +++ sys/netpfil/pf/pf_ioctl.c (working copy) @@ -3629,28 +3629,31 @@ dehook_pf(void) } static int -pf_load(void) +vnet_pf_init(void) { int error; - VNET_ITERATOR_DECL(vnet_iter); + TAILQ_INIT(&V_pf_tags); + TAILQ_INIT(&V_pf_qids); - VNET_LIST_RLOCK(); - VNET_FOREACH(vnet_iter) { - CURVNET_SET(vnet_iter); - V_pf_pfil_hooked = 0; - V_pf_end_threads = 0; - TAILQ_INIT(&V_pf_tags); - TAILQ_INIT(&V_pf_qids); - CURVNET_RESTORE(); - } - VNET_LIST_RUNLOCK(); + if ((error = pfattach()) != 0) + return (error); + return (0); +} + +static int +pf_load(void) +{ + rw_init(&pf_rules_lock, "pf rulesets"); + pf_sendqueue_mtx_init(); + pf_overloadqueue_mtx_init(); + pf_unlnkdrules_mtx_init(); + pfi_unlnkdkifs_mtx_init(); + pf_frag_mtx_init(); pf_dev = make_dev(&pf_cdevsw, 0, 0, 0, 0600, PF_NAME); - if ((error = pfattach()) != 0) - return (error); return (0); } @@ -3727,3 +3730,5 @@ static moduledata_t pf_mod = { DECLARE_MODULE(pf, pf_mod, SI_SUB_PSEUDO, SI_ORDER_FIRST); MODULE_VERSION(pf, PF_MODVER); + +VNET_SYSINIT(vnet_pf_init, SI_SUB_PROTO_IFATTACHDOMAIN, SI_ORDER_ANY - 255, vnet_pf_init, NULL); Index: sys/netpfil/pf/pf_norm.c =================================================================== --- sys/netpfil/pf/pf_norm.c (revision 252856) +++ sys/netpfil/pf/pf_norm.c (working copy) @@ -163,8 +163,6 @@ pf_normalize_init(void) uma_zone_set_max(V_pf_frent_z, PFFRAG_FRENT_HIWAT); uma_zone_set_warning(V_pf_frent_z, "PF frag entries limit reached"); - mtx_init(&pf_frag_mtx, "pf fragments", NULL, MTX_DEF); - TAILQ_INIT(&V_pf_fragqueue); TAILQ_INIT(&V_pf_cachequeue); } @@ -180,6 +178,13 @@ pf_normalize_cleanup(void) mtx_destroy(&pf_frag_mtx); } +void +pf_frag_mtx_init() +{ + + mtx_init(&pf_frag_mtx, "pf fragments", NULL, MTX_DEF); +} + static int pf_frag_compare(struct pf_fragment *a, struct pf_fragment *b) { --bFsKbPszpzYNtEU6 Content-Type: text/plain; charset=us-ascii Content-Disposition: attachment; filename="pf.patch.txt" Index: sys/net/pfvar.h =================================================================== --- sys/net/pfvar.h (revision 252114) +++ sys/net/pfvar.h (working copy) @@ -227,6 +227,17 @@ extern struct rwlock pf_rules_lock; #define PF_RULES_RASSERT() rw_assert(&pf_rules_lock, 
 #define PF_RULES_WASSERT() rw_assert(&pf_rules_lock, RA_WLOCKED)
 
+extern struct mtx pf_sendqueue_mtx;
+#define PF_SENDQ_LOCK() mtx_lock(&pf_sendqueue_mtx)
+#define PF_SENDQ_UNLOCK() mtx_unlock(&pf_sendqueue_mtx)
+
+extern struct mtx pf_overloadqueue_mtx;
+#define PF_OVERLOADQ_LOCK() mtx_lock(&pf_overloadqueue_mtx)
+#define PF_OVERLOADQ_UNLOCK() mtx_unlock(&pf_overloadqueue_mtx)
+
+extern struct mtx pfi_unlnkdkifs_mtx;
+extern struct mtx pf_frag_mtx;
+
 #define PF_MODVER 1
 #define PFLOG_MODVER 1
 #define PFSYNC_MODVER 1

Index: sys/netpfil/pf/pf.c
===================================================================
--- sys/netpfil/pf/pf.c	(revision 252114)
+++ sys/netpfil/pf/pf.c	(working copy)
@@ -157,9 +157,7 @@ STAILQ_HEAD(pf_send_head, pf_send_entry);
 static VNET_DEFINE(struct pf_send_head, pf_sendqueue);
 #define V_pf_sendqueue VNET(pf_sendqueue)
 
-static struct mtx pf_sendqueue_mtx;
-#define PF_SENDQ_LOCK() mtx_lock(&pf_sendqueue_mtx)
-#define PF_SENDQ_UNLOCK() mtx_unlock(&pf_sendqueue_mtx)
+struct mtx pf_sendqueue_mtx;
 
 /*
  * Queue for pf_overload_task() tasks.
@@ -178,9 +176,7 @@ static VNET_DEFINE(struct pf_overload_head, pf_ove
 static VNET_DEFINE(struct task, pf_overloadtask);
 #define V_pf_overloadtask VNET(pf_overloadtask)
 
-static struct mtx pf_overloadqueue_mtx;
-#define PF_OVERLOADQ_LOCK() mtx_lock(&pf_overloadqueue_mtx)
-#define PF_OVERLOADQ_UNLOCK() mtx_unlock(&pf_overloadqueue_mtx)
+struct mtx pf_overloadqueue_mtx;
 
 VNET_DEFINE(struct pf_rulequeue, pf_unlinked_rules);
 struct mtx pf_unlnkdrules_mtx;
@@ -755,13 +751,9 @@ pf_initialize()
 	STAILQ_INIT(&V_pf_sendqueue);
 	SLIST_INIT(&V_pf_overloadqueue);
 	TASK_INIT(&V_pf_overloadtask, 0, pf_overload_task, &V_pf_overloadqueue);
-	mtx_init(&pf_sendqueue_mtx, "pf send queue", NULL, MTX_DEF);
-	mtx_init(&pf_overloadqueue_mtx, "pf overload/flush queue", NULL,
-	    MTX_DEF);
 
 	/* Unlinked, but may be referenced rules. */
 	TAILQ_INIT(&V_pf_unlinked_rules);
-	mtx_init(&pf_unlnkdrules_mtx, "pf unlinked rules", NULL, MTX_DEF);
 }
 
 void

Index: sys/netpfil/pf/pf_if.c
===================================================================
--- sys/netpfil/pf/pf_if.c	(revision 252114)
+++ sys/netpfil/pf/pf_if.c	(working copy)
@@ -97,7 +97,7 @@ MALLOC_DEFINE(PFI_MTYPE, "pf_ifnet", "pf(4) interf
 LIST_HEAD(pfi_list, pfi_kif);
 static VNET_DEFINE(struct pfi_list, pfi_unlinked_kifs);
 #define V_pfi_unlinked_kifs VNET(pfi_unlinked_kifs)
-static struct mtx pfi_unlnkdkifs_mtx;
+struct mtx pfi_unlnkdkifs_mtx;
 
 void
 pfi_initialize(void)
@@ -110,8 +110,6 @@ pfi_initialize(void)
 	V_pfi_buffer = malloc(V_pfi_buffer_max * sizeof(*V_pfi_buffer),
 	    PFI_MTYPE, M_WAITOK);
 
-	mtx_init(&pfi_unlnkdkifs_mtx, "pf unlinked interfaces", NULL, MTX_DEF);
-
 	kif = malloc(sizeof(*kif), PFI_MTYPE, M_WAITOK);
 	PF_RULES_WLOCK();
 	V_pfi_all = pfi_kif_attach(kif, IFG_ALL);

Index: sys/netpfil/pf/pf_ioctl.c
===================================================================
--- sys/netpfil/pf/pf_ioctl.c	(revision 252114)
+++ sys/netpfil/pf/pf_ioctl.c	(working copy)
@@ -3629,28 +3629,32 @@ dehook_pf(void)
 }
 
 static int
-pf_load(void)
+vnet_pf_init(void)
 {
 	int error;
 
-	VNET_ITERATOR_DECL(vnet_iter);
+	V_pf_pfil_hooked = 0;
+	TAILQ_INIT(&V_pf_tags);
+	TAILQ_INIT(&V_pf_qids);
 
-	VNET_LIST_RLOCK();
-	VNET_FOREACH(vnet_iter) {
-		CURVNET_SET(vnet_iter);
-		V_pf_pfil_hooked = 0;
-		V_pf_end_threads = 0;
-		TAILQ_INIT(&V_pf_tags);
-		TAILQ_INIT(&V_pf_qids);
-		CURVNET_RESTORE();
-	}
-	VNET_LIST_RUNLOCK();
+	if ((error = pfattach()) != 0)
+		return (error);
+	return (0);
+}
+
+static int
+pf_load(void)
+{
+	rw_init(&pf_rules_lock, "pf rulesets");
 
-	pf_dev = make_dev(&pf_cdevsw, 0, 0, 0, 0600, PF_NAME);
-	if ((error = pfattach()) != 0)
-		return (error);
+	mtx_init(&pf_sendqueue_mtx, "pf send queue", NULL, MTX_DEF);
+	mtx_init(&pf_overloadqueue_mtx, "pf overload/flush queue", NULL,
+	    MTX_DEF);
+	mtx_init(&pf_unlnkdrules_mtx, "pf unlinked rules", NULL, MTX_DEF);
+	mtx_init(&pfi_unlnkdkifs_mtx, "pf unlinked interfaces", NULL, MTX_DEF);
+	mtx_init(&pf_frag_mtx, "pf fragments", NULL, MTX_DEF);
 
 	return (0);
 }
@@ -3727,3 +3731,5 @@ static moduledata_t pf_mod = {
 
 DECLARE_MODULE(pf, pf_mod, SI_SUB_PSEUDO, SI_ORDER_FIRST);
 MODULE_VERSION(pf, PF_MODVER);
+
+VNET_SYSINIT(vnet_pf_init, SI_SUB_PROTO_IFATTACHDOMAIN, SI_ORDER_ANY - 255, vnet_pf_init, NULL);

Index: sys/netpfil/pf/pf_norm.c
===================================================================
--- sys/netpfil/pf/pf_norm.c	(revision 252114)
+++ sys/netpfil/pf/pf_norm.c	(working copy)
@@ -92,7 +92,7 @@ struct pf_fragment {
 	LIST_HEAD(, pf_frent) fr_queue;
 };
 
-static struct mtx pf_frag_mtx;
+struct mtx pf_frag_mtx;
 #define PF_FRAG_LOCK() mtx_lock(&pf_frag_mtx)
 #define PF_FRAG_UNLOCK() mtx_unlock(&pf_frag_mtx)
 #define PF_FRAG_ASSERT() mtx_assert(&pf_frag_mtx, MA_OWNED)
@@ -163,8 +163,6 @@ pf_normalize_init(void)
 	uma_zone_set_max(V_pf_frent_z, PFFRAG_FRENT_HIWAT);
 	uma_zone_set_warning(V_pf_frent_z, "PF frag entries limit reached");
 
-	mtx_init(&pf_frag_mtx, "pf fragments", NULL, MTX_DEF);
-
 	TAILQ_INIT(&V_pf_fragqueue);
 	TAILQ_INIT(&V_pf_cachequeue);
 }

--bFsKbPszpzYNtEU6
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="pf_de-virt_patch.txt"

Index: sys/net/pfvar.h
===================================================================
--- sys/net/pfvar.h	(revision 251794)
+++ sys/net/pfvar.h	(working copy)
@@ -1659,19 +1659,17 @@ struct pf_idhash {
 	struct mtx lock;
 };
 
+extern u_long pf_hashmask;
+extern u_long pf_srchashmask;
 #define PF_HASHSIZ (32768)
 VNET_DECLARE(struct pf_keyhash *, pf_keyhash);
 VNET_DECLARE(struct pf_idhash *, pf_idhash);
-VNET_DECLARE(u_long, pf_hashmask);
 #define V_pf_keyhash VNET(pf_keyhash)
 #define V_pf_idhash VNET(pf_idhash)
-#define V_pf_hashmask VNET(pf_hashmask)
 VNET_DECLARE(struct pf_srchash *, pf_srchash);
-VNET_DECLARE(u_long, pf_srchashmask);
 #define V_pf_srchash VNET(pf_srchash)
-#define V_pf_srchashmask VNET(pf_srchashmask)
-#define PF_IDHASH(s) (be64toh((s)->id) % (V_pf_hashmask + 1))
+#define PF_IDHASH(s) (be64toh((s)->id) % (pf_hashmask + 1))
 
 VNET_DECLARE(void *, pf_swi_cookie);
 #define V_pf_swi_cookie VNET(pf_swi_cookie)

Index: sys/netpfil/pf/if_pfsync.c
===================================================================
--- sys/netpfil/pf/if_pfsync.c	(revision 251794)
+++ sys/netpfil/pf/if_pfsync.c	(working copy)
@@ -683,7 +683,7 @@ pfsync_in_clr(struct pfsync_pkt *pkt, struct mbuf
 		    pfi_kif_find(clr[i].ifname) == NULL)
 			continue;
 
-		for (int i = 0; i <= V_pf_hashmask; i++) {
+		for (int i = 0; i <= pf_hashmask; i++) {
 			struct pf_idhash *ih = &V_pf_idhash[i];
 			struct pf_state *s;
 relock:
@@ -2045,7 +2045,7 @@ pfsync_bulk_update(void *arg)
 	else
 		i = sc->sc_bulk_hashid;
 
-	for (; i <= V_pf_hashmask; i++) {
+	for (; i <= pf_hashmask; i++) {
 		struct pf_idhash *ih = &V_pf_idhash[i];
 
 		if (s != NULL)

Index: sys/netpfil/pf/pf.c
===================================================================
--- sys/netpfil/pf/pf.c	(revision 251794)
+++ sys/netpfil/pf/pf.c	(working copy)
@@ -353,21 +353,19 @@ VNET_DEFINE(struct pf_limit, pf_limits[PF_LIMIT_MA
 static MALLOC_DEFINE(M_PFHASH, "pf_hash", "pf(4) hash header structures");
 VNET_DEFINE(struct pf_keyhash *, pf_keyhash);
 VNET_DEFINE(struct pf_idhash *, pf_idhash);
-VNET_DEFINE(u_long, pf_hashmask);
 VNET_DEFINE(struct pf_srchash *, pf_srchash);
-VNET_DEFINE(u_long, pf_srchashmask);
 
 SYSCTL_NODE(_net, OID_AUTO, pf, CTLFLAG_RW, 0, "pf(4)");
 
-VNET_DEFINE(u_long, pf_hashsize);
-#define V_pf_hashsize VNET(pf_hashsize)
-SYSCTL_VNET_UINT(_net_pf, OID_AUTO, states_hashsize, CTLFLAG_RDTUN,
-    &VNET_NAME(pf_hashsize), 0, "Size of pf(4) states hashtable");
+u_long pf_hashmask;
+u_long pf_srchashmask;
+static u_long pf_hashsize;
+static u_long pf_srchashsize;
 
-VNET_DEFINE(u_long, pf_srchashsize);
-#define V_pf_srchashsize VNET(pf_srchashsize)
-SYSCTL_VNET_UINT(_net_pf, OID_AUTO, source_nodes_hashsize, CTLFLAG_RDTUN,
-    &VNET_NAME(pf_srchashsize), 0, "Size of pf(4) source nodes hashtable");
+SYSCTL_UINT(_net_pf, OID_AUTO, states_hashsize, CTLFLAG_RDTUN,
+    &pf_hashsize, 0, "Size of pf(4) states hashtable");
+SYSCTL_UINT(_net_pf, OID_AUTO, source_nodes_hashsize, CTLFLAG_RDTUN,
+    &pf_srchashsize, 0, "Size of pf(4) source nodes hashtable");
 
 VNET_DEFINE(void *, pf_swi_cookie);
@@ -383,7 +381,7 @@ pf_hashkey(struct pf_state_key *sk)
 	    sizeof(struct pf_state_key_cmp)/sizeof(uint32_t),
 	    V_pf_hashseed);
 
-	return (h & V_pf_hashmask);
+	return (h & pf_hashmask);
 }
 
 static __inline uint32_t
@@ -404,7 +402,7 @@ pf_hashsrc(struct pf_addr *addr, sa_family_t af)
 		panic("%s: unknown address family %u", __func__, af);
 	}
 
-	return (h & V_pf_srchashmask);
+	return (h & pf_srchashmask);
 }
 
 #ifdef INET6
@@ -566,7 +564,7 @@ pf_overload_task(void *c, int pending)
 	if (SLIST_EMPTY(&queue))
 		return;
 
-	for (int i = 0; i <= V_pf_hashmask; i++) {
+	for (int i = 0; i <= pf_hashmask; i++) {
 		struct pf_idhash *ih = &V_pf_idhash[i];
 		struct pf_state_key *sk;
 		struct pf_state *s;
@@ -698,12 +696,12 @@ pf_initialize()
 	struct pf_srchash *sh;
 	u_int i;
 
-	TUNABLE_ULONG_FETCH("net.pf.states_hashsize", &V_pf_hashsize);
-	if (V_pf_hashsize == 0 || !powerof2(V_pf_hashsize))
-		V_pf_hashsize = PF_HASHSIZ;
-	TUNABLE_ULONG_FETCH("net.pf.source_nodes_hashsize", &V_pf_srchashsize);
-	if (V_pf_srchashsize == 0 || !powerof2(V_pf_srchashsize))
-		V_pf_srchashsize = PF_HASHSIZ / 4;
+	TUNABLE_ULONG_FETCH("net.pf.states_hashsize", &pf_hashsize);
+	if (pf_hashsize == 0 || !powerof2(pf_hashsize))
+		pf_hashsize = PF_HASHSIZ;
+	TUNABLE_ULONG_FETCH("net.pf.source_nodes_hashsize", &pf_srchashsize);
+	if (pf_srchashsize == 0 || !powerof2(pf_srchashsize))
+		pf_srchashsize = PF_HASHSIZ / 4;
 
 	V_pf_hashseed = arc4random();
@@ -717,12 +715,12 @@ pf_initialize()
 	V_pf_state_key_z = uma_zcreate("pf state keys",
 	    sizeof(struct pf_state_key), pf_state_key_ctor, NULL, NULL, NULL,
 	    UMA_ALIGN_PTR, 0);
-	V_pf_keyhash = malloc(V_pf_hashsize * sizeof(struct pf_keyhash),
+	V_pf_keyhash = malloc(pf_hashsize * sizeof(struct pf_keyhash),
 	    M_PFHASH, M_WAITOK | M_ZERO);
-	V_pf_idhash = malloc(V_pf_hashsize * sizeof(struct pf_idhash),
+	V_pf_idhash = malloc(pf_hashsize * sizeof(struct pf_idhash),
 	    M_PFHASH, M_WAITOK | M_ZERO);
-	V_pf_hashmask = V_pf_hashsize - 1;
-	for (i = 0, kh = V_pf_keyhash, ih = V_pf_idhash; i <= V_pf_hashmask;
+	pf_hashmask = pf_hashsize - 1;
+	for (i = 0, kh = V_pf_keyhash, ih = V_pf_idhash; i <= pf_hashmask;
 	    i++, kh++, ih++) {
 		mtx_init(&kh->lock, "pf_keyhash", NULL, MTX_DEF | MTX_DUPOK);
 		mtx_init(&ih->lock, "pf_idhash", NULL, MTX_DEF);
@@ -735,10 +733,10 @@ pf_initialize()
 	V_pf_limits[PF_LIMIT_SRC_NODES].zone = V_pf_sources_z;
 	uma_zone_set_max(V_pf_sources_z, PFSNODE_HIWAT);
 	uma_zone_set_warning(V_pf_sources_z, "PF source nodes limit reached");
-	V_pf_srchash = malloc(V_pf_srchashsize * sizeof(struct pf_srchash),
+	V_pf_srchash = malloc(pf_srchashsize * sizeof(struct pf_srchash),
 	    M_PFHASH, M_WAITOK|M_ZERO);
-	V_pf_srchashmask = V_pf_srchashsize - 1;
-	for (i = 0, sh = V_pf_srchash; i <= V_pf_srchashmask; i++, sh++)
+	pf_srchashmask = pf_srchashsize - 1;
+	for (i = 0, sh = V_pf_srchash; i <= pf_srchashmask; i++, sh++)
 		mtx_init(&sh->lock, "pf_srchash", NULL, MTX_DEF);
 
 	/* ALTQ */
@@ -775,7 +773,7 @@ pf_cleanup()
 	struct pf_send_entry *pfse, *next;
 	u_int i;
 
-	for (i = 0, kh = V_pf_keyhash, ih = V_pf_idhash; i <= V_pf_hashmask;
+	for (i = 0, kh = V_pf_keyhash, ih = V_pf_idhash; i <= pf_hashmask;
 	    i++, kh++, ih++) {
 		KASSERT(LIST_EMPTY(&kh->keys), ("%s: key hash not empty",
 		    __func__));
@@ -787,7 +785,7 @@ pf_cleanup()
 	free(V_pf_keyhash, M_PFHASH);
 	free(V_pf_idhash, M_PFHASH);
 
-	for (i = 0, sh = V_pf_srchash; i <= V_pf_srchashmask; i++, sh++) {
+	for (i = 0, sh = V_pf_srchash; i <= pf_srchashmask; i++, sh++) {
 		KASSERT(LIST_EMPTY(&sh->nodes),
 		    ("%s: source node hash not empty", __func__));
 		mtx_destroy(&sh->lock);
@@ -1177,7 +1175,7 @@ pf_find_state_byid(uint64_t id, uint32_t creatorid
 
 	V_pf_status.fcounters[FCNT_STATE_SEARCH]++;
 
-	ih = &V_pf_idhash[(be64toh(id) % (V_pf_hashmask + 1))];
+	ih = &V_pf_idhash[(be64toh(id) % (pf_hashmask + 1))];
 
 	PF_HASHROW_LOCK(ih);
 	LIST_FOREACH(s, &ih->states, entry)
@@ -1373,7 +1371,7 @@ pf_purge_thread(void *v)
 			/*
 			 * Now purge everything.
 			 */
-			pf_purge_expired_states(0, V_pf_hashmask);
+			pf_purge_expired_states(0, pf_hashmask);
 			pf_purge_expired_fragments();
 			pf_purge_expired_src_nodes();
@@ -1396,7 +1394,7 @@ pf_purge_thread(void *v)
 		PF_RULES_RUNLOCK();
 
 		/* Process 1/interval fraction of the state table every run. */
-		idx = pf_purge_expired_states(idx, V_pf_hashmask /
+		idx = pf_purge_expired_states(idx, pf_hashmask /
 		    (V_pf_default_rule.timeout[PFTM_INTERVAL] * 10));
 
 		/* Purge other expired types every PFTM_INTERVAL seconds. */
@@ -1462,7 +1460,7 @@ pf_purge_expired_src_nodes()
 	struct pf_src_node *cur, *next;
 	int i;
 
-	for (i = 0, sh = V_pf_srchash; i <= V_pf_srchashmask; i++, sh++) {
+	for (i = 0, sh = V_pf_srchash; i <= pf_srchashmask; i++, sh++) {
 		PF_HASHROW_LOCK(sh);
 		LIST_FOREACH_SAFE(cur, &sh->nodes, entry, next)
 			if (cur->states <= 0 && cur->expire <= time_uptime) {
@@ -1614,7 +1612,7 @@ relock:
 		PF_HASHROW_UNLOCK(ih);
 
 		/* Return when we hit end of hash. */
-		if (++i > V_pf_hashmask) {
+		if (++i > pf_hashmask) {
 			V_pf_status.states = uma_zone_get_cur(V_pf_state_z);
 			return (0);
 		}

Index: sys/netpfil/pf/pf_ioctl.c
===================================================================
--- sys/netpfil/pf/pf_ioctl.c	(revision 251794)
+++ sys/netpfil/pf/pf_ioctl.c	(working copy)
@@ -1577,7 +1577,7 @@ DIOCCHANGERULE_error:
 		struct pfioc_state_kill *psk = (struct pfioc_state_kill *)addr;
 		u_int i, killed = 0;
 
-		for (i = 0; i <= V_pf_hashmask; i++) {
+		for (i = 0; i <= pf_hashmask; i++) {
 			struct pf_idhash *ih = &V_pf_idhash[i];
 
 relock_DIOCCLRSTATES:
@@ -1622,7 +1622,7 @@ relock_DIOCCLRSTATES:
 			break;
 		}
 
-		for (i = 0; i <= V_pf_hashmask; i++) {
+		for (i = 0; i <= pf_hashmask; i++) {
 			struct pf_idhash *ih = &V_pf_idhash[i];
 
 relock_DIOCKILLSTATES:
@@ -1726,7 +1726,7 @@ relock_DIOCKILLSTATES:
 		p = pstore = malloc(ps->ps_len, M_TEMP, M_WAITOK);
 		nr = 0;
 
-		for (i = 0; i <= V_pf_hashmask; i++) {
+		for (i = 0; i <= pf_hashmask; i++) {
 			struct pf_idhash *ih = &V_pf_idhash[i];
 
 			PF_HASHROW_LOCK(ih);
@@ -3078,7 +3078,7 @@ DIOCCHANGEADDR_error:
 		uint32_t i, nr = 0;
 
 		if (psn->psn_len == 0) {
-			for (i = 0, sh = V_pf_srchash; i < V_pf_srchashmask;
+			for (i = 0, sh = V_pf_srchash; i < pf_srchashmask;
 			    i++, sh++) {
 				PF_HASHROW_LOCK(sh);
 				LIST_FOREACH(n, &sh->nodes, entry)
@@ -3090,7 +3090,7 @@ DIOCCHANGEADDR_error:
 		}
 
 		p = pstore = malloc(psn->psn_len, M_TEMP, M_WAITOK);
-		for (i = 0, sh = V_pf_srchash; i < V_pf_srchashmask;
+		for (i = 0, sh = V_pf_srchash; i < pf_srchashmask;
 		    i++, sh++) {
 			PF_HASHROW_LOCK(sh);
 			LIST_FOREACH(n, &sh->nodes, entry) {
@@ -3147,7 +3147,7 @@ DIOCCHANGEADDR_error:
 		struct pf_src_node *sn;
 		u_int i, killed = 0;
 
-		for (i = 0, sh = V_pf_srchash; i < V_pf_srchashmask;
+		for (i = 0, sh = V_pf_srchash; i < pf_srchashmask;
 		    i++, sh++) {
 			/*
 			 * XXXGL: we don't ever acquire sources hash lock
@@ -3331,7 +3331,7 @@ pf_clear_states(void)
 	struct pf_state *s;
 	u_int i;
 
-	for (i = 0; i <= V_pf_hashmask; i++) {
+	for (i = 0; i <= pf_hashmask; i++) {
 		struct pf_idhash *ih = &V_pf_idhash[i];
 relock:
 		PF_HASHROW_LOCK(ih);
@@ -3366,7 +3366,7 @@ pf_clear_srcnodes(struct pf_src_node *n)
 	struct pf_state *s;
 	int i;
 
-	for (i = 0; i <= V_pf_hashmask; i++) {
+	for (i = 0; i <= pf_hashmask; i++) {
 		struct pf_idhash *ih = &V_pf_idhash[i];
 
 		PF_HASHROW_LOCK(ih);
@@ -3382,7 +3382,7 @@ pf_clear_srcnodes(struct pf_src_node *n)
 	if (n == NULL) {
 		struct pf_srchash *sh;
 
-		for (i = 0, sh = V_pf_srchash; i < V_pf_srchashmask;
+		for (i = 0, sh = V_pf_srchash; i < pf_srchashmask;
 		    i++, sh++) {
 			PF_HASHROW_LOCK(sh);
 			LIST_FOREACH(n, &sh->nodes, entry) {

--bFsKbPszpzYNtEU6--
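
The inline patch and the two attachments share one pattern: locks that guard data common to all vnets are pulled out of the per-vnet pf_initialize()/pfi_initialize()/pf_normalize_init() paths and set up once at module load, while the remaining per-vnet setup is registered through VNET_SYSINIT() so it also runs for vnets created after the module is loaded. The sketch below only illustrates that split under stated assumptions; it is not code from the patches, and example_mtx, example_queue, vnet_example_init and example_modevent are made-up names.

/*
 * Illustrative sketch only -- not part of the posted patches.
 * Global (non-virtualized) locks are initialized once at MOD_LOAD;
 * per-vnet state is initialized by a VNET_SYSINIT handler.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/module.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/queue.h>

#include <net/vnet.h>

static struct mtx example_mtx;			/* shared by every vnet */

TAILQ_HEAD(example_head, example_entry);
static VNET_DEFINE(struct example_head, example_queue);	/* one per vnet */
#define	V_example_queue	VNET(example_queue)

static void
vnet_example_init(void *unused __unused)
{

	/* Runs for every vnet, including vnets created after module load. */
	TAILQ_INIT(&V_example_queue);
}
VNET_SYSINIT(vnet_example_init, SI_SUB_PROTO_IFATTACHDOMAIN, SI_ORDER_ANY,
    vnet_example_init, NULL);

static int
example_modevent(module_t mod, int type, void *data)
{

	switch (type) {
	case MOD_LOAD:
		/* Not per-vnet: initialize exactly once, before any use. */
		mtx_init(&example_mtx, "example global", NULL, MTX_DEF);
		return (0);
	case MOD_UNLOAD:
		mtx_destroy(&example_mtx);
		return (0);
	default:
		return (EOPNOTSUPP);
	}
}

static moduledata_t example_mod = {
	"example",
	example_modevent,
	NULL
};
DECLARE_MODULE(example, example_mod, SI_SUB_PSEUDO, SI_ORDER_FIRST);

Without options VIMAGE the VNET_* macros collapse to ordinary globals, so the same layout builds on non-VIMAGE kernels. The pf_de-virt_patch.txt attachment complements the split by turning the state and source-node hash masks and their loader tunables into plain globals rather than per-vnet variables.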