Date: Mon, 29 Jun 2015 13:05:19 +0200
From: Milan Obuch
To: Daniel Hartmeier
Cc: Ian FREISLICH, freebsd-pf@freebsd.org
Subject: Re: Large scale NAT with PF - some weird problem
Message-ID: <20150629130519.168f0efc@zeta.dino.sk>
In-Reply-To: <20150629104614.GD22693@insomnia.benzedrine.ch>

On Mon, 29 Jun 2015 12:46:14 +0200
Daniel Hartmeier wrote:

> On Sun, Jun 21, 2015 at 01:32:36PM +0200, Milan Obuch wrote:
>
> > One observation, on pfctl -vs info output - when
src-limit counters
> > rise to 30 or so, I start getting the first messages that someone
> > has a problem. Is it only coincidence, or is there really some
> > relation to my problem?
>
> This might be a clue. That counter shouldn't increase. It means
> something triggered a PFRES_SRCLIMIT.

OK, I will keep an eye on this for some time too. I do not have much
knowledge of pf internals, so my observations may or may not be
relevant, just like my questions.

> Are you using source tracking for anything else besides the NAT sticky
> address feature?

I recently reviewed some pfctl output and I think this mechanism is
used in other scenarios as well, namely the following one for ssh
protection:

block in quick on $if_ext inet proto tcp from to any port 22
pass in on $if_ext proto tcp to x.y.24.0/22 port ssh flags S/SA \
    keep state (max-src-conn 10, max-src-conn-rate 5/5, overload flush)

(somewhat mail-mangled - the angle-bracketed table names did not
survive - but I am sure you know this one)

> If not, the only explanation for a PFRES_SRCLIMIT in a translation
> rule is a failure of pf.c pf_insert_src_node(), which could only be an
> allocation failure with uma_zalloc().
>
> Do you see any allocation failures? Log entries about uma, "source
> nodes limit reached"? How about vmstat -m?

Where would these failures show up? I see nothing in /var/log/messages.
As for 'vmstat -m', I think the following lines could be of some
interest:

        Type  InUse  MemUse  HighUse  Requests  Size(s)
     pf_hash      3   1728K        -         3
     pf_temp      0      0K        -       955  32,64
    pf_ifnet     21      7K        -       282  128,256,2048
     pf_osfp   1130    102K        -      6780  32,128
     pf_rule    222    129K        -       468  128,1024
    pf_table      9     18K        -        35  2048

but I have no idea how to interpret this.

Regards,
Milan
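P.S. For anyone reading along, the un-mangled shape of that common
ssh-protection pattern is roughly the following; the table name
"bruteforce" is only a placeholder for illustration, not necessarily
the name actually in use:

```
table <bruteforce> persist

block in quick on $if_ext inet proto tcp from <bruteforce> to any port 22
pass in on $if_ext proto tcp to x.y.24.0/22 port ssh \
    flags S/SA keep state \
    (max-src-conn 10, max-src-conn-rate 5/5, overload <bruteforce> flush)
```

The overload clause moves offending sources into the table, and the
block rule then drops them; the `<...>` brackets around table names are
exactly what mail clients tend to eat.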
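P.P.S. On interpreting that vmstat -m output: just summing the MemUse
column from the quoted lines (a quick sketch, with the figures from the
mail embedded so it runs standalone) gives under 2 MB in use for the
pf_* types, which does not look like memory pressure on its own:

```shell
#!/bin/sh
# Sum the MemUse column (field 3, e.g. "1728K") of the pf_* lines
# quoted from vmstat -m above; data is embedded for a standalone run.
total=$(awk '{ gsub(/K/, "", $3); t += $3 } END { print t }' <<'EOF'
pf_hash 3 1728K - 3
pf_temp 0 0K - 955 32,64
pf_ifnet 21 7K - 282 128,256,2048
pf_osfp 1130 102K - 6780 32,128
pf_rule 222 129K - 468 128,1024
pf_table 9 18K - 35 2048
EOF
)
echo "${total}K total in-use for pf_* types"   # prints: 1984K total in-use for pf_* types
```

For the source-node side specifically, `pfctl -s memory` shows the
src-nodes hard limit and `pfctl -s Sources` lists the current source
tracking entries, which may be more telling than the malloc statistics.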