Date: Tue, 23 Jun 2015 11:15:20 +0200
From: Milan Obuch <freebsd-pf@dino.sk>
To: Ermal Luçi
Cc: Ian FREISLICH, freebsd-pf@freebsd.org
Subject: Re: Large scale NAT with PF - some weird problem
Message-ID: <20150623111520.1679794b@zeta.dino.sk>

On Tue, 23 Jun 2015 10:57:16 +0200
Ermal Luçi wrote:

> On Tue, Jun 23, 2015 at 10:12 AM, Milan Obuch wrote:
>
> > On Tue, 23 Jun 2015 09:49:57 +0200
> > Ian FREISLICH wrote:
> >
> > [ snip ]
> >
> > > How is your NAT rule defined? I had a closer look at the way I
> > > did it:
> > >
> > >   nat on vlan46 from 10.8.0.0/15 to ! <...> -> xx.xx.xx.xx/24
> > >       round-robin sticky-address
> > >
> > > I think you may be missing the "round-robin" that spreads the
> > > mapping over your pool. The manual says that when more than one
> > > address is specified, round-robin is the only pool type allowed;
> > > it does not say that when more than one address is specified this
> > > is the default pool option.
> >
> > Thanks for the hint; however, this is not the case, I think. My
> > definition is
> >
> >   nat on $if_ext from <...> to any -> $pool_ext round-robin
> >       sticky-address
> >
> > where <...> contains some /24 segments from the 10.0.0.0/8 range
> > and one /24 and one /15 segment from the 172.16.0.0/12 range, and
> > $pool_ext is one public /23 segment.
> >
> > > You can check your state table to see if it is indeed round-robin:
> > >
> > >   # pfctl -s sta | grep " ("
> > >   ...
> > >   all tcp a.b.c.d:53802 (10.0.0.220:42808) -> 41.246.55.66:24 ESTABLISHED:ESTABLISHED
> > >   all tcp a.b.c.e:60794 (10.0.0.38:47825) -> 216.58.223.10:443 ESTABLISHED:FIN_WAIT_2
> > >
> > > If all your addresses "a.b.c.X" are the same, it's not round-robin
> > > and that's your problem.
> >
> > Well, this is something I do not fully understand. If my pool were
> > a.b.c.0/24, then what you wrote could not be any other way - I think
> > this is not what you meant. Or did you think there would be only one
> > IP used? That's definitely not the case; I see many IPs from my /23
> > segment here.
> >
> > One strange thing occurred, however - it looks like if one IP from
> > this /23 range gets used, trouble occurs. I do pfctl -k and pfctl -K
> > for this address and all is well again. As long as this one IP is
> > not used, everything works. When it gets used again, voila, trouble
> > again.
>
> Can you check if you are reaching the limits on source entries?
>
>   set limit src-nodes 2000
>
> sets the maximum number of entries in the memory pool used for
> tracking source IP addresses (generated by the sticky-address and
> src.track options) to 2000.

Well, I think it is big enough - pfctl -s memory shows:

  states        hard limit   500000
  src-nodes     hard limit   100000
  frags         hard limit    50000
  tables        hard limit     5000
  table-entries hard limit   500000

Excerpt from pfctl -vs info:

  Source Tracking Table
    current entries                      418
    searches                         1435901           36.2/s
    inserts                             4577            0.1/s
    removals                            4159            0.1/s

My gut feeling is there is just much more space than necessary, but
this should not hurt, I think.

Thanks,
Milan
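
For reference, the kind of setup discussed in this thread can be sketched in pf.conf roughly as follows. This is an illustration only: the interface, table name, and all addresses are hypothetical placeholders, not the actual configuration from the thread.

```
# Sketch only -- every name and address below is a placeholder.
ext_if = "em0"                 # external interface (the thread uses a vlan)
pool_ext = "192.0.2.0/23"      # public /23 NAT pool (placeholder range)

# Internal networks to be translated, mixing RFC 1918 ranges as in the thread:
table <nat_clients> { 10.1.1.0/24, 10.2.2.0/24, 172.16.0.0/15, 172.18.5.0/24 }

# Source-node limit for sticky-address tracking (see the set limit note above):
set limit src-nodes 100000

# With more than one address in the pool, round-robin must be explicit;
# sticky-address pins each internal host to one external address:
nat on $ext_if from <nat_clients> to any -> $pool_ext round-robin sticky-address
```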
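To check whether translations really are spread across the pool, the `pfctl -s state | grep " ("` output quoted above can be tallied per translated address. The following is a small sketch, not a pf tool; it assumes state lines shaped exactly like the excerpts in the thread (`all tcp ext:port (int:port) -> dst ...`), which may not cover every protocol's output format.

```python
import re
from collections import Counter

# Matches state lines of the form shown in the thread, e.g.
# "all tcp a.b.c.d:53802 (10.0.0.220:42808) -> 41.246.55.66:24 ESTABLISHED:ESTABLISHED"
STATE_RE = re.compile(
    r"^\S+\s+\S+\s+"                          # anchor/proto, e.g. "all tcp"
    r"(?P<ext>\d{1,3}(?:\.\d{1,3}){3}):\d+"   # translated (external) addr:port
    r"\s+\(\d{1,3}(?:\.\d{1,3}){3}:\d+\)"     # original (internal) addr:port
    r"\s+->"
)

def nat_address_usage(lines):
    """Count states per translated (external) NAT pool address."""
    counts = Counter()
    for line in lines:
        m = STATE_RE.match(line.strip())
        if m:
            counts[m.group("ext")] += 1
    return counts

if __name__ == "__main__":
    # Placeholder sample; in practice feed it `pfctl -s state` output.
    sample = [
        "all tcp 198.51.100.1:53802 (10.0.0.220:42808) -> 41.246.55.66:24 ESTABLISHED:ESTABLISHED",
        "all tcp 198.51.100.2:60794 (10.0.0.38:47825) -> 216.58.223.10:443 ESTABLISHED:FIN_WAIT_2",
        "all tcp 198.51.100.1:50000 (10.0.0.7:40000) -> 93.184.216.34:80 ESTABLISHED:ESTABLISHED",
    ]
    for addr, n in sorted(nat_address_usage(sample).items()):
        print(addr, n)
```

If one pool address dominates the counts while others sit idle, the pool is not being rotated; a single heavily reused address would also be the one to watch for the trouble described above.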