Date:      Mon, 29 Jun 2015 13:05:19 +0200
From:      Milan Obuch <freebsd-pf@dino.sk>
To:        Daniel Hartmeier <daniel@benzedrine.ch>
Cc:        Ian FREISLICH <ian.freislich@capeaugusta.com>, freebsd-pf@freebsd.org
Subject:   Re: Large scale NAT with PF - some weird problem
Message-ID:  <20150629130519.168f0efc@zeta.dino.sk>
In-Reply-To: <20150629104614.GD22693@insomnia.benzedrine.ch>
References:  <20150620182432.62797ec5@zeta.dino.sk> <20150619091857.304b707b@zeta.dino.sk> <14e119e8fa8.2755.abfb21602af57f30a7457738c46ad3ae@capeaugusta.com> <E1Z6dHz-0000uu-D8@clue.co.za> <20150621133236.75a4d86d@zeta.dino.sk> <20150629104614.GD22693@insomnia.benzedrine.ch>

On Mon, 29 Jun 2015 12:46:14 +0200
Daniel Hartmeier <daniel@benzedrine.ch> wrote:

> On Sun, Jun 21, 2015 at 01:32:36PM +0200, Milan Obuch wrote:
> 
> > One observation, on pfctl -vs info output - when the src-limit
> > counter rises to 30 or so, I start getting the first reports that
> > someone has a problem. Is it only coincidence, or is there really
> > some relation to my problem?
> 
> This might be a clue. That counter shouldn't increase. It means
> something triggered a PFRES_SRCLIMIT.
>

OK, I will keep an eye on this for some time too. I do not have much
knowledge of pf internals, so my observations may or may not be
relevant, and the same goes for my questions.
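To keep an eye on that counter without reading the whole report each
time, something like the following might do; this is only a sketch, and
the `get_src_limit` helper plus the canned sample line are my own
illustration, not output from this system (the real input would come
from `pfctl -vs info`):

```shell
# Hypothetical helper: reads "pfctl -vs info"-style counter output on
# stdin and prints the value of the src-limit counter.
get_src_limit() {
    awk '$1 == "src-limit" { print $2 }'
}

# Canned sample line standing in for the live command
#   pfctl -vs info | get_src_limit
sample='src-limit                          30            0.0/s'
printf '%s\n' "$sample" | get_src_limit    # prints: 30
```

On a live box this could run from cron or a loop to log the counter
once a minute and correlate the rises with user complaints.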

> Are you using source tracking for anything else besides the NAT sticky
> address feature?
>

I recently reviewed some pfctl output, and I think this mechanism is
used in other scenarios as well, namely the following one for ssh
protection:

block in quick on $if_ext inet proto tcp from <abusive_ips> to any port 22

pass in on $if_ext proto tcp to x.y.24.0/22 port ssh flags S/SA \
    keep state (max-src-conn 10, max-src-conn-rate 5/5, \
    overload <abusive_ips> flush)

(I am sure you know this one)
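One way to see whether that overload rule is what drives the counter
would be to watch how many addresses sit in the table over time; on a
live system that would be `pfctl -t abusive_ips -T show | wc -l`. A
small sketch of the pipeline, simulated here with canned addresses so
the counting step itself is visible:

```shell
# Count addresses in the overload table. Live command would be:
#   pfctl -t abusive_ips -T show | wc -l
# The three sample addresses below are placeholders, not real entries.
sample_table='192.0.2.10
192.0.2.77
203.0.113.5'
printf '%s\n' "$sample_table" | wc -l | tr -d ' '    # prints: 3
```

If the table count and the src-limit counter move together, the ssh
rule rather than the NAT sticky-address tracking would be the suspect.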

> If not, the only explanation for a PFRES_SRCLIMIT in a translation
> rule is a failure of pf.c pf_insert_src_node(), which could only be an
> allocation failure with uma_zalloc().
>
> Do you see any allocation failures? Log entries about uma, "source
> nodes limit reached"? How about vmstat -m?
>

Where would these failures show up? I see nothing in /var/log/messages.

As for 'vmstat -m', I think the following lines could be of some interest:

    Type InUse MemUse HighUse Requests  Size(s)
 pf_hash     3  1728K       -        3  
 pf_temp     0     0K       -      955  32,64
pf_ifnet    21     7K       -      282  128,256,2048
 pf_osfp  1130   102K       -     6780  32,128
 pf_rule   222   129K       -      468  128,1024
pf_table     9    18K       -       35  2048

but I have no idea how to interpret them.
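For repeated checks, the pf rows can be pulled out of the full
`vmstat -m` output mechanically; a sketch using a canned sample of the
lines above (field positions assumed to match the layout shown, Type in
column 1 and MemUse in column 3):

```shell
# Extract pf-related malloc types and their memory in use from
# "vmstat -m"-style output; input here is a canned sample, the live
# command would be: vmstat -m | awk '$1 ~ /^pf_/ { print $1, $3 }'
vmstat_sample=' pf_hash     3  1728K       -        3
 pf_temp     0     0K       -      955  32,64
 pf_rule   222   129K       -      468  128,1024'
printf '%s\n' "$vmstat_sample" | awk '$1 ~ /^pf_/ { print $1, $3 }'
```

Logging that alongside the src-limit counter might show whether memory
use in any of the pf zones grows when the problems start.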

Regards,
Milan
