Date: Wed, 4 Aug 2010 09:49:15 +0200
From: Daniel Hartmeier <dhartmei@benzedrine.cx>
To: "Rushan R. Shaymardanov"
Cc: freebsd-pf@freebsd.org
Subject: Re: Keeping state of tcp connections
Message-ID: <20100804074915.GB3834@insomnia.benzedrine.cx>
In-Reply-To: <4C591915.7050807@clink.ru>
List-Id: "Technical discussion and general questions about packet filter (pf)"

On Wed, Aug 04, 2010 at 01:39:01PM +0600, Rushan R.
Shaymardanov wrote:

> I think here's the problem. This connection is the one I'm using to
> run pfctl -ss, so "expires in" should be about 24 hrs, as in your
> example. But as you can see, the value is 4:13 here. When I run the
> command again, I get a different value:
>
> gw ~ # pfctl -vvss | grep -A 3 "192.168.50.225" | grep -A 3 "172.16.11.1:22"
> all tcp 172.16.11.1:22 <- 192.168.50.225:49021  ESTABLISHED:ESTABLISHED
>    [3592206868 + 333376] wscale 9  [2021010803 + 1049600] wscale 6
>    age 00:21:58, expires in 02:35:27, 2119:4305 pkts, 126728:2373444 bytes, rule 293
>    id: 4c46689c7daad5e7 creatorid: f74cdd39
>
> Every time I execute this command, the value changes from 1:xx to 4:xx.

Are you using adaptive timeouts?

  # pfctl -st | grep adaptive

What's your state limit?

  # pfctl -sm | grep states

When the problem occurs, how many states do you have?

  # pfctl -si | grep current

If this value is higher than the adaptive.start value, timeout values
get scaled down, which could explain what you see. If so, try
increasing the state limit and/or the adaptive thresholds:

  set limit states 50000
  set timeout { adaptive.start 50000 adaptive.end 60000 }

Other possible causes: do you use pfsync to synchronize states between
multiple pf machines? If so, are their clocks synchronized and
accurate? Did you change any (kernel) settings related to time, such
as HZ? Is your time synchronized in some special way, i.e. not just
by ntpd?

Daniel
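The scaling Daniel describes can be sketched as follows. pf applies a
linear scaling factor to all timeout values once the state count
exceeds adaptive.start, reaching zero at adaptive.end; the threshold
and state numbers below are illustrative, not taken from the thread.

```python
def adaptive_scale(states, start, end):
    """Linear factor pf applies to timeouts when the number of
    state entries lies between adaptive.start and adaptive.end."""
    if states <= start:
        return 1.0   # at or below adaptive.start: timeouts unchanged
    if states >= end:
        return 0.0   # at or above adaptive.end: timeouts scaled to zero
    return (end - states) / (end - start)

# Illustrative numbers: with adaptive.start 6000, adaptive.end 12000
# and 9000 current states, a 24-hour tcp.established timeout (86400 s)
# is halved, so an ESTABLISHED state would show roughly 12 hours left.
factor = adaptive_scale(9000, 6000, 12000)
print(factor)               # 0.5
print(int(86400 * factor))  # 43200
```

This is why the "expires in" value drifts as the state table fills and
drains: the effective timeout is recomputed against the current state
count, not fixed at state creation.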