From owner-freebsd-pf@FreeBSD.ORG Fri Mar 31 14:40:20 2006
From: "Greg Hennessy" <Greg.Hennessy@nviz.net>
To: "'Christopher McGee'"
Cc: freebsd-pf@freebsd.org
Date: Fri, 31 Mar 2006 15:40:16 +0100
Message-ID: <000001c654d1$06bc4e60$0a00a8c0@thebeast>
In-Reply-To: <442D35DE.9060707@xecu.net>
Subject: RE: Traffic mysteriously dropping
List-Id: "Technical discussion and general questions about packet filter (pf)"

> I thought the most current recommendations were not to use polling?
> I thought this was something handled by most new hardware?

I would use polling in any situation with the likelihood of a high
packet rate; it's integrated directly into the em NIC driver as of 6.x
and works a treat through ifconfig.

> Altq is compiled in on this machine also, however, when not being
> used, I see the same result.
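For reference, a minimal sketch of what enabling polling on an em
interface looks like under 6.x (the interface name em0 and the HZ value
are assumptions; see polling(4) for the details):

```
# kernel config: rebuild with polling support compiled in
options DEVICE_POLLING
options HZ=1000              # a finer clock tick suits polling

# then enable it per interface, either at runtime:
#   ifconfig em0 polling
# or persistently in /etc/rc.conf (address config is illustrative):
ifconfig_em0="inet 192.0.2.1/24 polling"
```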
> I've seen many stories of 600Meg/sec+, however, up until now, I have
> not been able to accomplish it.

Hmmm, that sounds like a policy issue; 5.4 and em iperf at > 900
meg/sec here. What speed processor is driving this? I assume you're
using PCI-X everywhere.

> I have switched this back to the default. I get the same result. If
> I move the rule even 1 or 2 down in the list, traffic starts dropping
> on the http connections. I will leave it this way though.

Hmmm, that sounds more and more like a state mismatch issue. What is
your default block rule catching? It should give you an idea pretty
quickly regarding state mismatches due to overlapping rules.

I assume your 1st rule is

	block log all

If not, it should be.

> > Are all your stateful tcp rules using flags S/SA to establish
> > state?
>
> Not all of the rules are stateful, but the ones that are just use the
> "keep state" directive, they are not using S/SA. Is this the
> recommended method?

Definitely use flags S/SA: Daniel H. has recently described the
reasons why creating TCP state on anything other than S/SA is a bad
idea, especially with TCP window scaling.

> I have read many of the examples and docs, and it appears this is
> done both ways depending on where you read it.

Personally I would use flags S/SA for all stateful tcp rules.

> We have a lot of smtp traffic sometimes, so for those times, we have
> bumped up the state limit, however, at times like my testing last
> night, there were between 4000 and 5000 states, a few hundred at a
> time would be my testing.

It may be worth using something like cricket to track the number of
state table entries.

> > With nearly 400 firewall rules, I would suggest that there's scope
> > for reviewing order and the judicious use of quick to trim the
> > policy into something more manageable.
>
> Well, this is something that was inherited, and therefore is taking
> much time to fix, however, the rules will be trimmed.
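Regarding the earlier point about graphing state table entries: the
count can be pulled out of `pfctl -si` and fed to cricket, MRTG or
similar. A rough sketch below parses a canned sample of the output
(the numbers are made up; on a live box you would pipe `pfctl -si`
straight into the awk):

```shell
# Hypothetical sketch: extract the current state-table entry count
# from pfctl -si style output. Sample text stands in for a live box.
sample='State Table                          Total             Rate
  current entries                     4732
  searches                        19860743          225.2/s'

# the count is the third field on the "current entries" line
states=$(printf '%s\n' "$sample" | awk '/current entries/ {print $3}')
echo "$states"
```

On a real firewall, replace the sample with `pfctl -si` and have the
grapher poll the script every few minutes.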
> I've already made extensive use of tables, and re-ordered/trimmed
> certain unnecessary things.

If you haven't done so already, start using tags in conjunction with
generic egress rules on each interface. This will reduce the size of
the rulebase a lot.

Greg
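P.S. A rough pf.conf sketch pulling the suggestions above together;
the macro and tag names are made up for illustration:

```
# first rule: log everything that falls through the policy
block log all

# create TCP state on the initial SYN only
pass in quick on $ext_if proto tcp from any to $mail_srv port 25 \
	flags S/SA keep state

# classify traffic on ingress with a tag, then let one generic
# egress rule per interface pass anything carrying that tag
pass in  quick on $int_if all tag LAN_OUT keep state
pass out quick on $ext_if tagged LAN_OUT keep state
```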