From: Gleb Smirnoff <glebius@FreeBSD.org>
To: .@babolo.ru
Cc: Vsevolod Lobko, rwatson@FreeBSD.org, Ruslan Ermilov, net@FreeBSD.org
Date: Mon, 28 Nov 2005 13:52:50 +0300
Subject: Re: parallelizing ipfw table
Message-ID: <20051128105250.GP25711@cell.sick.ru>
In-Reply-To: <1133174561.369095.16075.nullmailer@cicuta.babolo.ru>
List-Id: Networking and TCP/IP with FreeBSD (freebsd-net)

On Mon, Nov 28, 2005 at 01:42:41PM +0300, .@babolo.ru wrote:
.> > On Mon, Nov 28, 2005 at 08:27:32AM +0200, Ruslan Ermilov wrote:
.> > R> > Can you try my patch?
.> > R> > Since it reduces the total number of mutex operations,
.> > R> > it should be a win on UP, too.
.> > R> We're currently based on 4.x.  You can try it yourself: create
.> > R> a table with 10000 entries and with value 13.  Then write a
.> > R> ruleset with 13 rules that look up this table, so that the last
.> > R> rule looks it up with value 13, and run a benchmark.  Let me
.> > R> know what the results are with and without caching.
.> > Such a firewall setup looks unoptimized.  Why should we optimize
.> > the code for non-optimized setups?  Can't we avoid looking into one
.> > table 13 times for each packet?
.>
.> add 47400 pipe 47400 ip from table(0, 0) to any
.> add 47401 pipe 47401 ip from table(0, 1) to any
.> add 47402 pipe 47402 ip from table(0, 2) to any
.> add 47403 pipe 47403 ip from table(0, 3) to any
.> add 47404 pipe 47404 ip from table(0, 4) to any
.> add 47405 pipe 47405 ip from table(0, 5) to any
.> add 47406 pipe 47406 ip from table(0, 6) to any
.> add 47407 pipe 47407 ip from table(0, 7) to any
.> add 47408 pipe 47408 ip from table(0, 8) to any
.> add 47409 pipe 47409 ip from table(0, 9) to any
.>
.> for different traffic consumers listed in table(0)

I understand now.  Ruslan has sent me a sample setup, too.

Anyway, the current optimization is broken on SMP, because it stores
the cache in the table itself.  Parallel processing of different
packets on SMP breaks the optimization, since different instances of
ipfw_chk() trash the cached address one after another.

I have two ideas about this.  First, store the cache on the stack.
Second, make the table entry's value available to the rule.  In that
case your block can be converted to:

add N pipe $val ip from table(0) to any

where $val means the value of the matching entry in the table.

-- 
Totus tuus, Glebius.
GLEBIUS-RIPN GLEB-RIPE