From owner-freebsd-hackers Sun Feb 17 22:43:42 2002
Delivered-To: freebsd-hackers@freebsd.org
Received: from avocet.prod.itd.earthlink.net (avocet.mail.pas.earthlink.net [207.217.120.50])
	by hub.freebsd.org (Postfix) with ESMTP id 2F2C437B404;
	Sun, 17 Feb 2002 22:43:36 -0800 (PST)
Received: from pool0305.cvx21-bradley.dialup.earthlink.net ([209.179.193.50] helo=mindspring.com)
	by avocet.prod.itd.earthlink.net with esmtp (Exim 3.33 #1)
	id 16chW0-0006pr-00; Sun, 17 Feb 2002 22:43:08 -0800
Message-ID: <3C70A270.117EFFB2@mindspring.com>
Date: Sun, 17 Feb 2002 22:42:56 -0800
From: Terry Lambert
X-Mailer: Mozilla 4.7 [en]C-CCK-MCD {Sony} (Win98; U)
X-Accept-Language: en
MIME-Version: 1.0
To: Luigi Rizzo
Cc: David Greenman, Roy Sigurd Karlsbakk, Dag-Erling Smorgrav,
	Thomas Hurst, hiten@uk.FreeBSD.org, hackers@FreeBSD.ORG,
	freebsd-questions@FreeBSD.ORG
Subject: Re: in-kernel HTTP Server for FreeBSD?
References: <3C703A92.2EBD3E67@mindspring.com> <20020217170929.D80718@nexus.root.com> <3C7056F9.A9F37535@mindspring.com> <20020217173008.E80718@nexus.root.com> <20020217174010.B16041@iguana.icir.org>
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Sender: owner-freebsd-hackers@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.ORG

Luigi Rizzo wrote:
> On Sun, Feb 17, 2002 at 05:30:08PM -0800, David Greenman wrote:
> > >I'll agree with your experience. At this point, the limiting
> > >factor is PCI bandwith, at least for general purpose hardware.
> >
> > I haven't found PCI bandwidth to be a problem, either, at least when
> > using gigabit ethernet NICs on 64bit and/or 66MHz PCI.
> Correct, though Terry probably meant that "general purpose hw"
> (read: cheap motherboards) usually have 32bit/33MHz PCI buses, so
> they can easily become a bottleneck, especially if they are shared
> with other peripherals such as disk controllers, video acquisition
> boards, or multiple ethernet boards.

32x33 would *definitely* bottleneck, as you say: 32x33 gives about
1 Gbit/s at burst rate, which is not sustainable.

Actually, I was talking about a Super Micro board with two 64-bit PCI
buses and two Tigon III cards, doing TCP processing to completion at
interrupt. There, the limiting factor in doing fast forwarding of
flows becomes PCI bus bandwidth, whose top end is 64x66 = ~4.2 Gbit/s
at burst rate. Almost all of that is eaten pushing data between the
cards. Now add to that the overhead of doing crypto processing on a
Broadcom part, also over the PCI bus, and there goes all your
bandwidth. The same goes for the Tyan Tiger II, also a 2x64 PCI
board.

With 4 cards (you have to go ServerWorks to get more than two 64-bit
PCI slots), even a stupid router drowns, and it barely handles 3
cards. For that to work, your front side bus has to be fairly fast,
and even then you spend all your time copying data around for routing
tables, etc.

HP has 10Gbit copper parts today, while PCI-X is looking more like
vaporware, and in any case will only double 64x66 PCI performance,
putting the cap at ~8.5 Gbit/s.

HP 10 Gigabit parts:

	http://www.hp.com/rnd/news/0500.htm

Or just search for "procurve".

-- Terry

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message