From: Barney Cordoba <barney_cordoba@yahoo.com>
Date: Sat, 1 Mar 2008 06:23:40 -0800 (PST)
To: net@freebsd.org
Subject: Re: FBSD 1GBit router?
List-Id: Networking and TCP/IP with FreeBSD

--- Ingo Flaschberger wrote:

> >> I have a 1.2Ghz Pentium-M appliance, with 4x 32bit, 33MHz PCI Intel
> >> e1000 cards. With maximum tuning I can "route" ~400mbps with big
> >> packets and ~80mbps with 64byte packets.
> >> That's around 100kpps, which is not bad for a PCI architecture.
> >>
> >> To reach higher bandwidths, better busses are needed. PCI-Express
> >> cards are currently the best choice. One dedicated PCI-Express lane
> >> (1.25gbps) has more bandwidth than a whole 32bit, 33MHz PCI bus.
> >
> > Like you say, routing 400 Mb/s is close to the max of the PCI bus,
> > which has a theoretical max of 33*4*8 ~ 1Gbps. Now routing is 500Mb/s
> > in, 500Mb/s out. So you are within 80% of the bus max, not counting
> > memory access and other overhead.
>
> Yes.
>
> > PCI Express will give you a bus per PCI-E device into a central hub,
> > thus upping the limit to the speed of the FrontSideBus in Intel
> > architectures, which at the moment is a lot higher than what a single
> > PCI bus does.
>
> That's why my next router will be based on this box:
> http://www.axiomtek.com/products/ViewProduct.asp?view=429
>
> Hopefully there will be direct memory-bus-connected NICs in the future
> (HyperTransport-connected NICs).
>
> > What it does not explain is why you can only get 80Mb/s with 64byte
> > packets, which would suggest other bottlenecks than just the bus.

To clarify this, you seem to leave out PCI-X, which is a ~8Gb/s bus and is
certainly able to route or bridge a gigabit of traffic. PCIe x4 has an
unencoded data rate of 8Gb/s per direction (16Gb/s aggregate), and PCIe x8
doubles that, which is more than enough for gigabit and workable for 10gig.
PCI can BURST to 1Gb/s and PCI-X can BURST to 8Gb/s, but bursts are
limited, so it's not possible to sustain full bandwidth. The more devices
on the bus, the less throughput you'll get due to contention.

A more limiting factor for routing is packets per second, not raw
throughput. It's foolhardy to use a NIC that doesn't have enough bandwidth,
so "usually" the limiting factor is the CPU's ability to process the
packets regardless of their size. We bridged 1 million pps on a FreeBSD 4.x
machine with a 2.8GHz Opteron and PCI-X Intel cards.
A 7.0 system can do about 20% less (of course, you don't lose the keyboard
on a 7.0 system as you do on the 4.x system!). So I'd assume a 3GHz Xeon
could likely do close to 1 million pps on a 7.0 system. Routing will be a
bit slower than bridging.

Also be advised that implementation is an issue with bus throughput. I've
tested systems with both PCIe and PCI-X, and the PCI-X gave higher
throughput even though PCIe is theoretically faster. Motherboard/chipset
design is a factor in how much of the bus's capacity you can actually
utilize.

Barney