From owner-freebsd-security  Sat Aug 21 23:51:42 1999
Delivered-To: freebsd-security@freebsd.org
Received: from gndrsh.dnsmgr.net (GndRsh.dnsmgr.net [198.145.92.4])
	by hub.freebsd.org (Postfix) with ESMTP id C0CA015009
	for ; Sat, 21 Aug 1999 23:51:36 -0700 (PDT)
	(envelope-from freebsd@gndrsh.dnsmgr.net)
Received: (from freebsd@localhost)
	by gndrsh.dnsmgr.net (8.9.3/8.9.3) id XAA31700;
	Sat, 21 Aug 1999 23:49:10 -0700 (PDT)
	(envelope-from freebsd)
From: "Rodney W. Grimes"
Message-Id: <199908220649.XAA31700@gndrsh.dnsmgr.net>
Subject: Re: multiple machines in the same network
In-Reply-To: from Chris Dillon at "Aug 22, 1999 01:34:47 am"
To: cdillon@wolves.k12.mo.us (Chris Dillon)
Date: Sat, 21 Aug 1999 23:49:10 -0700 (PDT)
Cc: wes@softweyr.com (Wes Peters), cliff@steam.com (Cliff Skolnick),
	service_account@yahoo.com (jay d), yurtesen@ispro.net.tr (Evren Yurtesen),
	freebsd-security@FreeBSD.ORG
X-Mailer: ELM [version 2.4ME+ PL54 (25)]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: owner-freebsd-security@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.org

> On Sat, 21 Aug 1999, Wes Peters wrote:
>
> > You obviously didn't follow the links.  The HP ProCurve I mentioned
> > is $1880 for 40 switched 10/100 ports with layer 3 functionality
> > and VLAN support.  That's $47 per port, much lower than your
> > $250/port, with a LOT more performance as well.  The Tolly Group
> > recently tested it and found it capable of sustaining full wire
> > speed on all 40 ports.  I'll just bet your PCI-bus box isn't going
> > to hit 4 Gbps throughput.
>
> I noticed the only "L3 support" on the spec sheets of the 4000M and
> 8000M is IGMP snooping to control multicast traffic, and "protocol
> filtering" only on the 8000M.  Nothing close to IP routing, though
> (not that you said it did, specifically; just clarifying).  When the
> Tolly Group said it could "sustain full wire speed on all 40 ports",
> was that tested one port at a time or all at once?  My math isn't
> quite warped enough to allow 40 100Mbit/FD ports to all be saturated
> with only a 3.8Gbit backplane, unless local switching occurs on each
> of the port modules, and even then the "throughput test" would have
> to take that into account and not try to move too much data across
> the backplane.

You're making a common mistake here.  When an ``ALL PORTS FULL LOAD''
test is done, if you have 40 ports all being sent data at 100Mb/sec,
that data has to come back out on 40 ports someplace, so you only
need 4Gbit/sec of backplane to do it.  That's 4Gbit of data in, 4Gbit
across the backplane, and 4Gbit back out of the box.  Maybe a drawing
would help:

rxpair of port 1 >  +---------+  > txpair of port n
rxpair of port 2 >  |         |  ....
rxpair of port 3 >  | Fabric  |  > txpair of port 3
       ...          |         |  > txpair of port 2
rxpair of port n >  +---------+  > txpair of port 1

As you can see, the Fabric only has to handle 40 x 100Mb/s to keep
all 40 ports busy at full duplex.  The 3.8Gb/s spec comes up a little
short, but only by 2 ports... and it had better be darned efficient
as far as overhead goes...

Allowing the port cards to short-circuit bridge (and every switch
chip set I have looked at does this) makes it easy to pass this test;
in fact you can do it with 0 load on the backplane.  My drawing above
tends to put the maximal load on a switch's backplane, but unless the
vendor tells you exactly how they tested, the benchmark is like any
other benchmark without all the nitty-gritty details: total sales and
marketing propaganda.
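If you want to check the arithmetic, here is a quick back-of-the-
envelope model in Python.  This is a sketch only: PORTS and the
3.8Gb/s figure are the numbers quoted above, and local_fraction is a
hypothetical knob for short-circuit bridging, not anything from HP's
spec sheet.

    # Back-of-the-envelope fabric-load model for an "all ports, full
    # load" switch test.  PORTS and FABRIC_GBPS are the numbers quoted
    # in this thread; local_fraction is a hypothetical parameter.

    PORTS = 40              # 10/100 ports on the box
    PORT_RATE_GBPS = 0.1    # 100Mb/s per port, each direction
    FABRIC_GBPS = 3.8       # quoted backplane spec

    def fabric_load(local_fraction=0.0):
        """Backplane load when every port receives at wire speed.
        Each bit crosses the fabric once (rx on one port, tx on
        another), so full duplex on all ports needs PORTS x PORT_RATE,
        not twice that.  Frames bridged locally on a port module
        (short-circuit bridging) never touch the backplane at all."""
        total_in = PORTS * PORT_RATE_GBPS         # 4.0 Gb/s into the box
        return total_in * (1.0 - local_fraction)  # what crosses the fabric

    worst = fabric_load(0.0)   # every frame crosses the backplane
    short = (worst - FABRIC_GBPS) / PORT_RATE_GBPS
    print(f"worst case: {worst:.1f} Gb/s, {short:.0f} ports over spec")
    print(f"all-local:  {fabric_load(1.0):.1f} Gb/s on the backplane")

Run it and you get 4.0Gb/s worst case (2 ports over the 3.8Gb/s spec)
and 0 backplane load when everything bridges locally, which is exactly
why the test conditions matter.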
> You may also notice that the HP ProCurve 9304M and 9308M Routing
> Switches (these DO have IP/IPX routing, but they certainly aren't
> cheap... nice kit, BTW) bear an uncanny resemblance in looks, specs,
> and a digit of their model name to the Foundry Networks BigIron 4000
> and 8000, respectively. :-)

-- 
Rod Grimes - KD7CAX - (RWG25)                  rgrimes@gndrsh.dnsmgr.net


To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-security" in the body of the message