Date:      Sat, 21 Aug 1999 23:49:10 -0700 (PDT)
From:      "Rodney W. Grimes" <freebsd@gndrsh.dnsmgr.net>
To:        cdillon@wolves.k12.mo.us (Chris Dillon)
Cc:        wes@softweyr.com (Wes Peters), cliff@steam.com (Cliff Skolnick), service_account@yahoo.com (jay d), yurtesen@ispro.net.tr (Evren Yurtesen), freebsd-security@FreeBSD.ORG
Subject:   Re: multiple machines in the same network
Message-ID:  <199908220649.XAA31700@gndrsh.dnsmgr.net>
In-Reply-To: <Pine.BSF.4.10.9908220043370.79245-100000@mail.wolves.k12.mo.us> from Chris Dillon at "Aug 22, 1999 01:34:47 am"

> On Sat, 21 Aug 1999, Wes Peters wrote:
> 
> > You obviously didn't follow the links.  The HP ProCurve I mentioned is $1880
> > for 40 switched 10/100 ports with layer 3 functionality and VLAN support.
> > That's $47 per port, much lower than your $250/port, with a LOT more
> > performance as well.  The Tolly Group recently tested it and found it capable
> > of sustaining full wire speed on all 40 ports.  I'll just bet your PCI-bus
> > box isn't going to hit 4 Gbps throughput.
> 
> I noticed the only "L3 support" from the spec sheets of the 4000M and
> 8000M is IGMP snooping to control multicast traffic, and "protocol
> filtering" only on the 8000M.  Nothing close to IP routing, however
> (not that you said it did, specifically, just clarifying).  When the
> Tolly Group said they could "sustain full wire speed on all 40 ports",
> was that testing each one at a time or all at once?  My math isn't
> quite warped enough to allow 40 100Mbit/FD ports to all be saturated
> with only a 3.8Gbit backplane, unless local switching occurs on each
> of the port modules, and even then the "throughput test" would have to
> take that into account and not try to move too much data across the
> backplane.

You're making a common mistake here.  When an ``ALL PORTS FULL LOAD''
test is done, if you have 40 ports each receiving data at 100Mbit/sec,
that data has to come back out on 40 ports someplace, so you only need
4Gbit/sec of backplane to do this.  That's 4Gbit/sec of data in,
4Gbit/sec across the backplane, and 4Gbit/sec back out of the box.

Maybe a drawing would help:

rxpair of port 1  >    +---------+   > txpair of port n
rxpair of port 2  >    |         |       ....
rxpair of port 3  >    | Fabric  |   > txpair of port 3
   ...                 |         |   > txpair of port 2
rxpair of port n  >    +---------+   > txpair of port 1


As you can see, the Fabric only has to handle 40 x 100Mb/s (4Gbit/s)
to keep all 40 ports busy at full duplex.
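
If it helps, here's a minimal sketch of that arithmetic in C.  The
port count and line rate are taken from the discussion above; the
factor-of-two "naive" figure is my illustration of the common
mistake, not a number HP or Tolly published:

/*
 * Backplane bandwidth needed for an all-ports full-load test.
 * Every bit received on one port is transmitted on exactly one
 * other port, so it crosses the fabric once, not twice.
 */
#include <stdio.h>

int
main(void)
{
	const int ports = 40;
	const double rate_mbps = 100.0;		/* per-port line rate */

	/* Naive figure: counting rx and tx separately. */
	double naive_gbps = ports * rate_mbps * 2.0 / 1000.0;

	/* Actual fabric load: each bit crosses the fabric once. */
	double actual_gbps = ports * rate_mbps / 1000.0;

	printf("naive:  %.1f Gbit/s\n", naive_gbps);	/* 8.0 */
	printf("actual: %.1f Gbit/s\n", actual_gbps);	/* 4.0 */
	return (0);
}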

The 3.8 Gb/s spec comes up a little short, but only by 2 ports
(3.8Gbit/s at 100Mbit/s per port covers 38 of the 40)... and it had
better be darned efficient as far as overhead goes...

Allowing the port cards to short circuit bridge (and every switch
chip set I have looked at does this) makes it easy to pass this
test; in fact you can do it with 0 load on the backplane.  My
drawing above tends to put the maximal load on a switch's backplane,
but unless the vendor tells you exactly how they tested, the
benchmark is like any other benchmark without all the nitty gritty
details: total sales and marketing propaganda.
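
To make that concrete, here is a hypothetical sketch of how local
switching zeroes out backplane load.  The 8-port line cards and the
neighbor-to-neighbor traffic pattern are my assumptions, not HP's or
Tolly's actual test setup; the point is only that flows whose source
and destination ports sit on the same card never cross the fabric:

/*
 * Fabric load under a "friendly" traffic pattern: each port sends
 * 100Mbit/s to its neighbor on the same card, so nothing crosses
 * the backplane and the switch passes at "full wire speed".
 */
#include <stdio.h>

#define NPORTS		40
#define PORTS_PER_CARD	8	/* assumed line-card size */

int
main(void)
{
	double fabric_mbps = 0.0;
	int src;

	for (src = 0; src < NPORTS; src++) {
		int dst = src ^ 1;	/* partner port on the same card */

		/* Only inter-card flows load the fabric. */
		if (src / PORTS_PER_CARD != dst / PORTS_PER_CARD)
			fabric_mbps += 100.0;
	}
	printf("backplane load: %.1f Mbit/s\n", fabric_mbps);	/* 0.0 */
	return (0);
}

Swap in a cross-card pattern instead (say dst = (src + 8) % 40) and
the same loop reports the full 4Gbit/s maximal-load case my drawing
shows.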

> You may also notice that the HP ProCurve 9304M and 9308M Routing
> Switches (these DO have IP/IPX routing, but they certainly aren't
> cheap... nice kit, BTW), bear an uncanny resemblance in both looks,
> specs, and a digit of their model name to the Foundry Networks BigIron
> 4000 and 8000, respectively.

:-)


-- 
Rod Grimes - KD7CAX - (RWG25)                    rgrimes@gndrsh.dnsmgr.net

