Date:      Fri, 26 Jun 1998 17:47:28 -0500 (CDT)
From:      Chris Dillon <cdillon@wolves.k12.mo.us>
To:        Ulf Zimmermann <ulf@Alameda.net>
Cc:        Atipa <freebsd@atipa.com>, hackers@FreeBSD.ORG
Subject:   Re: Will 8 Intel EtherExpress PRO 10/100's be a problem?
Message-ID:  <Pine.BSF.3.96.980626173407.14997A-100000@duey.hs.wolves.k12.mo.us>
In-Reply-To: <19980626153112.B24252@Alameda.net>

On Fri, 26 Jun 1998, Ulf Zimmermann wrote:

> On Fri, Jun 26, 1998 at 11:03:01AM -0500, Chris Dillon wrote:
> > On Thu, 25 Jun 1998, Atipa wrote:
> > 
> > > 
> > > > I really hope -hackers is the best place for this... I didn't want to
> > > > cross-post.
> > > > 
> > > > Within the next few months, I will need to set up a router for our
> > > > internal network, tying together 7 networks, with some room to grow.  I
> > > > plan on buying a rather expensive chassis from Industrial Computer Source.
> > > > It has an interesting partially-passive backplane with a PII-233 or faster
> > > > CPU and the chipset mounted on it (the LX or BX chipset, I believe), with
> > > > everything else on a daughtercard and 9 PCI/8 ISA slots.  Something like
> > > > the model 7520K9-44H-B4 with redundant power supplies.
> > > 
> > > Cool.
> > > 
> > > > Basically my questions are:  
> > > > 
> > > > 1) Will there be any problems with using three or more host-to-PCI
> > > > bridges? 
> > > 
> > > Maybe not in the kernel, but I'd start to worry about saturating your
> > > buses. You are really bumping up against some I/O bottlenecks in my
> > > estimation.
> > 
> > I'm rather hoping that three 133MB/sec PCI buses won't have any trouble
> > passing at most about 30MB/sec worth of data each (10MB/sec per card,
> > three cards per bus).  Theoretically even one PCI bus could handle all 8
> > of those cards.. _theoretically_... :-)
> 
> Double that number; full duplex is what you usually use in routers these
> days.  I also wouldn't say a single bus is the problem, but the main PCI bus
> and the CPU will be a bottleneck.  You will definitely not be able to run 8
> cards at full speed (8 x 10MByte/sec x 2 (full duplex) = 160 MByte/sec).

Doh.. I knew that, but didn't factor it into my calculation.  Anyway, I
don't need full wire speed from these things; I think I'd be happy with
1/5th of that. :-)  I figure that if ftp.freebsd.org can push about
5MB/sec on average to thousands of FTP clients without breaking a sweat
on a PPro 200, then a PII-350 or 400 should be able to do wire speed at
least between two networks at a time.  If and when I do this, expect me
to post some benchmarks. :-)
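
Just so the numbers are explicit, here's the arithmetic I'm working from
(nothing measured, only the assumptions already in this thread):

/*
 * Back-of-the-envelope bandwidth check (my assumptions, not a benchmark):
 * 8 NICs at ~10MByte/sec in each direction; full duplex doubles the demand.
 */
#include <stdio.h>

int
main(void)
{
        int     nics = 8;
        double  mb_per_nic = 10.0;              /* ~100Mbit/sec, one way */
        double  wire_speed = nics * mb_per_nic * 2.0;   /* full duplex   */
        double  target = wire_speed / 5.0;      /* the 1/5th I'd settle for */

        printf("full wire speed: %.0f MByte/sec\n", wire_speed); /* 160 */
        printf("my target:       %.0f MByte/sec\n", target);     /*  32 */
        return (0);
}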

As for the "main PCI bus" being the bottleneck, I'm really hoping they
used three host-to-PCI bridges, and not a single host-to-PCI bridge and
two PCI-to-PCI bridges.  Even if not, I could still push about 100MB/sec
across the bus (assuming the CPU can keep up), and that's more than
enough for me.
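
And splitting that load across three buses versus one shared bus looks
roughly like this (again just arithmetic; 133MB/sec is the theoretical
32-bit/33MHz PCI peak, which is why I only count on ~100MB/sec in practice):

/*
 * Per-bus load: three cards per bus vs. everything on one shared bus.
 * 133MByte/sec is the theoretical 32-bit/33MHz PCI peak; real-world
 * throughput is lower, hence the ~100MByte/sec I'm counting on.
 */
#include <stdio.h>

int
main(void)
{
        double  per_card = 10.0 * 2.0;  /* MByte/sec, full duplex      */
        double  pci_peak = 133.0;       /* 32-bit/33MHz PCI, in theory */

        printf("three buses, 3 cards each: %.0f of %.0f MByte/sec per bus\n",
            3 * per_card, pci_peak);    /* 60 of 133 -- comfortable    */
        printf("one shared bus, 8 cards:   %.0f of %.0f MByte/sec\n",
            8 * per_card, pci_peak);    /* 160 of 133 -- over the peak */
        return (0);
}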

I imagine a Cisco of _equal price_ wouldn't even come close to the
throughput I'm going to get.  I could be wrong, of course.


> -- 
> Ulf.
> 
> ---------------------------------------------------------------------
> Ulf Zimmermann, 1525 Pacific Ave., Alameda, CA-94501, #: 510-769-2936
> Alameda Networks, Inc. | http://www.Alameda.net  | Fax#: 510-521-5073
> 



-- Chris Dillon - cdillon@wolves.k12.mo.us - cdillon@inter-linc.net
/* FreeBSD: The fastest and most stable server OS on the planet.
   For Intel x86 and compatibles (SPARC and Alpha under development)
   (http://www.freebsd.org)                                         */


