Date: Thu, 25 Jun 1998 22:26:23 -0700
From: David Greenman <dg@root.com>
To: Chris Dillon <cdillon@wolves.k12.mo.us>
Cc: hackers@FreeBSD.ORG
Subject: Re: Will 8 Intel EtherExpress PRO 10/100's be a problem?
Message-ID: <199806260526.WAA06599@implode.root.com>
In-Reply-To: Your message of "Thu, 25 Jun 1998 23:32:35 CDT." <Pine.BSF.3.96.980625222746.12068B-100000@duey.hs.wolves.k12.mo.us>
>Basically my questions are:
>
>1) Will there be any problems with using three or more host-to-PCI
>bridges?
>
>2) Will there be any problems using up to 8 Intel EtherExpress Pro
>10/100's? If so, can I use a combination of those and some DEC
>21[0,1]4[0,1] cards?

   It should work, but I don't know that anyone has actually tried this.

>3) If I ever end up using natd for all of this, would there be any
>problems with it servicing those 7 networks (probably max 100 hosts per
>network)?

   Don't know the answer to that one.

>I initially thought of just getting a nice ATX rackmount case and a nice
>ASUS motherboard and using some of those ZNYX 4-port fast-ethernet cards.
>A few reasons I like the above idea better are that the support for the
>Intel cards is apparently better, and that replacing bad NICs would be
>simple and inexpensive. If I DO end up going the ZNYX route, are there
>any known problems with those 4-port cards? I'd need two of them, of
>course, and the motherboard would most likely have an Intel card built
>onto it also. Maybe I'll even eventually throw an ETInc sync serial card
>in there for my T1 and use our Cisco 2514 elsewhere.

   Considering all of the compatibility problems with the 'de' driver, I'd
stick with the Intels.
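   For what it's worth, bringing up eight of those cards in /etc/rc.conf
would look something like the sketch below. The interface units and
addresses are placeholders made up for illustration, not a tested
configuration:

      # One fxp unit per EtherExpress Pro 10/100; a single "device fxp0"
      # line in the kernel config should cover all of the PCI-probed units.
      network_interfaces="lo0 fxp0 fxp1 fxp2 fxp3 fxp4 fxp5 fxp6 fxp7"
      ifconfig_fxp0="inet 192.168.0.1 netmask 255.255.255.0"
      ifconfig_fxp1="inet 192.168.1.1 netmask 255.255.255.0"
      ifconfig_fxp2="inet 192.168.2.1 netmask 255.255.255.0"
      ifconfig_fxp3="inet 192.168.3.1 netmask 255.255.255.0"
      ifconfig_fxp4="inet 192.168.4.1 netmask 255.255.255.0"
      ifconfig_fxp5="inet 192.168.5.1 netmask 255.255.255.0"
      ifconfig_fxp6="inet 192.168.6.1 netmask 255.255.255.0"
      ifconfig_fxp7="inet 192.168.7.1 netmask 255.255.255.0"
      gateway_enable="YES"    # forward packets between the segments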
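   And if natd does end up in the picture, the usual divert-socket recipe
ought to apply regardless of how many inside networks there are, since
natd only has to run on the outside interface. A sketch, assuming the
kernel is built with IPFIREWALL and IPDIVERT and that fxp0 is the
interface facing the outside world:

      # Alias only unregistered (RFC 1918) source addresses leaving fxp0
      /sbin/natd -u -n fxp0
      # Push traffic crossing fxp0 through the natd divert socket
      /sbin/ipfw add divert natd all from any to any via fxp0
      # Let everything else through (the default policy is deny)
      /sbin/ipfw add pass all from any to any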
-DG

David Greenman
Co-founder/Principal Architect, The FreeBSD Project