Date:      Tue, 6 Nov 2001 11:59:53 -0500 (EST)
From:      "Brandon D. Valentine" <bandix@looksharp.net>
To:        "Jeroen C. van Gelderen" <jeroen@vangelderen.org>
Cc:        Marcel Prisi <marcel-lists@virtua.ch>, <stable@FreeBSD.ORG>
Subject:   Re: What NIC to choose ?
Message-ID:  <20011106114853.C42904-100000@turtle.looksharp.net>
In-Reply-To: <3BE80E27.3080707@vangelderen.org>

On Tue, 6 Nov 2001, Jeroen C. van Gelderen wrote:

>It would be interesting to know which clone chipset was giving
>you trouble. It seems unfair to declare all clone chipsets to
>be unreliable unless you have had a bad experience with each
>one of them. (The converse is true also; that is why I indicated
>the exact make of the tulip clone that I have not had trouble
>with.)

Indeed that would be unfair.  Here is one particular model that I have
found to suck.  =)

Ethernet controller: LiteOn LNE100TX (rev 32).

I have both Kingston- and Linksys-branded examples of this crappy
non-standard tulip clone here.  They all report pretty much the above
line in /proc/pci under Linux.

>Then there is the issue of the quality of the clone card itself
>which -when improperly engineered- may cause failures that have
>nothing to do with the chipset, no?

That can be true, yes.  But in this case I've seen it across multiple
vendors' implementations.

>My experience with the Linksys LNE100TX 4.1 (ADMtek chipset)
>has been nothing but positive, despite the fact that it is
>a tulip clone. Which were the exact types of cards that you
>had fail?

Exactly the same card.  You may have had positive experiences with 'em,
but I have seen too many keel over dead.

>As for the Intel EtherExpress, my previous post was not so
>positive. I noted reproducible timing-related errors when they
>were deployed in quality 2U riser cards. No other card (3Com,
>LinkSys) had this problem. Using the Intel PRO 10/100 cards
>without risers gave no problems but they still have a worse
>price/performance ratio for my particular setup.

Interestingly enough, the Beowulf cluster they were used in contains 2U
cases w/ risers.  The risers are pretty crappy and have caused a lot of
problems in their own right.  However, those seem to have been resolved
for a while now, and the NICs just keep keeling over.  All further
system purchases are 1Us with onboard NICs and much more reliable 64-bit
PCI risers for Myrinet.  The Intel eepros in that cluster's
infrastructure nodes never have a problem though, and they pass more
data than any other interface in the cluster.

-- 
"Never put off until tomorrow what you can do today.  There might be a
law against it by that time."	-- /usr/games/fortune, 07/30/2001

Brandon D. Valentine <bandix at looksharp.net>


To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-stable" in the body of the message



