Date:      Tue, 23 Apr 2002 17:32:07 -0400
From:      "Fengrui Gu" <gfr@lucent.com>
To:        <gfr@lucent.com>, <freebsd-hackers@freebsd.org>, <freebsd-net@freebsd.org>
Subject:   RE: Problems with nge driver and copper GbE cards
Message-ID:  <FCENIMJAHNNHNOPKCEFEOEPACKAA.gfr@lucent.com>
In-Reply-To: <FCENIMJAHNNHNOPKCEFEIEOOCKAA.gfr@lucent.com>

There is something interesting. I accidentally started a ping command
(pinging the data-sender side) from the data-receiver side.
As you know, ping keeps running until you stop it.

I started netperf again from the data-sender side. You know what?
The link seems more stable with the additional ping session on the
receiver side, no matter whether the traffic is TCP or UDP. I got the
following numbers by running netperf.

               TCP performance
               w/ jumbo frames
Copper GbE        650 Mb/s
(SMC9462TX)
Fiber GbE         660 Mb/s
(GA 620)
(Note: in my experience, enabling or disabling jumbo frames makes about
a 20% performance difference.)

--Fengrui


-----Original Message-----
From: Fengrui Gu [mailto:gfr@lucent.com]
Sent: Tuesday, April 23, 2002 3:33 PM
To: freebsd-hackers@freebsd.org; freebsd-net@freebsd.org
Cc: gfr@lucent.com
Subject: Problems with nge driver and copper GbE cards


I am evaluating copper GbE cards for our lab.
According to previous discussion threads, it seems that the SMC9462TX has
better performance than the NetGear cards. I bought two SMC9462TX cards
and connected them with a Cat 5e crossover cable. The machines in use
are two dual PIII 733MHz boxes with 756MB of memory. I use 32bit/33MHz
PCI slots. I know a 32bit/33MHz PCI bus is slow, but the main purpose of
the evaluation is to compare the performance of copper and Fiber GbE cards.
Another two identical dual PIII 733MHz machines have NetGear GA 620 Fiber
GbE cards installed (using the ti driver), so the bus speed is not an issue.
I am using FreeBSD 4.5 with the nge driver statically linked into the kernel.

First, the nge driver prints a lot of "link up" messages.
I believe this has been reported before. I use a small program to measure
TCP performance: basically, it sends 1G or 2G of data over the network and
calculates the elapsed time and bit rate.
The link between the two copper GbE cards became unavailable after some
TCP runs.
There is no response from "ping". The kernel didn't report any error
messages except some new "link up" messages, and there is nothing abnormal
in the output of "ifconfig -a".
Based on some suggestions (master/slave mode), I issued the command
"ifconfig nge0 link0".
The link would sometimes come back, but not always. Has anyone run into
the same problem?

Second, the link seems more stable with UDP traffic, though it still
became unavailable now and then. I managed to collect some UDP
performance data:

               UDP performance (sender)
            w/o jumbo frames    w/ jumbo frames
Copper GbE      510 Mb/s           632 Mb/s
(SMC9462TX)
Fiber GbE       750 Mb/s           856 Mb/s
(GA 620)
(Note: all numbers are netperf results with the same parameters.)
Does anybody know why the copper GbE cards are so slow? Both the copper
and Fiber GbE cards are in the same kind of PCI slot in identical
machines. The extra memory on the GA 620 should not cause a 40-50%
difference, in my opinion.

Third, I had trouble setting half-duplex mode on nge0.
When I issued the command "ifconfig nge0 media 1000baseTX mediaopt
half-duplex", I got the following error message:
"ifconfig: SIOCSIFMEDIA: Device not configured"
I have no trouble issuing "ifconfig nge0 media 1000baseTX mediaopt
full-duplex".

thanks,
--Fengrui


To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-net" in the body of the message