Date:      Mon, 19 Jul 1999 01:39:53 -0700 (PDT)
From:      Vincent Poy <vince@venus.GAIANET.NET>
To:        Reinier Bezuidenhout <rbezuide@oskar.nanoteq.co.za>
Cc:        jmb@hub.FreeBSD.ORG, sthaug@nethelp.no, tim@storm.digital-rain.com, freebsd-hackers@FreeBSD.ORG
Subject:   Re: poor ethernet performance?
Message-ID:  <Pine.BSF.4.05.9907190135380.331-100000@venus.GAIANET.NET>
In-Reply-To: <199907190816.KAA22986@oskar.nanoteq.co.za>

On Mon, 19 Jul 1999, Reinier Bezuidenhout wrote:

> Hi ...
> 
> We have previously done many network performance tests for our 
> products running on FreeBSD ... 
> 
> We have found that whenever there is disk access involved, it
> is not a good idea to look at the transfer figures.  We did tests
> with ftp and it was slow (compared to memory-generated data,
> e.g. ttcp)

	Yeah, I guess all tests should be done without touching the
disk at all.

> 1. If you want to test the network speed ... use ttcp or something
>    that generates the data and doesn't read it from disk.

	ttcp works.  The only odd thing is that when I tried it in both
directions at once, the total only came to 11.xMbytes/sec, as opposed to
9.4Mbytes/sec when running in one direction only.
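
	(For the record, the guts of a memory-only test like ttcp boil
down to a timed write loop over a pattern buffer.  The sketch below is
just the idea, not ttcp's actual source; the receiver address, port,
buffer size and count are made up, and the far end needs something
accepting and discarding the data, e.g. a ttcp receiver.)

/*
 * Minimal memory-to-wire sender, ttcp-style.  Sketch of the idea only;
 * address, port, buffer size and count are all invented for illustration.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	char buf[8192];                 /* pattern buffer, never touches disk */
	struct sockaddr_in sin;
	struct timeval t0, t1;
	double secs;
	int fd, i, nbuf = 20000;        /* 20000 * 8k ~= 156 MB */

	memset(buf, 0x5a, sizeof(buf));
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_port = htons(5001);                     /* made-up port */
	sin.sin_addr.s_addr = inet_addr("10.0.0.2");    /* made-up receiver */

	fd = socket(AF_INET, SOCK_STREAM, 0);
	if (connect(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
		perror("connect");
		return (1);
	}

	gettimeofday(&t0, NULL);
	for (i = 0; i < nbuf; i++)
		write(fd, buf, sizeof(buf));            /* no read(), no disk I/O */
	gettimeofday(&t1, NULL);
	close(fd);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
	printf("%.2f MB/s written\n",
	    nbuf * sizeof(buf) / (1024.0 * 1024.0) / secs);
	return (0);
}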

> 2. When doing full-duplex and using fxp cards, stay away from X-over
>    cables ... use a full-duplex switch etc. ... the fxp cards are not
>    made to work with X-over cables (as far as I know - ala Intel spec)
>    note ... only for full-duplex tests.

	Does anyone actually use X-over cables for 100Mbps full duplex?
3Com claimed crossover cables aren't rated for 100Mbps, or something
along those lines, even when they're Cat5.  Oddly, the older Intel docs
for the Pro100B do say to connect to the switch using an X-over cable.

> We have done tests in full-duplex with non Intel cards (because we did
> not have a switch at that time :)) and with max size packets we got around
> 188.00 Mbps using the de0 driver.

	Pretty interesting.  How did you do the full duplex tests?
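
	One way I can think of to push both directions at once over a
single connection is to fork right after the connection comes up, so
each end writes and reads at the same time.  Here's a rough sketch of
how one might do it, not claiming this is how you ran yours.  The port,
buffer size and count are arbitrary; both hosts run the same program,
one started with -s and the other given the first one's address:

/*
 * Full-duplex soak sketch.  Run "fdx -s" on one box and "fdx <dotted-quad
 * of that box>" on the other; once the connection is up, each side forks:
 * the child blasts pattern buffers out while the parent reads and counts
 * what the peer is blasting back.  All constants here are made up.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BUFLEN 8192
#define NBUF   20000
#define PORT   5002

int
main(int argc, char **argv)
{
	char buf[BUFLEN];
	struct sockaddr_in sin;
	struct timeval t0, t1;
	double secs, mb;
	long nread, n;
	int fd, i, one = 1;

	if (argc != 2) {
		fprintf(stderr, "usage: fdx -s | fdx <dotted-quad>\n");
		exit(1);
	}

	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_port = htons(PORT);

	if (strcmp(argv[1], "-s") == 0) {	/* passive end */
		int s = socket(AF_INET, SOCK_STREAM, 0);
		setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
		sin.sin_addr.s_addr = INADDR_ANY;
		bind(s, (struct sockaddr *)&sin, sizeof(sin));
		listen(s, 1);
		fd = accept(s, NULL, NULL);
	} else {				/* active end */
		sin.sin_addr.s_addr = inet_addr(argv[1]);
		fd = socket(AF_INET, SOCK_STREAM, 0);
		if (connect(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
			perror("connect");
			exit(1);
		}
	}

	memset(buf, 0xa5, sizeof(buf));
	gettimeofday(&t0, NULL);

	if (fork() == 0) {			/* child: transmit side */
		for (i = 0; i < NBUF; i++)
			write(fd, buf, sizeof(buf));
		shutdown(fd, 1);		/* 1 == SHUT_WR, no more writes */
		_exit(0);
	}

	/* parent: receive side, runs while the child is still writing */
	nread = 0;
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		nread += n;
	gettimeofday(&t1, NULL);
	wait(NULL);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
	mb = nread / (1024.0 * 1024.0);
	printf("received %.1f MB in %.2f s = %.2f MB/s while also sending\n",
	    mb, secs, mb / secs);
	return (0);
}

	Each side then reports its receive rate while it is still
transmitting, so adding the two hosts' numbers gives the aggregate
both-ways figure.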


Cheers,
Vince - vince@MCESTATE.COM - vince@GAIANET.NET           ________   __ ____ 
Unix Networking Operations - FreeBSD-Real Unix for Free / / / / |  / |[__  ]
GaiaNet Corporation - M & C Estate                     / / / /  | /  | __] ]  
Beverly Hills, California USA 90210                   / / / / / |/ / | __] ]
HongKong Stars/Gravis UltraSound Mailing Lists Admin /_/_/_/_/|___/|_|[____]

> > On Sun, 18 Jul 1999, Jonathan M. Bresler wrote:
> > 
> > > > 	I guess I forgot about the overhead.  I've tested between two
> > > > FreeBSD machines using Intel Pro100+ NIC cards connected to a Cisco 2924XL
> > > > Switch Full Duplex and never seen anything close to the speeds.
> > > 
> > > 	Using netperf v2pl3 and FreeBSD 2.2.8 on a 300MHz PII with fxp
> > > cards (all from memory), I routinely get TCP_STREAM to push 94Mbps.
> > > 
> > > 	I use these machines for stressing everything else we have at work.
> > 
> > 	Hmmm, has anyone tried a full-duplex test before?  It seems
> > like the bottleneck is really the speed of the disks...






