From owner-freebsd-hackers Mon Jul 19 01:42:33 1999
Delivered-To: freebsd-hackers@freebsd.org
Received: from venus.GAIANET.NET (venus.GAIANET.NET [207.211.200.51])
	by hub.freebsd.org (Postfix) with ESMTP id B120914C7F;
	Mon, 19 Jul 1999 01:42:26 -0700 (PDT)
	(envelope-from vince@venus.GAIANET.NET)
Received: from localhost (vince@localhost)
	by venus.GAIANET.NET (8.9.3/8.9.3) with ESMTP id BAA27194;
	Mon, 19 Jul 1999 01:39:53 -0700 (PDT)
	(envelope-from vince@venus.GAIANET.NET)
Date: Mon, 19 Jul 1999 01:39:53 -0700 (PDT)
From: Vincent Poy
To: Reinier Bezuidenhout
Cc: jmb@hub.FreeBSD.ORG, sthaug@nethelp.no, tim@storm.digital-rain.com,
	freebsd-hackers@FreeBSD.ORG
Subject: Re: poor ethernet performance?
In-Reply-To: <199907190816.KAA22986@oskar.nanoteq.co.za>
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-freebsd-hackers@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.ORG

On Mon, 19 Jul 1999, Reinier Bezuidenhout wrote:

> Hi ...
>
> We have previously done many network performance tests for our
> products running on FreeBSD ...
>
> We have found that whenever there is disk access involved, it
> is not a good idea to look at the transfer figures. We did tests
> with ftp and it was slow (compared to purely memory-generated data,
> e.g. ttcp).

	Yeah, I guess all tests should be done without requiring the use
of the disk.

> 1. If you want to test the network speed ... use ttcp or something
> that generates the data and doesn't read it from disk.

	ttcp works. The only problem is that when I tried it in both
directions at once, the total becomes 11.x Mbytes/sec, as opposed to
9.4 Mbytes/sec when doing it in one direction only.

> 2. When doing full-duplex and using fxp cards, stay away from X-over
> cables ... use a full-duplex switch etc. ... the fxp cards are not
> made to work with X-over cables (as far as I know - ala Intel spec)
> note ... only for full-duplex tests.
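	For anyone wanting to repeat that kind of memory-to-memory,
both-directions-at-once measurement without ttcp, here's a rough
loopback sketch (this is not ttcp itself, and the 8 MB transfer size
and 64 KB buffer are arbitrary numbers I picked for illustration):

```python
# Rough sketch of a ttcp-style memory-to-memory throughput test:
# payloads are generated in RAM, so disk speed never enters the figures.
# It pushes data both ways at once and reports the aggregate rate.
# The 8 MB transfer size and 64 KB buffer are arbitrary assumptions.
import socket
import threading
import time

BUFSIZE = 64 * 1024          # bytes per write, generated in memory
TOTAL = 8 * 1024 * 1024      # bytes to move in EACH direction

def source(sock):
    """Transmit TOTAL bytes of in-memory data (like a ttcp transmitter)."""
    payload = b"x" * BUFSIZE
    for _ in range(TOTAL // BUFSIZE):
        sock.sendall(payload)

def sink(sock):
    """Read and discard TOTAL bytes from the peer (like a ttcp receiver)."""
    got = 0
    while got < TOTAL:
        data = sock.recv(BUFSIZE)
        if not data:
            break
        got += len(data)
    return got

# Build a connected TCP pair over loopback (port 0 = pick a free port).
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
client = socket.socket()
client.connect(listener.getsockname())
server, _ = listener.accept()

results = {}
start = time.time()
threads = [
    threading.Thread(target=source, args=(client,)),
    threading.Thread(target=source, args=(server,)),
    threading.Thread(target=lambda: results.__setitem__("c", sink(client))),
]
for t in threads:
    t.start()
results["s"] = sink(server)      # main thread drains the other direction
for t in threads:
    t.join()
elapsed = time.time() - start

moved = results["c"] + results["s"]
print("%d bytes both ways in %.3f sec = %.2f Mbytes/sec aggregate"
      % (moved, elapsed, moved / elapsed / 1e6))

for s in (client, server, listener):
    s.close()
```

Over loopback this only demonstrates that the data path never touches
the disk; for a real measurement you would of course split the source
and sink halves across two machines and the actual wire.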
	Does anyone actually use X-over cables for 100Mbps full duplex,
since 3Com said crossover cables are not rated for 100Mbps or something,
even though it's Cat5? Actually, in the older Intel docs for the Pro100B,
it does say to connect to the switch using an X-over cable.

> We have done tests in full-duplex with non-Intel cards (because we did
> not have a switch at that time :)) and with max size packets we got
> around 188.00 Mbps using the de0 driver.

	Pretty interesting. How did you do the full-duplex tests?

Cheers,
Vince - vince@MCESTATE.COM - vince@GAIANET.NET           ________   __ ____
Unix Networking Operations - FreeBSD-Real Unix for Free / / / / | / |[__  ]
GaiaNet Corporation - M & C Estate                      / / / /  | /  | __] ]
Beverly Hills, California USA 90210                    / / / / / |/ / | __] ]
HongKong Stars/Gravis UltraSound Mailing Lists Admin  /_/_/_/_/|___/|_|[____]

> > On Sun, 18 Jul 1999, Jonathan M. Bresler wrote:
> >
> > > > I guess I forgot about the overhead. I've tested between two
> > > > FreeBSD machines using Intel Pro100+ NIC cards connected to a
> > > > Cisco 2924XL switch, full duplex, and never seen anything close
> > > > to those speeds.
> > >
> > > using netperf v2pl3 and FreeBSD 2.2.8 on 300MHz PII with fxp
> > > cards (all from memory), i routinely get TCP_STREAM to push 94Mbps.
> > >
> > > i use these machines for stressing everything else we have at work.
> >
> > Hmmm, has anyone tried a full-duplex test before? Since it seems
> > like the bottleneck is really the speed of the disks..

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message