From: Bruce Evans <bde@zeta.org.au>
Date: Mon, 26 Dec 2005 14:03:57 +1100 (EST)
To: Andre Oppermann
Cc: freebsd-net@FreeBSD.org, Matt Staroscik, Julian Elischer
Subject: Re: Good gigabit NIC for 4.11?
Message-ID: <20051226133519.Y20444@delplex.bde.org>
In-Reply-To: <43AD4540.472AC649@freebsd.org>
References: <1135377218.010275.56487.nullmailer@cicuta.babolo.ru> <43AC874E.1010208@elischer.org> <43AD4540.472AC649@freebsd.org>

On Sat, 24 Dec 2005, Andre Oppermann wrote:

> Julian Elischer wrote:
>>
>> "."@babolo.ru wrote:
>>
>>>> I've been Googling up a storm, but I am having trouble finding
>>>> recommendations for a good gigabit ethernet card to use with 4.11.
>>>> The Intel part numbers I found in the em readme are a few years old
>>>> now, and I can't quite determine how happy people are with other
>>>> chipsets despite my searches.
>>>>
>>>> I'm looking for a basic PCI 1-port card with jumbo frame support if
>>>> possible--I can live without it.  Either way, stability is much more
>>>> important than performance.
>>>
>>> em for PCI32x33MHz works well up to 250 Mbit/s, not more.
>>> em for PCI64x66MHz works up to about 500 Mbit/s without polling.
>
> Please specify the packet size (distribution) you've got these numbers
> from.

Results for sk and bge on PCI/33MHz, under my version of an old version
of FreeBSD with a significantly modified sk driver:

- nfs with the default packet size gives 15-30MB/s on a file system where
  local r/w gives 51-53MB/s.  Strangely, tcp is best for writing (30MB/s
  vs 19 for udp) and worst for reading (15MB/s vs 23).
- sk to bge, packet size 5, using ttcp -u: 1.1MB/s, 240 kpps (2% lost).
  Either ttcp or sk must be modified to avoid problems with ENOBUFS (see
  the sketch after this list).
- sk to bge, packet size 1500, using ttcp -u: 78MB/s, 53.4 kpps (0% lost).
- sk to bge, packet size 8192, using ttcp -u: [panic].  Apparently I got
  bad bits from -current or mismerged them.
- bge to sk, packet size 5, using ttcp -u: 1.0MB/s, 208 kpps (0% lost).
  Different problems with ENOBUFS -- unmodified ttcp spins, so the test
  always takes 100% CPU.
- bge to sk, packet size 1500, using ttcp -u: [bge hangs].
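For reference, the ttcp modification alluded to above is essentially a
back-off-and-retry on ENOBUFS instead of aborting or spinning.  A minimal
userland sketch of the idea (not the actual patch; the receiver address,
port, packet size and packet count are made up for illustration):

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	char buf[1500];			/* illustrative datagram size */
	struct sockaddr_in sin;
	int i, s;

	memset(buf, 0, sizeof(buf));
	memset(&sin, 0, sizeof(sin));
	sin.sin_len = sizeof(sin);
	sin.sin_family = AF_INET;
	sin.sin_port = htons(5001);			/* illustrative port */
	sin.sin_addr.s_addr = inet_addr("192.168.0.2");	/* hypothetical receiver */

	if ((s = socket(AF_INET, SOCK_DGRAM, 0)) == -1)
		return (1);
	for (i = 0; i < 100000; ) {
		if (sendto(s, buf, sizeof(buf), 0,
		    (struct sockaddr *)&sin, sizeof(sin)) == -1) {
			if (errno == ENOBUFS) {
				/*
				 * Interface send queue is full: back off
				 * briefly and retry instead of spinning.
				 */
				usleep(1000);
				continue;
			}
			break;		/* any other error is fatal */
		}
		i++;			/* count only packets actually sent */
	}
	close(s);
	return (0);
}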
> You have to be careful here.  Throughput and packets per second are not
> directly related.  Throughput is generally limited by good/bad hardware
> and DMA speed.  My measurements show that with decent hardware (em(4) and
> bge(4) on PCI-X/133MHz) you can easily run at full wirespeed of 1 gigabit
> per second with 1500-byte packets, as the CPU only has to handle about
> 81,000 packets per second.

PCI/33MHz apparently can't do "only" 81,000 non-small packets/sec.

> All processing like forwarding, firewalling and routing table lookups are
> done once per packet no matter how large it is.  So at wirespeed with
> 64-byte packets you've got to do this 1.488 million times per second.
> This is a bit harder and entirely CPU bound.  With some mods and
> fastforward we've got em(4) to do 714,000 packets per second on my
> Opteron 852 with PCI-X/133.  Hacking em(4) to m_free() the packets just
> before they would hit the stack, I see that the hardware is capable of
> receiving full wirespeed at 64-byte packets.

I have timestamps which show that my sk (a Yukon-mumble, whatever is on an
A7N8X-E) can't do more than the measured 240 kpps.  Once the ring buffer
is filled up, it takes about 4 usec per packet (typically 1767 usec for
480 packets) to send the packets.  I guess it spends the entire 4 usec
talking to the PCI bus and perhaps takes several cycles setting up
transactions.

Bruce
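For completeness, the wire-rate figures quoted above follow from standard
Ethernet framing overhead (8-byte preamble, 12-byte inter-frame gap, and
18 bytes of header plus FCS per frame); a rough back-of-the-envelope check:

  1500-byte payload:  1500 + 18 + 8 + 12 = 1538 bytes on the wire
                      10^9 / 8 / 1538 ~= 81,274 frames/sec
  64-byte frame:      64 + 8 + 12 = 84 bytes on the wire
                      10^9 / 8 / 84 ~= 1,488,095 frames/sec
  sk transmit timing: 1767 usec / 480 packets ~= 3.7 usec/packet, i.e.
                      a ceiling of roughly 270 kpps, consistent with the
                      measured 240 kpps.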