From: David Gilbert <dgilbert@velocet.ca>
Date: Sat, 28 Jun 2003 22:23:17 -0400
To: Craig Reyenga
Cc: David Gilbert, freebsd-performance@freebsd.org
Subject: Re: Tuning Gigabit
In-Reply-To: <000901c33dd1$12268780$0200000a@fireball>

>>>>> "Craig" == Craig Reyenga writes:

>> 300 megabit is about where 32bit 33Mhz PCI maxes out.

Craig> Could you tell me a little more about your tests?  What boards,
Craig> and what configuration?

Well... first of all, a 33 MHz, 32-bit PCI bus can transfer 33M * 32
bits per second, which is just about 1 gigabit (roughly 1.06 Gbit/s) of
_total_ PCI bus bandwidth.  Consider that you're likely testing
disk->RAM->NIC, so the disk controller, RAM and NIC all share that
bandwidth; you end up with about 1/3 of it as throughput (minus bus
overhead), and 300 megabit is a good number.

There are many ways boards can get around this.  Your IDE controller
can be on a different bus.  Your RAM can be on a different bus.  If all
three are on different buses, you might get closer to your gigabit of
throughput.  You can also speed up the bus ... plain PCI can also run
at 66 MHz, and PCI-X can run at 66, 100 or 133 MHz.  You can also make
the bus wider ... many new chipsets support 64-bit slots.

Now, some boards I've tested (like the nvidia chipset) are strangely
limited to 100 megabit.  I can't explain this; it seems low no matter
how you cut it.

Our testing has been threefold:

1) Generating packets.  We test the machine's ability to generate both
   large (1500, 3000 and 9000 byte) and small (64 byte) packets.
   Large-scale generation of packets is necessary for the other tests.
   So far, some packet flood utilities from the Linux hacker camp are
   our most efficient small-packet generators; netcat on memory-cached
   objects or on /dev/zero generates our big packets.  (A rough sketch
   of this sort of small-packet generator follows after this list.)

2) Passing packets.  Primarily, we're interested in routing.  Passing
   packets, passing packets with 100k routes, and passing packets with
   hundreds of ipf accounting rules are our benchmarks.  We look at
   both small- and large-packet performance.  Packet-passing machines
   have at least two interfaces ... but sometimes 3 or 4 are tested.
   Polling is a major win in the small-packet passing race.

3) Receiving packets.  netcat is our friend again here.  Receiving
   packets doesn't appear to be the same level of challenge as
   generating or passing them.
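For the curious, here is very roughly what I mean by a small-packet
generator.  This is only an illustrative sketch, not one of the flood
tools we actually use; the target address, port and payload size are
made-up example values.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int
main(void)
{
        int s;
        char payload[64];       /* small payload; frames on the wire are a bit larger */
        struct sockaddr_in dst;
        unsigned long sent = 0;

        memset(payload, 0, sizeof(payload));
        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port = htons(9);                        /* discard port, example only */
        dst.sin_addr.s_addr = inet_addr("10.0.0.2");    /* example target */

        if ((s = socket(AF_INET, SOCK_DGRAM, 0)) < 0) {
                perror("socket");
                exit(1);
        }

        /* Blast fixed-size UDP datagrams as fast as the socket layer allows. */
        for (;;) {
                if (sendto(s, payload, sizeof(payload), 0,
                    (struct sockaddr *)&dst, sizeof(dst)) < 0)
                        perror("sendto");
                if (++sent % 1000000 == 0)
                        fprintf(stderr, "%lu packets sent\n", sent);
        }
        /* NOTREACHED */
}

A single userland sendto() loop like this generally won't reach line
rate with small frames, which is part of why we lean on the more
specialized flood utilities for that test.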
At any rate, we're clearly not testing file delivery.  We sometimes
play with file delivery as a first test ... or for other testing
reasons.

We've found several boards that corrupt packets when they pass more
than 100 megabit of traffic.  We haven't explained that one yet.

Our tests centre on routing packets (because that's what we do with our
high-performance FreeBSD boxes; all our other FreeBSD boxes "just work"
at the level of performance they have).

I would note, though, that we do have some strange datapoints from
revisiting old problems.  One of the most peculiar is the DEC tulip
chipset 4-port cards: on these cards we have only ever been able to
pass 100 megabit _per card_ ... never per port.  It would appear that
the PCI bridge on these cards is imposing some form of limitation.  We
haven't tested under any OS other than FreeBSD ... but the problem is
definitely perplexing.

Dave.

-- 
============================================================================
|David Gilbert, Velocet Communications.       | Two things can only be     |
|Mail: dgilbert@velocet.net                   | equal if and only if they  |
|http://daveg.ca                              | are precisely opposite.    |
=========================================================GLO================