From owner-freebsd-net Tue May 1 9:44:25 2001
Delivered-To: freebsd-net@freebsd.org
Received: from mailin7.bigpond.com (juicer38.bigpond.com [139.134.6.95]) by hub.freebsd.org (Postfix) with ESMTP id 0DC1B37B422 for ; Tue, 1 May 2001 09:44:21 -0700 (PDT) (envelope-from sldwyer@bigpond.com)
Received: from bigpond.com ([139.134.4.54]) by mailin7.bigpond.com (Netscape Messaging Server 4.15) with SMTP id GCO1DM00.20S for ; Wed, 2 May 2001 02:48:58 +1000
Received: from WEBH-T-005-p-152-166.tmns.net.au ([203.54.152.166]) by mail6.bigpond.com (Claudes-Thoughtful-MailRouter V2.9c 11/4589624); 02 May 2001 02:43:47
Message-ID: <3AEEE8A0.EE9B2651@bigpond.com>
Date: Wed, 02 May 2001 00:47:29 +0800
From: Shaun Dwyer
X-Mailer: Mozilla 4.76 [en] (X11; U; Linux 2.2.12 i386)
X-Accept-Language: en
MIME-Version: 1.0
To: freebsd-net@freebsd.org
Subject: bridging and link bonding with netgraph
Content-Type: text/plain; charset=iso-8859-15
Content-Transfer-Encoding: 7bit
Sender: owner-freebsd-net@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.org

Hi all,

As I regularly attend LAN parties with plenty of people, I thought I'd give ng_bridge a go as opposed to the kernel bridging.

The test box was a P200 with 64MB of RAM. The network cards were:

2x Intel EtherExpress Pro/100B
1x SMC card with a DC21040 chipset
1x SMC card with a DEC/Intel 21143 chipset
2x SMC 80XX cards (ISA)

It was running 4.2-STABLE (as of about two months ago).

Bridging worked fine, i.e., I could ping from any host to any other host regardless of the segment it was on. However, when everyone tried to play games on my dedicated server (a separate box), ping times went to hell. We were seeing at best 50ms, and at worst 500ms. The pings were not constant either; they went up and down all the time. During this we were only using three of the network cards (the two Intels and the DC21040-based card). I eventually took the bridge away and just connected everything up via a hub, and ping times were back to normal.
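For anyone wanting to reproduce the setup, the ether.bridge example boils down to a handful of ngctl commands. A minimal sketch for three interfaces (the fxp0/fxp1/dc0 names here are assumptions for my cards; adjust to taste) looks something like:

```shell
# Create a bridge node hanging off the first interface's lower
# (Ethernet) hook, and give it a name we can refer to later.
ngctl mkpeer fxp0: bridge lower link0
ngctl name fxp0:lower bridge

# Connect fxp0's upper (protocol stack) hook too, so the host
# itself can still talk on the bridged segment.
ngctl connect fxp0: bridge: upper link1

# Attach the remaining interfaces' lower hooks as further links.
ngctl connect fxp1: bridge: lower link2
ngctl connect dc0: bridge: lower link3

# Bridged interfaces must see all traffic (promiscuous mode) and
# must not overwrite the source MAC of frames they forward.
for if in fxp0 fxp1 dc0; do
    ngctl msg ${if}: setpromisc 1
    ngctl msg ${if}: setautosrc 0
done
```

This is only a sketch of what the example script does, not a drop-in replacement for it; the script also handles bringing the interfaces up and tearing the node graph down again.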
For configuring netgraph for bridging, I used the example script in /usr/share/examples/netgraph/ether.bridge.

Is this to be expected when using netgraph for bridging? Or is it because of some piece of hardware I was using, or something else? I don't think processor power was an issue, because it only peaked at 20% (from systat -vmstat).

Also, I've been hearing about link bonding with netgraph using ng_one2many. I was thinking of doing this at the next large LAN I'm going to. I have 3 Intel EtherExpress Pro/100B cards in the box. I was wondering if latency would become an issue there as well, or if it should perform flawlessly. The box I am thinking of using is a 417MHz Celeron with 128MB of RAM. I'll be using it as a game server, as well as an FTP+Samba dump site.

Any suggestions for fixes, or predictions on what to expect, will be much appreciated.

Thanks,

Shaun

--
----------------------
Shaun Dwyer
sldwyer@bigpond.com
----------------------

To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-net" in the body of the message
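P.S. For the ng_one2many experiment, the setup I had in mind follows the example in the ng_one2many(4) man page. A minimal sketch for two of the fxp cards (interface names assumed; the far end of both links needs a matching configuration) is roughly:

```shell
# Bring both interfaces up first.
ifconfig fxp0 up
ifconfig fxp1 up

# Hang a one2many node off fxp0's upper (protocol) hook, and wire
# fxp0's own lower hook back in as the first "many" link.
ngctl mkpeer fxp0: one2many upper one
ngctl connect fxp0: fxp0:upper lower many0

# Wire fxp1's lower hook in as the second link.
ngctl connect fxp1: fxp0:upper lower many1

# fxp1 has no protocol stack of its own attached, so it must run
# promiscuous and must not rewrite source MAC addresses.
ngctl msg fxp1: setpromisc 1
ngctl msg fxp1: setautosrc 0

# Enable both links with round-robin transmit.
ngctl msg fxp0:upper setconfig "{ xmitAlg=1 failAlg=1 enabledLinks=[ 1 1 ] }"
```

A third card would just be another connect plus a third entry in enabledLinks.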