From owner-freebsd-net@FreeBSD.ORG Fri Sep 17 11:05:01 2004
From: Marko Zec <zec@icir.org>
To: freebsd-net@freebsd.org, donatas
Date: Fri, 17 Sep 2004 13:02:30 +0200
Subject: Re: ng_one2many - very slow
Message-Id: <200409171302.30120.zec@icir.org>
In-Reply-To: <030a01c49c9f$7c215970$f2f109d9@donatas>
List-Id: Networking and TCP/IP with FreeBSD

On Friday 17 September 2004 12:17, donatas wrote:
> Hello,
>
> we need a 400Mbit link between two Intel machines (Xeon 2.4, RAID,
> 512 DDR, 2 em ports (1000Mbit), 2 fxp ports (100Mbit))
>
> ....
>
> In truth, we've tested a direct link between the em adapters in
> gigabit mode, and using TCP packets 850Mbit throughput was achieved,
> and nearly 1Gbit with UDP packets.
>
> As you can see, the one2many test results aren't even close to
> 400Mbit. Is it possible that em and fxp cannot work together, or
> something like that? What can you suggest to solve this small
> problem?

Perhaps TCP packets are arriving out of order for some reason
(interrupt coalescing etc.),
which can be _bad_ for TCP throughput. What kind of CPU load are you
observing on those machines when testing a single Gbit link versus a
4*100M bundle?

Marko
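As a rough illustration of why per-packet round-robin striping across links of unequal latency can reorder a TCP stream, here is a toy simulation sketch. The function name, the delay values, and the simple fixed-gap sender model are all made up for illustration; this is not how ng_one2many is implemented, just a sketch of the reordering effect itself:

```python
def arrival_order(num_packets, link_delays, pkt_gap=1.0):
    """Stripe packets round-robin over parallel links and return the
    sequence numbers in the order they arrive at the receiver.

    link_delays[i] is the one-way latency of link i; packets leave the
    sender pkt_gap time units apart.  Toy model for illustration only.
    """
    events = []
    for seq in range(num_packets):
        link = seq % len(link_delays)          # round-robin striping
        events.append((seq * pkt_gap + link_delays[link], seq))
    events.sort()                              # receiver sees earliest first
    return [seq for _, seq in events]

# Equal link delays: arrival order matches send order.
print(arrival_order(8, [5, 5, 5, 5]))    # [0, 1, 2, 3, 4, 5, 6, 7]

# One slow link: segments overtake each other in flight.
print(arrival_order(8, [1, 9, 1, 1]))    # [0, 2, 3, 4, 6, 7, 1, 5]
```

In the second case the receiver generates duplicate ACKs for the out-of-order segments, which the sender can misread as loss and answer with a spurious fast retransmit and a reduced congestion window, so effective throughput drops well below the nominal sum of the link capacities.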