Date: Mon, 11 Oct 2004 04:52:30 -0400 (EDT)
From: Robert Watson <robert@fledge.watson.org>
To: Sean McNeil
cc: freebsd-current@freebsd.org
Subject: Re: 20% packet loss with re0 in gigE mode vs. 0% in 100BT

On Sun, 10 Oct 2004, Sean McNeil wrote:

> I do not think this is acceptable performance, but I have little
> knowledge of ethernet drivers on FreeBSD to identify issues.  I'd say
> any packet loss in the configuration I was testing would be
> unacceptable.  Where to begin?  I'll do what I can to help fix this.
>
> The tests were with UDP packets of size 1316 on a quiet network.  As
> explained in previous emails, the only network combination that has
> loss is when re0 is in gigE mode under FreeBSD.

Whether or not you can send at 1Gbps on gigabit ethernet depends quite a
bit on the traffic profile and the hardware.  For example, to reach high
packet rates with small packets, you need a 64-bit PCI ethernet card.
The packet size above is fairly large, though, so I would guess that
even with poor hardware you should readily get 500Mbps transmission
rates.

I'm not sure how much control you have over the application you're
running, but it would be interesting to know whether it is getting
ENOBUFS back from send().  That would tell us whether the output buffer
for the network interface is filling.  If you're not filling it at
100Mbps, chances are you're not filling it at 1Gbps either, but it's
worth checking; a minimal sketch of such a check is appended below.

Assuming you're not filling the send buffer, that would definitely
suggest a driver, configuration, or hardware bug.  There have recently
been a number of changes to the if_re driver to fix support for jumbo
frames, etc.  It would be interesting to know whether backing out to
earlier revisions of the if_re driver affects the problem you're seeing.
In particular, if_re.c:1.35 was the jumbo frame change, so 1.34 would be
interesting, and 1.31 is before some other related changes.  Likewise,
you could try backing out to before locking was introduced by setting
debug.mpsafenet=0 in loader.conf and then backing out to if_re.c:1.29;
the relevant commands are sketched at the end of this message.  It might
also be generally useful to set debug.mpsafenet=0 with the current
driver, to eliminate that as a possible concern.
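Here is a minimal sketch, in C, of the ENOBUFS check described above.
It assumes "sock" is an already-connected UDP socket and that "buf" and
"len" hold one of the 1316-byte payloads from the test; the function
name and error handling are illustrative only, not taken from any
particular application:

    #include <sys/types.h>
    #include <sys/socket.h>

    #include <errno.h>
    #include <stdio.h>

    /*
     * Send one datagram and report whether the interface output
     * queue has filled, i.e., send() failed with ENOBUFS.
     */
    static int
    send_one(int sock, const char *buf, size_t len)
    {
            if (send(sock, buf, len, 0) == -1) {
                    if (errno == ENOBUFS) {
                            /* Queue full: the driver isn't draining
                               packets as fast as we produce them. */
                            fprintf(stderr, "send: ENOBUFS\n");
                    } else
                            perror("send");
                    return (-1);
            }
            return (0);
    }

Counting how often a wrapper like this reports ENOBUFS at 100Mbps versus
gigE would show directly whether the interface output queue is involved.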
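For reference, backing out the driver and disabling the network locking
might look roughly like this, assuming a standard /usr/src CVS checkout
(revision numbers as discussed above):

    # Back out the if_re driver to a pre-jumbo-frame revision:
    cd /usr/src/sys/dev/re
    cvs update -r 1.34 if_re.c      # or -r 1.31, or -r 1.29

    # Disable MPSAFE network locking via /boot/loader.conf:
    debug.mpsafenet="0"

followed by a kernel rebuild and reboot in each case, of course.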
Robert N M Watson             FreeBSD Core Team, TrustedBSD Projects
robert@fledge.watson.org      Principal Research Scientist, McAfee Research