From owner-freebsd-stable@FreeBSD.ORG Tue Apr 18 14:14:28 2006
X-Original-To: freebsd-stable@freebsd.org
Delivered-To: freebsd-stable@freebsd.org
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125])
	by hub.freebsd.org (Postfix) with ESMTP id 3529B16A402
	for ; Tue, 18 Apr 2006 14:14:28 +0000 (UTC)
	(envelope-from nvass@teledomenet.gr)
Received: from matrix.teledomenet.gr (dns1.teledomenet.gr [213.142.128.1])
	by mx1.FreeBSD.org (Postfix) with ESMTP id 9221343D48
	for ; Tue, 18 Apr 2006 14:14:26 +0000 (GMT)
	(envelope-from nvass@teledomenet.gr)
Received: from [192.168.1.71] ([192.168.1.71])
	by matrix.teledomenet.gr (8.12.10/8.12.10) with ESMTP id k3IEEOdP015741;
	Tue, 18 Apr 2006 17:14:24 +0300
From: Nikos Vassiliadis <nvass@teledomenet.gr>
To: freebsd-stable@freebsd.org, Stephen.Clark@seclark.us
Date: Tue, 18 Apr 2006 17:12:41 +0300
User-Agent: KMail/1.9.1
References: <4444EE93.9050003@seclark.us>
In-Reply-To: <4444EE93.9050003@seclark.us>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-7"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
Message-Id: <200604181712.42239.nvass@teledomenet.gr>
Cc: 
Subject: Re: FreeBSD 4.9 losing mbufs!!!
X-BeenThere: freebsd-stable@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Production branch of FreeBSD source code
X-List-Received-Date: Tue, 18 Apr 2006 14:14:28 -0000

On Tuesday 18 April 2006 16:50, Stephen Clark wrote:
> Hello List,
>
> I know 4.9 is ancient history, but unfortunately we have several
> thousand sites installed. We are in the process of moving to 6.1 when
> it is released.
>
> Right now I have an immediate problem where we are going to install
> two systems

So these are new systems, yet you are going to use 4.9. Why?

> at a HQ site. Each of the 2 systems will have two gre/vpn/ospf tunnels
> to 100 remote sites in the field. The broadband will be a T3 with
> failover to dialup Actiontec DualPC modems. We want to use FreeBSD
> systems rather than put in Cisco equipment, which is what we have done
> for other large customers.
>
> The problem:
>
> I have been testing between an Athlon 64 3000+ (client) and an Athlon
> 64 X2 4800+ (server) across a dedicated 100 Mbit LAN. When I use
> nttcp, which is a round-trip TCP test, across the gre/vpn, the network
> stack on the client system (which goes to 0 percent idle) will
> eventually stop responding. In trying to track this down I find that
> net.inet.ip.intr_queue_maxlen, which is normally 50, has been reached
> (I added a sysctl to be able to look at the current queue length), but
> the queue never drains down. If I increase the maximum, things start
> working again. If I continue to hammer the client, I see the queue
> length continue to grow until it again reaches the new maximum.
> Another data point: if I don't send the data through the gre tunnel,
> but only through the vpn, I don't see this problem.
>
> I've looked at the gre code till I am blue in the face and can't see
> where mbufs are not being freed when the queue is full.
>
> If anybody could give some direction as to where to look, or how to
> better troubleshoot this problem, it would be greatly appreciated.
>
> Thanks for being such a great list,
> Steve
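
Two untested sketches against the stock 4.x sources, in case they help
narrow things down. First, a minimal way to watch the queue from userland
is a read-only sysctl next to the existing intr_queue_maxlen /
intr_queue_drops entries in sys/netinet/ip_input.c. This is only a sketch
of the idea, not the patch you already made, and the oid name
"intr_queue_len" is invented here:

/*
 * Sketch only -- not the sysctl patch mentioned above.  If added to
 * sys/netinet/ip_input.c on a 4.x kernel, next to the stock
 * intr_queue_maxlen / intr_queue_drops entries, it exposes the current
 * depth of the IP input queue.  The oid name "intr_queue_len" is
 * invented for illustration.
 */
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>
#include <net/if.h>
#include <net/if_var.h>

SYSCTL_DECL(_net_inet_ip);              /* parent node, declared elsewhere */
extern struct ifqueue ipintrq;          /* the IP software-interrupt queue */

SYSCTL_INT(_net_inet_ip, OID_AUTO, intr_queue_len, CTLFLAG_RD,
    &ipintrq.ifq_len, 0, "Current length of the IP input queue");

With that in the kernel you can watch the depth and the drop counter
together ("sysctl net.inet.ip.intr_queue_len net.inet.ip.intr_queue_drops")
and compare against "netstat -m" to see whether the mbufs are really gone
or just sitting on the queue.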
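
Second, the place a leak like this usually hides on 4.x: whatever hands
the packet to the IP netisr has to free the mbuf itself when ipintrq is
full, because IF_QFULL()/IF_ENQUEUE() free nothing on their own. The
fragment below only illustrates that canonical hand-off (the helper name
is made up, and it is not quoted from the gre code); the easy bug is
falling out of the full-queue branch without the m_freem():

/*
 * Illustration of the 4.x-era hand-off to the IP netisr -- not quoted
 * from the gre code, and the helper name is hypothetical.  The point is
 * the error path: when ipintrq is full the caller has to m_freem() the
 * mbuf itself, otherwise it is leaked.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mbuf.h>
#include <net/if.h>
#include <net/if_var.h>
#include <net/netisr.h>

extern struct ifqueue ipintrq;

static void
queue_to_ip_input(struct mbuf *m)
{
	int s;

	s = splimp();
	if (IF_QFULL(&ipintrq)) {
		IF_DROP(&ipintrq);	/* bumps ifq_drops */
		splx(s);
		m_freem(m);		/* the easy free to miss */
		return;
	}
	IF_ENQUEUE(&ipintrq, m);
	schednetisr(NETISR_IP);
	splx(s);
}

If your tree has the IF_HANDOFF() macro, it does the drop-and-free
internally on failure, so it may also be worth checking whether the
gre/IPsec path you are exercising goes through that macro or open-codes
the queueing as above.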