From owner-freebsd-net@FreeBSD.ORG Mon Jul  4 19:15:00 2011
Date: Tue, 5 Jul 2011 02:14:51 +0700
From: Eugene Grosbein
To: Adrian Minta
Cc: freebsd-net@freebsd.org
Subject: Re: FreeBSD 8.2 and MPD5 stability issues - update
Message-ID: <20110704191451.GA12372@rdtc.ru>
In-Reply-To: <813678a855c90c49bf66c7084f88b45d.squirrel@mail.stsnet.ro>
References: <813678a855c90c49bf66c7084f88b45d.squirrel@mail.stsnet.ro>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
User-Agent: Mutt/1.4.2.3i
List-Id: Networking and TCP/IP with FreeBSD

On Mon, Jul 04, 2011 at 08:16:19PM +0300, Adrian Minta wrote:

> > It seems enough. But are you sure your L2TP client will wait
> > for the overloaded daemon to complete the connection? The change will
> > proportionally increase the responsiveness of mpd - it does not have
> > enough CPU horsepower to process requests in a timely manner.
> >
> > Eugene Grosbein
>
> Actually something else is happening.
>
> I increased the queue in msg.c:
>
> #define MSG_QUEUE_LEN 65536

You can't do this blindly, without other changes. For example, there is
MSG_QUEUE_MASK on the next line; it must be equal to MSG_QUEUE_LEN - 1,
and it effectively limits how much of this queue is usable.

> ... and in ppp.h:
>
> #define SETOVERLOAD(q) do {              \
>     int t = (q);                         \
>     if (t > 600) {                       \
>         gOverload = 100;                 \
>     } else if (t > 100) {                \
>         gOverload = (t - 100) * 2;       \
>     } else {                             \
>         gOverload = 0;                   \
>     }                                    \
> } while (0)
>
> Now the overload message is very rare, but the behaviour is the same.
> Around 5500 sessions the number doesn't grow any more, but instead
> begins to decrease.

You should study why the existing connections break: do the clients
disconnect themselves, or does the server disconnect them? You'll need
to turn on detailed logs and read mpd's documentation.

Also, there are system-wide queues for NETGRAPH messages that can
overflow, and that's a bad thing. Check them with this command:

vmstat -z | egrep 'ITEM|NetGraph'

The FAILURES column shows how many times the NETGRAPH queues have
overflowed. One may increase their LIMIT (the second column in vmstat's
output) in /boot/loader.conf:

net.graph.maxdata=65536
net.graph.maxalloc=65536

Eugene Grosbein
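
P.S. To make the MSG_QUEUE_MASK point concrete, here is a minimal
standalone sketch of the power-of-two ring-buffer idiom that such a
length/mask pair implements. The constants mirror the names from your
patch, but the rest (enqueue, dequeue, head, tail) is illustrative,
not mpd's actual source:

#include <stdio.h>

/*
 * MSG_QUEUE_LEN must be a power of two and MSG_QUEUE_MASK must be
 * exactly MSG_QUEUE_LEN - 1.  The mask is what wraps the indices,
 * so if only the length is raised, the indices still wrap at the
 * old size and most of the enlarged array is never used.
 */
#define MSG_QUEUE_LEN   65536
#define MSG_QUEUE_MASK  (MSG_QUEUE_LEN - 1)

static int      queue[MSG_QUEUE_LEN];
static unsigned head, tail;

static int
enqueue(int v)
{
        if (((head + 1) & MSG_QUEUE_MASK) == tail)
                return (-1);                    /* queue full */
        queue[head] = v;
        head = (head + 1) & MSG_QUEUE_MASK;
        return (0);
}

static int
dequeue(int *v)
{
        if (tail == head)
                return (-1);                    /* queue empty */
        *v = queue[tail];
        tail = (tail + 1) & MSG_QUEUE_MASK;
        return (0);
}

int
main(void)
{
        int v;

        enqueue(42);
        if (dequeue(&v) == 0)
                printf("dequeued %d\n", v);
        return (0);
}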
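
P.P.S. You can also check what your modified SETOVERLOAD actually does
before rebuilding mpd. The little test program below copies the macro
from your mail verbatim and prints gOverload for a few queue depths;
note that the (t - 100) * 2 branch already reaches 100 at a depth of
150 and keeps growing well past 100 until the t > 600 clamp kicks in:

#include <stdio.h>

static int gOverload;

/* Copied from the modified ppp.h quoted above. */
#define SETOVERLOAD(q) do {                     \
        int t = (q);                            \
        if (t > 600) {                          \
                gOverload = 100;                \
        } else if (t > 100) {                   \
                gOverload = (t - 100) * 2;      \
        } else {                                \
                gOverload = 0;                  \
        }                                       \
} while (0)

int
main(void)
{
        int depths[] = { 50, 100, 150, 350, 600, 1000 };
        unsigned i;

        for (i = 0; i < sizeof(depths) / sizeof(depths[0]); i++) {
                SETOVERLOAD(depths[i]);
                printf("queue depth %4d -> gOverload %4d\n",
                    depths[i], gOverload);
        }
        return (0);
}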
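
P.P.P.S. After editing /boot/loader.conf and rebooting, verify that the
new limits actually took effect. From the shell, "sysctl
net.graph.maxalloc net.graph.maxdata" is enough; if you want to check
programmatically, a sketch like the following should work, assuming the
tunables are exported as read-only integer sysctls (which I believe
they are in ng_base.c):

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
        const char *names[] = { "net.graph.maxalloc", "net.graph.maxdata" };
        unsigned i;

        for (i = 0; i < 2; i++) {
                int value;
                size_t len = sizeof(value);

                /* Read the current value of the tunable by name. */
                if (sysctlbyname(names[i], &value, &len, NULL, 0) == -1) {
                        perror(names[i]);
                        continue;
                }
                printf("%s = %d\n", names[i], value);
        }
        return (0);
}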