From: Matthias Kellermann <mk@adminlife.net>
Date: Wed, 23 Apr 2008 10:41:34 +0200
To: freebsd-questions@freebsd.org
Subject: kern.ipc.maxsockets and FIN_WAIT_2: No buffer space available

Hi list,

I've got a problem with full sockets on a FreeBSD 6.2 system acting as a
load balancer for a web farm. From time to time different daemons report
errors like these:

haproxy[46932]: Proxy my_proxy reached system memory limit at 83 sockets. Please check system tunables.
stunnel: LOG3[45738:139512832]: remote socket: No buffer space available (55)

netstat -m looks fine:

491/874/1365 mbufs in use (current/cache/total)
450/618/1068/25600 mbuf clusters in use (current/cache/total/max)
450/490 mbuf+clusters out of packet secondary zone in use (current/cache)
0/0/0/0 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/0 9k jumbo clusters in use (current/cache/total/max)
0/0/0/0 16k jumbo clusters in use (current/cache/total/max)
1022K/1454K/2477K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/8/6656 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
7696 calls to protocol drain routines

But this looks bad:

# sysctl kern.ipc.numopensockets
kern.ipc.numopensockets: 11301
# sysctl kern.ipc.maxsockets
kern.ipc.maxsockets: 12328

After raising kern.ipc.maxsockets to 16384 the errors disappeared, for
now. Some further digging turned up the following:

# netstat -n | grep -c FIN_WAIT_2
11156

Strange. All of these connections go to the (Debian Linux) HTTP nodes,
but I don't see why they never finish closing. On the Debian side there
are lots of sockets stuck in the LAST_ACK state.

Any ideas what could cause this and how I could solve it? Can I set a
timeout for the FIN_WAIT_2 state on the FreeBSD system, so the socket
table doesn't fill up with half-closed connections waiting for
termination?
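In case the full state distribution helps, this is roughly how I'm
tallying the TCP states on the balancer (a quick sketch of a one-liner;
it assumes the default netstat column layout, where the state is the
sixth field of a tcp4 line):

# netstat -n | awk '/^tcp4/ { print $6 }' | sort | uniq -c | sort -rn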
I also compared the number of tcp4 sockets in the netstat -n output
against kern.ipc.numopensockets, sampled at the same time, and the tcp4
count was the higher of the two (the exact commands are in the P.S.
below). I would expect it to be lower, since tcp4 sockets are only a
subset of all open sockets (kern.ipc.numopensockets should also cover
unix domain sockets, tcp6, and so on), right?

Thanks,
Matthias
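P.S. For the comparison above, this is more or less what I ran (the two
commands are not atomic, so the counts can drift a little between the
samples, but not by the margin I'm seeing):

# netstat -n | grep -c '^tcp4'
# sysctl -n kern.ipc.numopensockets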