From owner-freebsd-performance@FreeBSD.ORG Tue Jul 22 09:49:12 2003
Return-Path:
Delivered-To: freebsd-performance@freebsd.org
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 5B57637B40B for ; Tue, 22 Jul 2003 09:49:12 -0700 (PDT)
Received: from svaha.com (svaha.com [64.46.156.67]) by mx1.FreeBSD.org (Postfix) with ESMTP id 5E93043F75 for ; Tue, 22 Jul 2003 09:49:11 -0700 (PDT) (envelope-from meconlen@obfuscated.net)
Received: from obfuscated.net (internal.neutelligent.com [64.156.25.4]) (AUTH: LOGIN meconlen, TLS: TLSv1/SSLv3,256bits,AES256-SHA) by svaha.com with esmtp; Tue, 22 Jul 2003 12:49:10 -0400
Message-ID: <3F1D6B04.4010704@obfuscated.net>
Date: Tue, 22 Jul 2003 12:49:08 -0400
From: Michael Conlen
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.4) Gecko/20030624 Netscape/7.1 (ax)
X-Accept-Language: en-us, en
MIME-Version: 1.0
To: freebsd-performance@freebsd.org
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Subject: sbwait state for loaded Apache server
X-BeenThere: freebsd-performance@freebsd.org
X-Mailman-Version: 2.1.1
Precedence: list
List-Id: Performance/tuning
X-List-Received-Date: Tue, 22 Jul 2003 16:49:12 -0000

I'm working with an Apache web server running 1400 Apache processes, and
the system is pushing somewhere in the area of 50-60 Mbit/sec sustained.
The system seems to top out around 60 Mbit/sec, and at that point I see
some minor degradation of server response times; otherwise the response
times are very, very stable. Most of the Apache processes are in the
sbwait state. I've got 4 GB of memory, so I can play with some of the
values (nmbclusters has been turned up, and I never see delayed or
dropped requests for mbufs).

I don't see much about that state in my old Design & Implementation of
4.4BSD (the Red Book?), and I don't have a copy of TCP/IP Illustrated
Vol. 2 handy these days, but if memory serves, sbwait means the process
is waiting on a socket buffer resource. My guess is that these are
processes waiting for their send buffers to drain.

$ netstat -an | egrep '[0-9] 3[0-9]{4}' | wc -l
     297

seems to indicate that I've got a lot of sockets waiting to drain, and
looking at the actual output, most of these are ESTABLISHED. So my
thought is that by increasing the send queue size I could reduce this.
I've got a pretty good idea of the size of the files being sent, and my
plan was to increase the send-q size to the point where Apache can
write() the whole file and move on to the keep-alive state quickly
instead of waiting.

So the questions are:

- Would this affect actual network performance?
- Would this reduce load on the machine (a handy thing to do, but
  secondary)?
- Given c = number of connections, q = queue adjustment, and s = size of
  an mbuf, do I just need to make sure I have (c*q)/s buffers available,
  plus any fudge?
- How do I know when I need to increase the overall system buffer size
  beyond 200 MB?

--
Michael Conlen
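[For the (c*q)/s question, a back-of-the-envelope sketch of the worst case, where every socket's send buffer fills at once. The 1400-connection figure is from the post; the 64 KB per-socket sendspace, 2048-byte mbuf cluster size, and 25% fudge factor are assumptions for illustration, not measured values:]

```python
import math

def clusters_needed(connections, sendspace, cluster_size=2048, fudge=1.25):
    """Worst-case mbuf cluster demand: every connection's send
    buffer (sendspace bytes) is full at the same time, rounded up
    and padded by a fudge factor."""
    exact = connections * sendspace / cluster_size
    return math.ceil(exact * fudge)

# 1400 Apache processes, hypothetical 64 KB send buffers:
print(clusters_needed(1400, 64 * 1024))  # 56000 clusters
```

[That would suggest sizing nmbclusters to at least c * q / s plus the fudge, and it also bounds the total socket-buffer memory at roughly c * q, which is how you'd know when you are approaching the overall 200 MB ceiling.]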