From: Alexander Strange <astrange@ithinksw.com>
To: freebsd-performance@freebsd.org
Date: Wed, 16 Jul 2008 21:56:18 -0400
Subject: Large number of http connections immediately dropped
List-Id: Performance/tuning

We're running a rather high-load webserver on FreeBSD 7-RELEASE/amd64 with nginx, on an Intel em gigabit connection. Performance is good for our current bandwidth use (about 20Mbit and ~2000 connections/sec at the moment), but a large number of HTTP requests are being dropped immediately, before they ever reach nginx. I've seen complaints about this with earlier versions of FreeBSD - http://forum.lighttpd.net/topic/171 - but no solutions. Does anyone know what the problem could be, or anything we could do about it?
There are several other servers running earlier FreeBSD releases on i386 which don't seem to have this problem, but I haven't yet ruled out upstream hardware problems or Sandvine.

On the server:

- nginx's error log is full of "accept() failed (53: Software caused connection abort)", sometimes printing three or four of these at the same time.

- /var/log/messages is full of:

Limiting open port RST response from 441 to 200 packets/sec
Limiting open port RST response from 488 to 200 packets/sec
Limiting open port RST response from 399 to 200 packets/sec
Limiting open port RST response from 434 to 200 packets/sec
Limiting open port RST response from 308 to 200 packets/sec

I'm not sure whether that's related.

- sysctl.conf:

net.inet.tcp.tso=1
kern.ipc.somaxconn=10240
kern.ipc.nmbclusters=65536
net.inet.tcp.sendspace=65536
net.inet.tcp.recvspace=65536
net.inet.tcp.rfc1323=1
kern.ipc.maxsockbuf=262144
net.inet.tcp.blackhole=2
net.inet.udp.blackhole=1
net.inet.tcp.msl=7500
net.inet.icmp.icmplim=400
net.inet.tcp.drop_synfin=1
net.inet.tcp.icmp_may_rst=0
net.inet.tcp.fast_finwait2_recycle=1

- netstat -m:

4677/6603/11280 mbufs in use (current/cache/total)
1017/2643/3660/65536 mbuf clusters in use (current/cache/total/max)
1017/1961 mbuf+clusters out of packet secondary zone in use (current/cache)
9/514/523/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
3239K/8992K/12232K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
9204 requests for I/O initiated by sendfile
0 calls to protocol drain routines

nginx is not running any accept filters.
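[Editor's note: a few diagnostics that might narrow this down - a sketch only, using stock FreeBSD 7 tools; none of this output is from the server above, and the counter names are the standard ones.]

```shell
# Check whether the listen queue is overflowing, which would point at
# the somaxconn/backlog settings rather than the network; the counter
# name matches stock FreeBSD "netstat -s" output.
netstat -s -p tcp | grep -i "listen queue"

# Per-socket listen queue depths (-L lists listen queues).
netstat -Lan

# The "Limiting open port RST response" messages come from the kernel's
# ICMP/RST bandwidth limiter, whose threshold is net.inet.icmp.icmplim.
# The log lines show 200/sec (the default) even though sysctl.conf sets
# 400, so it is worth verifying the runtime value actually took effect:
sysctl net.inet.icmp.icmplim
```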
Locally, after sending an HTTP request I see a normal connection close, then one RST with sequence 1, then one or more further RSTs with sequence 2. I can post a tcpdump capture if necessary, once I've sanitized some cookies out of it.
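[Editor's note: for anyone wanting to observe the same pattern without waiting for a sanitized trace, a capture filter along these lines isolates just the RST segments. The interface name and port are assumptions, not taken from the original report.]

```shell
# Capture only RST segments to/from the web port. -n skips DNS lookups;
# -S prints absolute sequence numbers, so the "seq 1" and "seq 2" RSTs
# described above show up directly. "em0" and port 80 are assumed.
tcpdump -n -S -i em0 'tcp[tcpflags] & tcp-rst != 0 and port 80'
```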