From: Erich Weiler <weiler@soe.ucsc.edu>
To: freebsd-net@freebsd.org
Date: Sun, 13 Nov 2011 13:16:39 -0800
Subject: Arg. TCP slow start killing me.

So, I have a FreeBSD 8.1 box that I'm using as a firewall (pfSense 2.0
really, which uses 8.1 as a base). I'm filtering packets inbound, and I'm
seeing the typical sawtooth pattern: I get high bandwidth, then a packet
drops somewhere, the TCP connections back off a *lot*, then slowly speed
up again, then back off, and so on. These are all higher-latency WAN
connections.

I get an average of 1.5 - 2.0 Gb/s incoming, but I see it spike to about
3 Gb/s every once in a while, then drop again. I'm trying to hold that
3 Gb/s for as long as possible between drops.

Given that 8.1 does not have the more advanced TCP congestion control
algorithms like CUBIC and H-TCP that might help with this to some degree,
I'm trying to "fake it". ;)

My box has 24GB of RAM. Is there some tunable I can set that would
effectively buffer incoming packets, even though the buffers would
eventually fill up, just to "delay" the dropped-packet signal telling the
hosts on the internet to back off? Like, could I buffer 10GB of packets
in the queue before the backoff signal gets sent? Would setting
kern.ipc.nmbclusters or something similar help?

Right now I have:

loader.conf.local:
  vm.kmem_size_max=12G
  vm.kmem_size=10G

sysctl.conf:
  kern.ipc.maxsockbuf=16777216
  kern.ipc.nmbclusters=262144
  net.inet.tcp.recvbuf_max=16777216
  net.inet.tcp.recvspace=8192
  net.inet.tcp.sendbuf_max=16777216
  net.inet.tcp.sendspace=16384

I guess the goal is to keep the bandwidth high, without drop-offs, for as
long as possible, and without as many TCP backoffs on the streams.

Any help much appreciated! I'm probably missing a key point, but that's
why I'm posting to the list. ;)

cheers,
erich
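
P.S. A few notes-to-self I worked out after writing the above, in case
they save a round trip. First, some back-of-the-envelope buffer math,
assuming roughly 100 ms of RTT on these WAN paths (a guess, I haven't
actually measured it). The bandwidth-delay product is rate x RTT, so a
single 100 Mb/s stream would need about

  100,000,000 bits/s * 0.1 s = 10,000,000 bits ~= 1.25 MB

of buffer to keep its pipe full, and even the full 3 Gb/s aggregate at
100 ms is only about 37.5 MB in flight across all flows. So my 16 MB
kern.ipc.maxsockbuf / net.inet.tcp.recvbuf_max already look generous per
socket; the limits don't seem to be the bottleneck, the drops themselves
do.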
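
Second, one tunable I'm going to experiment with is the IP input queue,
since that's one place a forwarding box can drop packets under a burst.
The 2048 below is just a guess on my part, not a value I've validated:

  # raise the IP input queue limit...
  sysctl net.inet.ip.intr_queue_maxlen=2048
  # ...then watch whether this drop counter still climbs under load
  sysctl net.inet.ip.intr_queue_drops

If that counter never moves, the drops are happening somewhere else
(NIC rings, upstream, etc.) and this won't help.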
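
Third, since I brought up CUBIC and H-TCP: my understanding is that the
modular congestion control framework lands in FreeBSD 9.0, so once
pfSense rebases onto 9.x, switching algorithms should be as simple as
something like this (untested by me; module names from my reading of the
9.0 docs):

  kldload cc_htcp
  sysctl net.inet.tcp.cc.algorithm=htcp

Though I realize that only changes how TCP sessions terminating on the
box itself behave; it won't change what remote senders do on flows we're
just forwarding.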