From: Jason Wolfe <nitroboost@gmail.com>
To: Erich Weiler
Cc: freebsd-net@freebsd.org
Date: Sun, 13 Nov 2011 14:54:35 -0700
Subject: Re: Arg. TCP slow start killing me.
List-Id: Networking and TCP/IP with FreeBSD

Erich,

Forgot to mention that net.inet.tcp.delayed_ack can be a detriment on
high-latency paths; you might try setting it to 0 to see whether it
improves your throughput.

Jason Wolfe

On Sun, Nov 13, 2011 at 2:48 PM, Jason Wolfe wrote:
> Erich,
>
> Slow start is actually just the initial ramp-up, limited by RFC 3390
> being enabled by default (usually 3-4 packets), and it only applies for
> the first few seconds of the stream. You can effectively speed that up
> with something like this, though:
>
> net.inet.tcp.rfc3390=0
> net.inet.tcp.slowstart_flightsize=10
> net.inet.tcp.sendspace=262144
> net.inet.tcp.recvspace=262144
>
> The first two allow 10 packets to be sent before an ACK is required, and
> the second two just bump up the starting window size. With your memory
> and the massive maximums you have set, there is no reason to force
> connections to step up slowly from such a low initial size. It looks
> like the numbers you used for the initial sizes are actually the default
> increment/step size of the window growth.
>
> Also, since you mentioned latency playing a factor here, try this
> sysctl. If overruns are an issue you'll likely see a bit of an increase
> in retransmits, but it could show a sizable positive impact on the
> sawtooth:
>
> net.inet.tcp.inflight.enable=0
>
> Is it possible to upgrade to 8.2-STABLE? CUBIC has shown some really
> great improvement on my high-latency paths, a steady 10% overall
> increase in some cases.
>
> Jason Wolfe
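(For concreteness, a minimal sketch of how the tunables above might be
exercised on an 8.1 box -- the values are the illustrative ones from the
reply, not measured recommendations. Changes made with sysctl(8) take
effect immediately and revert on reboot, so they are cheap to test before
being persisted:

    # try the suggested values live; only new connections pick them up
    sysctl net.inet.tcp.rfc3390=0
    sysctl net.inet.tcp.slowstart_flightsize=10
    sysctl net.inet.tcp.sendspace=262144
    sysctl net.inet.tcp.recvspace=262144
    sysctl net.inet.tcp.inflight.enable=0
    sysctl net.inet.tcp.delayed_ack=0   # from the note at the top

    # once a combination tests well, make it survive a reboot
    printf '%s\n' \
        'net.inet.tcp.rfc3390=0' \
        'net.inet.tcp.slowstart_flightsize=10' \
        'net.inet.tcp.inflight.enable=0' >> /etc/sysctl.conf

Whether delayed_ack helps or hurts depends on the traffic mix, which is
why testing it live first is the cheap option.)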
> On Sun, Nov 13, 2011 at 2:16 PM, Erich Weiler wrote:
>
>> So, I have a FreeBSD 8.1 box that I'm using as a firewall (pfSense 2.0,
>> really, which uses 8.1 as a base). I'm filtering packets inbound, and
>> I'm seeing a typical sawtooth pattern: I get high bandwidth, then a
>> packet drops somewhere, the TCP connections back off a *lot*, then
>> slowly get faster, then back off, and so on. These are all
>> higher-latency WAN connections.
>>
>> I get an average of 1.5-2.0 Gb/s incoming, but I see it spike to around
>> 3 Gb/s every once in a while, then drop again. I'm trying to maintain
>> that 3 Gb/s for as long as possible between drops.
>>
>> Given that 8.1 does not have the more advanced TCP congestion-control
>> algorithms like CUBIC and H-TCP that might help with that to some
>> degree, I'm trying to "fake it". ;)
>>
>> My box has 24GB of RAM. Is there some tunable I can set that would
>> effectively buffer incoming packets, even though the buffers would
>> eventually fill up, just to delay the TCP dropped-packet signal telling
>> the hosts on the Internet to back off? Like, could I effectively buffer
>> 10GB of packets in the queue before the backoff signal is sent? Would
>> setting kern.ipc.nmbclusters or something similar help?
>>
>> Right now I have:
>>
>> loader.conf.local:
>>
>> vm.kmem_size_max=12G
>> vm.kmem_size=10G
>>
>> sysctl.conf:
>>
>> kern.ipc.maxsockbuf=16777216
>> kern.ipc.nmbclusters=262144
>> net.inet.tcp.recvbuf_max=16777216
>> net.inet.tcp.recvspace=8192
>> net.inet.tcp.sendbuf_max=16777216
>> net.inet.tcp.sendspace=16384
>>
>> I guess the goal is to keep the bandwidth high, without drop-offs, for
>> as long as possible and without as many TCP resets on the streams.
>>
>> Any help much appreciated! I'm probably missing a key point, but that's
>> why I'm posting to the list. ;)
>>
>> cheers,
>> erich
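(A rough sanity check on the 10GB buffering idea above, assuming an
illustrative 100 ms WAN round-trip time -- the thread never states the
actual RTT, so the numbers below are only a sketch:

    # bandwidth-delay product: bytes in flight needed to fill the pipe
    #   3 Gb/s * 100 ms = (3*10^9 / 8) * 0.1 ~= 37.5 MB aggregate
    #
    # each standard mbuf cluster holds 2 KB, so the configured
    # kern.ipc.nmbclusters=262144 caps cluster memory near 512 MB;
    # holding 10 GB of packets would need roughly
    #   10 * 2^30 / 2048 = 5242880 clusters
    #
    # watch live mbuf/cluster usage against the configured limit:
    netstat -m
    sysctl kern.ipc.nmbclusters

netstat -m shows how close current usage actually gets to the cap, which
is a quicker check than raising the limits blindly.)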