From: Slawa Olhovchenkov <slw@zxy.spb.ru>
To: Lyndon Nerenberg
Cc: "Dean E. Weimer", FreeBSD Stable <freebsd-stable@freebsd.org>
Date: Tue, 20 Sep 2016 01:08:12 +0300
Subject: Re: LAGG and Jumbo Frames
Message-ID: <20160919220812.GG2960@zxy.spb.ru>
In-Reply-To: <04c9065ee4a780c6f8986d1b204c4198@dweimer.net>

On Mon, Sep 19, 2016 at 02:28:56PM -0700, Lyndon Nerenberg wrote:
> > Everything on physical Ethernet has support for it, including the LAN
> > interface of the firewall, and talks to it just fine over a single
> > interface with jumbo frames enabled.
>
> Well, before you get too carried away, try this:
>
> 1) Run a ttcp test between a pair of local hosts using the existing
>    jumbo frames (pick two that you expect high-volume traffic between).
>
> 2) Run the same test, but with the default MTU.
>
> If you don't see a very visible difference in throughput (e.g. >15%),
> it's not worth the hassle.
>
> Just as a datapoint, we're running 10-gigE off some low-end Supermicro
> boxes with 10.3-RELEASE. Using the default MTU we're getting >750 MB/s
> TCP throughput. I can't believe that you won't be able to fully saturate
> a 1 Gb/s link running the default MTU on anything with more oomph than a
> dual-core 32-bit Atom.
>
> IOW, don't micro-optimize. Life's too short ...

You may be surprised, but jumbo frames can also degrade performance for
hosts that are not directly connected, i.e. with multiple switches
between the hosts:

[hostA]=[SW1]=[SW2]=[SW3]=[hostB]

This is because the RTT of such a path is higher for jumbo frames than
for 1500-byte frames: each store-and-forward switch in the chain must
receive the entire frame before it can begin forwarding it, so the full
serialization delay is paid once per link rather than once per path.
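A rough back-of-the-envelope sketch of that store-and-forward effect (the
link speed, hop count, and function names here are illustrative, not from
the thread; four 1 Gb/s links as in the hostA-SW1-SW2-SW3-hostB diagram):

```python
def serialization_delay_us(frame_bytes, link_gbps):
    """Time to clock one frame onto the wire, in microseconds."""
    return frame_bytes * 8 / (link_gbps * 1e3)

def one_way_latency_us(frame_bytes, link_gbps, links):
    """Store-and-forward path: every link re-serializes the whole frame,
    so per-frame latency scales with both frame size and hop count."""
    return links * serialization_delay_us(frame_bytes, link_gbps)

# hostA = SW1 = SW2 = SW3 = hostB -> 4 store-and-forward links, 1 Gb/s each.
# Frame sizes include the 18-byte Ethernet header/FCS overhead.
for frame in (1518, 9018):
    print(frame, "bytes:", one_way_latency_us(frame, 1.0, 4), "us one-way")
```

With these assumed numbers a 9018-byte frame takes roughly 289 us one-way
versus about 49 us for a 1518-byte frame, i.e. the jumbo frame roughly
sextuples the serialization component of RTT across the same switch chain.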