From owner-freebsd-current@FreeBSD.ORG Fri Jul  4 10:33:24 2014
Date: Fri, 4 Jul 2014 14:33:15 +0400
From: Slawa Olhovchenkov <slw@zxy.spb.ru>
To: Luigi Rizzo
Cc: Kevin Oberman, Sreenivasa Honnur, FreeBSD Current, Nikolay Denev
Subject: Re: FreeBSD iscsi target
Message-ID: <20140704103315.GW5102@zxy.spb.ru>
References: <20140702112609.GA85758@zxy.spb.ru> <20140702203603.GO5102@zxy.spb.ru> <20140703091321.GP5102@zxy.spb.ru> <20140704101626.GB58753@zxy.spb.ru>
List-Id: Discussions about the use of FreeBSD-current

On Fri, Jul 04, 2014 at 12:25:35PM +0200, Luigi Rizzo wrote:

> On Fri, Jul 4, 2014 at 12:16 PM, Slawa Olhovchenkov wrote:
> >
> > On Thu, Jul 03, 2014 at 08:39:42PM -0700, Kevin Oberman wrote:
> > >
> > > > > In real world "Reality is quite different than it actually is."
> > > > > http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-6500-series-switches/white_paper_c11-696669.html
> > > >
> > > > See "Packet Path Theory of Operation. Ingress Mode".
> > >
> > > Yep. It is really a crappy LAGG (fixed three-tuple hash... yuck!) and is
> > > really nothing but 4 x 10G Ethernet ports using a 40G PHY in the 4x10G
> > > form.
> > >
> > > Note that they don't make any claim of 802.3ba compliance. It only states
> > > that "40 Gigabit Ethernet is now part of the IEEE 802.3ba standard." So it
> > > is, but this device almost certainly predates the completion of the
> > > standard, to get a product for which there was great demand. It's a data
> > > center product, and for the typical case of large numbers of small flows
> > > it should do the trick. It probably does not interoperate with true
> > > 802.3ba hardware, either.
> > >
> > > My boss at the time I retired last November was on the committee that
> > > wrote 802.3ba. He would be a good authority on whether the standard has
> > > any vague wording that would allow this, but he retired five months after
> > > I did and I have no contact information for him. But I'm pretty sure that
> > > there is no way this is legitimate 40G Ethernet.
> >
> > 802.3ba describes only the endpoints of the Ethernet link. The ASICs and
> > the internal implementation details of NICs, switches, and fabrics are
> > out of the standard's scope. The bottleneck can be at any point in the
> > packet path. The first netmap papers demonstrated that a NIC could not
> > saturate 10G with a single stream of 64-byte packets -- multiple transmit
> > rings were needed.
>
> That was actually just a configuration issue which has since been
> resolved. The 82599 can do 14.88 Mpps on a single ring (and is the only
> 10G NIC I have encountered that can do so).

Thanks for the clarification.

> Besides, performance with short packets has nothing to do with the case
> you were discussing, namely throughput for a single large flow.
That was only an illustration of a hardware limitation. Performance can be
limited not only by bandwidth, but also by interrupt/pps rate (per flow).

> > I think we need a general rule: a single-flow transfer can hit a
> > performance limitation.
>
> This is neither useful nor restricted to a single flow.
> Everything "can" underperform depending on the hw/sw configuration,
> but not necessarily has to.

Yes. And estimating against an ideal hw/sw configuration and environment
is a bad thing.