From owner-freebsd-stable@freebsd.org Mon Aug 17 11:39:27 2015
Date: Mon, 17 Aug 2015 14:39:23 +0300
From: Slawa Olhovchenkov <slw@zxy.spb.ru>
To: Daniel Braniss
Cc: FreeBSD stable, FreeBSD Net
Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance
Message-ID: <20150817113923.GK1872@zxy.spb.ru>
In-Reply-To: <197995E2-0C11-43A2-AB30-FBB0FB8CE2C5@cs.huji.ac.il>

On Mon, Aug 17, 2015 at 01:35:06PM +0300, Daniel Braniss wrote:

> > On Aug 17, 2015, at 12:41 PM, Slawa Olhovchenkov wrote:
> >
> > On Mon, Aug 17, 2015 at 10:27:41AM +0300, Daniel Braniss wrote:
> >
> >> hi,
> >> I have a host (Dell R730) with both cards, connected to an
> >> HP8200 switch at 10Gb.
> >> when writing to the same storage (netapp) this is what I get:
> >> ix0:    ~130MB/s
> >> mlxen0: ~330MB/s
> >> this is via nfs/tcp v3
> >>
> >> I can get similar (bad) performance with the mellanox if I increase
> >> the file size to 512MB.
> >
> > Looks like the Mellanox has an internal buffer for caching and does
> > ACK acceleration.
>
> whatever they are doing, it's impressive :-)
>
> >> so at face value, it seems the mlxen makes better use of resources
> >> than the intel. Any ideas how to improve the ix/intel's performance?
> >
> > Are you sure about the netapp's performance?
>
> yes, and why should it act differently if the request is coming from the
> same host? in any case, the numbers are quite consistent, since I have
> measured them from several hosts and at different times.

In any case, for 10Gb expect about 1200MB/s; I see far less. What is the
netapp's maximum performance, measured from other hosts or locally?
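For reference, the ~1200MB/s figure is just line-rate arithmetic: 10 Gbit/s is 1250 MB/s raw, and Ethernet/IP/TCP framing leaves roughly 1150-1200 MB/s of payload. A quick sketch (the overhead factor is a rough assumption for standard 1500-byte frames, not a measured value):

```python
# Rough line-rate arithmetic for a 10 Gb/s link.
# payload_factor is an assumption: Ethernet + IP + TCP headers
# consume roughly 5% of the wire with 1500-byte frames.

link_gbit = 10
raw_mb_per_s = link_gbit * 1000 / 8      # 1250.0 MB/s raw line rate
payload_factor = 0.95                    # assumed framing/header overhead
usable = raw_mb_per_s * payload_factor   # roughly 1190 MB/s of payload

print(f"raw: {raw_mb_per_s:.0f} MB/s, usable: ~{usable:.0f} MB/s")
```

Since both cards fall far short of even that usable figure, measuring raw TCP throughput on each NIC with a tool such as iperf3 would help separate a network-path problem from an NFS-stack one.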