From: Rick Macklem <rmacklem@uoguelph.ca>
To: Damien Fleuriot
Cc: Gerrit Kühn, freebsd-net@freebsd.org
Date: Fri, 26 Jun 2015 19:53:50 -0400 (EDT)
Subject: Re: NFS on 10G interface terribly slow
Message-ID: <1709150198.407064.1435362830724.JavaMail.zimbra@uoguelph.ca>
References: <20150625145238.12cf9da3b368ef0b9a30f193@aei.mpg.de>
 <623856025.328424.1435279751389.JavaMail.zimbra@uoguelph.ca>
 <20150626115943.7d0b441cda2c6cc5b817b181@aei.mpg.de>

Damien Fleuriot wrote:
> Gerrit,
>
>
> Everyone's talking about the network performance and to some extent NFS
> tuning.
> I would argue that given your iperf results, the network itself is not at
> fault.
>
In this case, I think you might be correct. However, I need to note that
NFS traffic is very different from what iperf generates, and a good result
from iperf does not imply that there isn't a network-related problem causing
NFS grief. A couple of examples:

- NFS generates TSO segments that are sometimes just under 64K in length.
  If the network interface has TSO enabled but cannot handle a list of 35
  or more transmit segments (mbufs in the list), this can cause problems.
  Systems more than about a year old could fail completely when the TSO
  segment + IP header exceeded 64K for network interfaces limited to 32
  transmit segments (32 * MCLBYTES == 64K). Also, some interfaces used
  m_collapse() to try to fix the case where the TSO segment had too many
  transmit segments in it, and this almost always failed (you need to use
  m_defrag()).
  --> The worst-case failures have been fixed by reducing the default
      maximum TSO segment size to slightly less than 64K (by the maximum
      MAC header length). However, drivers limited to fewer than 35
      transmit segments (which includes at least one of the most common
      Intel chips) still end up generating a lot of overhead by calling
      m_defrag() over and over again (with the possibility of failure if
      mbuf clusters become exhausted).
  --> To fix this properly, net device drivers need to set a field called
      if_hw_tsomaxsegcount, but if you look in -head, you won't find it
      set in many drivers. (I've posted to freebsd-net multiple times
      asking the net device driver authors to do this, but it hasn't
      happened yet.) This is usually avoided by disabling TSO.
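To make that concrete, here is a rough sketch of the two driver-side pieces.
This is not code from any real driver: the mydrv names, the softc layout,
and the 32-segment limit are made up for illustration, but
if_hw_tsomaxsegcount, m_defrag(), and the EFBIG retry are the real kernel
interfaces.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <sys/mbuf.h>
#include <machine/bus.h>
#include <net/if.h>
#include <net/if_var.h>

#define MYDRV_MAX_TX_SEGS 32          /* hardware S/G limit (example value) */

struct mydrv_softc {
        bus_dma_tag_t  tx_tag;        /* transmit DMA tag */
        bus_dmamap_t   tx_map;        /* map for the chain being loaded */
};

/*
 * At attach time: advertise the real limits so the stack never builds a
 * TSO segment with more mbufs than the hardware can take in one descriptor
 * list.  Keep one segment spare for the MAC header mbuf.
 */
static void
mydrv_set_tso_limits(struct ifnet *ifp)
{

        ifp->if_hw_tsomaxsegcount = MYDRV_MAX_TX_SEGS - 1;
        ifp->if_hw_tsomaxsegsize = PAGE_SIZE;   /* largest single segment */
}

/*
 * In the transmit path: if the DMA load fails with EFBIG (too many
 * fragments), copy the chain into the fewest possible clusters with
 * m_defrag() and retry once.  m_collapse() usually cannot shrink a
 * nearly-64K TSO chain enough, which is the failure described above.
 */
static int
mydrv_load_tx_mbuf(struct mydrv_softc *sc, struct mbuf **m_head)
{
        bus_dma_segment_t segs[MYDRV_MAX_TX_SEGS];
        struct mbuf *m;
        int error, nsegs;

        error = bus_dmamap_load_mbuf_sg(sc->tx_tag, sc->tx_map, *m_head,
            segs, &nsegs, BUS_DMA_NOWAIT);
        if (error == EFBIG) {
                m = m_defrag(*m_head, M_NOWAIT);
                if (m == NULL) {
                        m_freem(*m_head);
                        *m_head = NULL;
                        return (ENOBUFS);
                }
                *m_head = m;
                error = bus_dmamap_load_mbuf_sg(sc->tx_tag, sc->tx_map,
                    *m_head, segs, &nsegs, BUS_DMA_NOWAIT);
        }
        return (error);
}

With if_hw_tsomaxsegcount set correctly, the EFBIG path above should almost
never be taken for TSO traffic; without it, a driver with a small segment
limit can hit it on nearly every large NFS write.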
Another failure case I've seen in the past was where a network interface
would drop a packet in a stream of closely spaced packets on the receive
side while concurrently transmitting. (NFS traffic is bi-directional, and
it is common to be receiving and transmitting on a TCP socket
concurrently.)
NFS traffic is also very bursty, and that seems to cause problems for
certain network interfaces.
These can usually be worked around by reducing rsize and wsize. (Reducing
rsize and wsize also "fixes" the 64K TSO segment problem, since the TSO
segments won't be as large.)
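For anyone who just wants to try the workarounds, they look roughly like
this (ix0, the 32K sizes, and the server path are only examples; substitute
your own interface and mount):

  # Disable TSO on the client's 10G interface:
  ifconfig ix0 -tso

  # Or mount with smaller I/O sizes, so the TSO chains stay well under 64K:
  mount -t nfs -o rsize=32768,wsize=32768 server:/export /mnt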
There are also issues w.r.t. exhaustion of kernel address space (the area
used for mbuf cluster mapping) when jumbo packets are used, since that
results in the allocation of mbuf clusters of multiple sizes.

I think you can see that not all of these will be evident from iperf
results.

rick

> In your first post I see no information regarding the local performance
> of your disks, without NFS that is.
>
> You may want to look into that first and ensure you get good read and
> write results on the Solaris box, before trying to fix that which might
> not be at fault.
> Perhaps your NFS implementation is already giving you the maximum speed
> the disks can achieve, or close enough.
>
> You may also want to compare the results with another NFS client to the
> Oracle server, say, god forbid, a *nux box for example.
>
>
> On 26 June 2015 at 11:59, Gerrit Kühn wrote:
>
> > On Thu, 25 Jun 2015 20:49:11 -0400 (EDT) Rick Macklem
> > wrote about Re: NFS on 10G interface terribly slow:
> >
> >
> > RM> Recent commits to stable/10 (not in 10.1) done by Alexander Motin
> > RM> (mav@) might help w.r.t. write performance (it avoids large writes
> > RM> doing synchronous writes when the wcommitsize is exceeded). If you
> > RM> can try stable/10, that might be worth it.
> >
> > Ok, I'll schedule an update then, I guess. OTOH, Scott reported that a
> > similar setup is working fine for him with 10.0 and 10.1, so there is
> > probably not much to gain. I'll try anyway...
> >
> > RM> Otherwise, the main mount option you can try is "wcommitsize",
> > RM> which you probably want to make larger.
> >
> > Hm, which size would you recommend? I cannot find anything about this
> > setting, not even what the default value would be. Is this reflected
> > in some sysctl, or how can I find out what the actual value is?
> >
> >
> > cu
> > Gerrit