From owner-freebsd-net@FreeBSD.ORG Mon Mar 11 00:04:08 2013
Date: Sun, 10 Mar 2013 20:04:06 -0400 (EDT)
From: Rick Macklem <rmacklem@uoguelph.ca>
To: Andre Oppermann
Cc: freebsd-net@freebsd.org, Jack Vogel, Garrett Wollman
Message-ID: <564543211.3749080.1362960246372.JavaMail.root@erie.cs.uoguelph.ca>
In-Reply-To: <513D07A4.4010503@freebsd.org>
Subject: Re: Limits on jumbo mbuf cluster allocation
List-Id: Networking and TCP/IP with FreeBSD <freebsd-net@freebsd.org>

Andre Oppermann wrote:
> On 10.03.2013 07:04, Garrett Wollman wrote:
> > < > said:
> >
> >> Yes, in the past the code
> >> was in this form, so it should work fine. Garrett, just make sure
> >> the 4K pool is large enough.
> >
> > [Andre Oppermann's patch:]
> >>>  	if (adapter->max_frame_size <= 2048)
> >>>  		adapter->rx_mbuf_sz = MCLBYTES;
> >>> -	else if (adapter->max_frame_size <= 4096)
> >>> +	else
> >>>  		adapter->rx_mbuf_sz = MJUMPAGESIZE;
> >>> -	else if (adapter->max_frame_size <= 9216)
> >>> -		adapter->rx_mbuf_sz = MJUM9BYTES;
> >>> -	else
> >>> -		adapter->rx_mbuf_sz = MJUM16BYTES;
> >
> > So I tried exactly this, and it certainly worked insofar as only 4k
> > clusters were allocated, but NFS performance went down precipitously
> > (to fewer than 100 ops/s, where normally it would be doing 2000
> > ops/s). I took a tcpdump while it was in this state, which I will
> > try to make some sense of when I get back to the office. (It wasn't
> > livelocked; in fact, the server was mostly idle, but responses would
> > take seconds rather than milliseconds -- assuming the client could
> > even successfully mount the server at all, which the Debian
> > automounter frequently refused to do.)
>
> This is very weird and unlikely to come from the 4k mbufs by
> themselves, considering they are in heavy use in the write() path.
> Such a high delay smells like an issue either in the driver dealing
> with multiple mbufs per packet or in NFS having a problem with it.
>
I am not aware of anything within the NFS server that would care. The
code simply believes the m_len field.

--> However, this is a good way to reduce server load. At 100 ops/sec,
    I'd think you shouldn't have any server resource exhaustion issues.
--> Problem solved ;-) ;-)

rick

> > I ended up reverting back to the old kernel (which I managed to lose
> > the sources for), and once I get my second server up, I will try to
> > do some more testing to see if I can identify the source of the
> > problem.
> > --
> Andre
>
> _______________________________________________
> freebsd-net@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-net
> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
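[Archive note: the cluster-size selection discussed above can be sketched as a
pair of standalone C helpers. This is an illustrative reconstruction, not the
actual driver code: the function names `rx_mbuf_sz_old`/`rx_mbuf_sz_new` are
hypothetical, and the size constants are hardcoded with the usual values from
FreeBSD's sys/param.h rather than taken from kernel headers.]

#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for FreeBSD's mbuf cluster sizes
 * (typical values on a 4K-page amd64 system). */
#define MCLBYTES	2048
#define MJUMPAGESIZE	4096		/* one page */
#define MJUM9BYTES	(9 * 1024)
#define MJUM16BYTES	(16 * 1024)

/* Before the patch: pick the smallest cluster that holds a whole frame,
 * so jumbo frames land in a single 9K or 16K cluster. */
static size_t
rx_mbuf_sz_old(size_t max_frame_size)
{
	if (max_frame_size <= 2048)
		return (MCLBYTES);
	else if (max_frame_size <= 4096)
		return (MJUMPAGESIZE);
	else if (max_frame_size <= 9216)
		return (MJUM9BYTES);
	else
		return (MJUM16BYTES);
}

/* After the patch: never allocate 9K/16K clusters; anything larger
 * than a standard cluster uses page-sized (4K) clusters, so a jumbo
 * frame is split across several mbufs in a chain. */
static size_t
rx_mbuf_sz_new(size_t max_frame_size)
{
	if (max_frame_size <= 2048)
		return (MCLBYTES);
	else
		return (MJUMPAGESIZE);
}

int
main(void)
{
	/* A 9000-byte jumbo frame: one 9K cluster before the patch,
	 * three 4K clusters (chained) after it. */
	assert(rx_mbuf_sz_old(9000) == MJUM9BYTES);
	assert(rx_mbuf_sz_new(9000) == MJUMPAGESIZE);
	/* Standard 1500-byte MTU is unaffected either way. */
	assert(rx_mbuf_sz_old(1500) == MCLBYTES);
	assert(rx_mbuf_sz_new(1500) == MCLBYTES);
	return (0);
}

[The trade-off Garrett's report exercises: page-sized clusters avoid the
physically-contiguous multi-page allocations that 9K/16K clusters need, at the
cost of handing the driver and the stack multi-mbuf chains per packet.]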