From owner-freebsd-net@FreeBSD.ORG Fri Dec 3 22:05:35 2010
From: Jack Vogel <jfvogel@gmail.com>
To: Tom Judge
Cc: freebsd-net@freebsd.org
Date: Fri, 3 Dec 2010 14:05:32 -0800
In-Reply-To: <4CF93E43.8010801@tomjudge.com>
References: <4CF93E43.8010801@tomjudge.com>
Subject: Re: igb and jumbo frames
List-Id: Networking and TCP/IP with FreeBSD
Since you're already configuring the system in a special, non-standard way
you are playing the admin, so I'd expect you to also configure memory pool
resources rather than have the driver do so. It's also going to depend on
the number of queues you have; you can reduce those manually as well.

I'm glad you're trying this out, however. The 9K cluster use is new, and
not uncontroversial either. I decided to put it in, but if problems occur,
or someone has a strong, valid-sounding argument for not using them, I
could be persuaded to take it out and just use the 2K and 4K sizes. So...
any feedback is good right now.

Jack

On Fri, Dec 3, 2010 at 11:00 AM, Tom Judge wrote:
> Hi,
>
> So I have been playing around with some new hosts I have been deploying
> (Dell R710's).
>
> The systems have a single dual-port card in them:
>
> igb0@pci0:5:0:0: class=0x020000 card=0xa04c8086 chip=0x10c98086
> rev=0x01 hdr=0x00
>     vendor     = 'Intel Corporation'
>     class      = network
>     subclass   = ethernet
>     cap 01[40] = powerspec 3  supports D0 D3  current D0
>     cap 05[50] = MSI supports 1 message, 64 bit, vector masks
>     cap 11[70] = MSI-X supports 10 messages in map 0x1c enabled
>     cap 10[a0] = PCI-Express 2 endpoint max data 256(512) link x4(x4)
> igb1@pci0:5:0:1: class=0x020000 card=0xa04c8086 chip=0x10c98086
> rev=0x01 hdr=0x00
>     vendor     = 'Intel Corporation'
>     class      = network
>     subclass   = ethernet
>     cap 01[40] = powerspec 3  supports D0 D3  current D0
>     cap 05[50] = MSI supports 1 message, 64 bit, vector masks
>     cap 11[70] = MSI-X supports 10 messages in map 0x1c enabled
>     cap 10[a0] = PCI-Express 2 endpoint max data 256(512) link x4(x4)
>
>
> Running 8.1, these cards panic the system at boot when initializing the
> jumbo MTU, so to solve this I back-ported the stable/8 driver to 8.1 and
> booted with this kernel. So far so good.
>
> However, when configuring the interfaces with an MTU of 8192 the system
> is unable to allocate the required mbufs for the receive queue.
>
> I believe the message was from here:
> http://fxr.watson.org/fxr/source/dev/e1000/if_igb.c#L1209
>
> After a little digging and playing with just one interface, I discovered
> that the default tuning for kern.ipc.nmbjumbo9 was insufficient to run a
> single interface with jumbo frames, as it seemed just the TX queue
> consumed 90% of the available 9k jumbo clusters.
>
> So my question is (well, 2 questions really):
>
> 1) Should igb be auto-tuning kern.ipc.nmbjumbo9 and kern.ipc.nmbclusters
> up to suit its needs?
>
> 2) Should this be documented in igb(4)?
>
> Tom
>
> --
> TJU13-ARIN
>
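[For the archive: the inspection and manual tuning both posters refer to
can be sketched from userland. This is a minimal illustration, not advice
from the thread; the sysctl names are the standard FreeBSD mbuf-pool
knobs, but the numeric values below are made up for the example, and the
hw.igb.num_queues tunable name should be checked against the igb(4)
manpage for your driver version.]

```shell
# Check how close the 9k jumbo cluster pool is to exhaustion
# (netstat -m reports "current/cache/total/max" per cluster size).
netstat -m

# Current limits on the relevant zones.
sysctl kern.ipc.nmbjumbo9
sysctl kern.ipc.nmbclusters

# Raise the limits at runtime (values are illustrative only --
# size the 9k pool to roughly queues * RX ring size per interface,
# plus headroom for in-flight packets).
sysctl kern.ipc.nmbjumbo9=32768
sysctl kern.ipc.nmbclusters=131072

# Or make the tuning persistent across reboots in /boot/loader.conf:
#   kern.ipc.nmbjumbo9="32768"
#   kern.ipc.nmbclusters="131072"
# Reducing the per-NIC queue count (as Jack suggests) shrinks demand;
# the loader tunable for that in later igb(4) versions is:
#   hw.igb.num_queues="2"
```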