From owner-freebsd-net@FreeBSD.ORG Tue Apr 28 21:06:10 2015
Date: Tue, 28 Apr 2015 17:06:02 -0400 (EDT)
From: Rick Macklem
To: Mark Schouten
Cc: freebsd-net@FreeBSD.org, Garrett Wollman
Message-ID: <137094161.27589033.1430255162390.JavaMail.root@uoguelph.ca>
In-Reply-To: <4281350517-9417@kerio.tuxis.nl>
Subject: Re: Frequent hickups on the networking layer

Mark Schouten wrote:
> Hi,
>
> I've got a FreeBSD 10.1-RELEASE box running with iSCSI on top of ZFS.
> I've had some major issues with it where it would stop processing
> traffic for a minute or two, but that's 'fixed' by disabling TSO. I
> do have frequent iSCSI errors, which are luckily handled on the iSCSI
> layer, but they do cause an occasional error message on both the
> iSCSI client and server. I also see input errors on the FreeBSD
> server, but I'm unable to find out what those are. I do see a
> relation between the iSCSI error messages and the number of ethernet
> input errors on the server.
>
> I saw this message [1], which made me have a look at `vmstat -z`, and
> that shows me the following:
>
> vmstat -z | head -n 1; vmstat -z | sort -k 6 -t , | tail -10
> ITEM                   SIZE   LIMIT     USED     FREE         REQ       FAIL SLEEP
> zio_data_buf_94208:   94208,      0,     162,       5,     135632,         0,    0
> zio_data_buf_98304:   98304,      0,     118,       9,     101606,         0,    0
> zio_link_cache:          48,      0,       6,   30870,24853549414,         0,    0
> 8 Bucket:                64,      0,     145,    2831,  148672720,        11,    0
> 32 Bucket:              256,      0,     859,     731,  231513474,        52,    0
> mbuf_jumbo_9k:         9216, 604528,    7230,    2002,11764806459, 108298123,    0
> 64 Bucket:              512,      0,     808,     352,  147120342,  16375582,    0
> 256 Bucket:            2048,      0,     500,      50,  307051808, 189685088,    0
> vmem btag:               56,      0, 1671605, 1291509,  198933250,     36431,    0
> 128 Bucket:            1024,      0,     410,     106,   65267164,    772374,    0
>
> I am using jumbo frames. Could it be that the input errors AND my
> frequent hickups come from all those failures to allocate 9K jumbo
> mbufs?

There have been email list threads discussing how allocating 9K jumbo
mbufs will fragment the KVM (kernel virtual memory) used for mbuf
cluster allocation and cause grief. If your net device driver is one
that allocates 9K jumbo mbufs for receive instead of using a list of
smaller mbuf clusters, I'd guess this is what is biting you.
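The fragmentation problem can be illustrated with a toy simulation (this is not FreeBSD allocator code; the 4K page size and the "9K cluster needs 3 contiguous pages" framing are simplifying assumptions): once single pages are pinned all over the map by small clusters, a 3-page contiguous run for a 9K cluster can be unavailable even though plenty of individual pages remain free.

```python
# Toy model of KVM fragmentation (assumption: 4K pages, a 9K jumbo
# cluster needs a run of 3 contiguous pages, while 2K/4K clusters
# each fit in a single page). Not the real FreeBSD allocator.

def longest_free_run(pages):
    """Length of the longest run of free (False) pages."""
    best = cur = 0
    for used in pages:
        cur = 0 if used else cur + 1
        best = max(best, cur)
    return best

# 12 pages, half of them free -- but every other page is pinned
# by a long-lived small cluster, so no free run is longer than 1.
pages = [i % 2 == 0 for i in range(12)]   # True = page in use
free_pages = pages.count(False)
can_alloc_9k = longest_free_run(pages) >= 3
print(free_pages, can_alloc_9k)           # 6 pages free, yet no 9K allocation possible
```

Half the map is free, yet the 9K allocation fails; that mismatch between "free memory" and "free contiguous memory" is why the FAIL counter can climb while the system otherwise looks healthy.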
As far as I know (just from email discussion, never used them myself),
you can either stop using jumbo packets or switch to a different net
interface that doesn't allocate 9K jumbo mbufs (doing the receives of
jumbo packets into a list of smaller mbuf clusters). I remember Garrett
Wollman arguing that 9K mbuf clusters shouldn't ever be used. I've
cc'd him, in case he wants to comment.

I don't know how to increase the KVM that the allocator can use for 9K
mbuf clusters, nor do I know if that can be used as a workaround.

rick

> And can I increase the sysctls mentioned in [1] at will?
>
> Thanks
>
> [1]:
> https://lists.freebsd.org/pipermail/freebsd-questions/2013-August/252827.html
>
> With kind regards,
>
> --
> Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
> Mark Schouten | Tuxis Internet Engineering
> KvK: 61527076 | http://www.tuxis.nl/
> T: 0318 200208 | info@tuxis.nl
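For anyone wanting to watch these counters over time, the quoted `vmstat -z` output can be parsed programmatically. A minimal sketch (the parsing assumes the comma-separated "name: size, limit, used, free, req, fail, sleep" layout shown in the quoted output; the helper names are my own, not a FreeBSD API):

```python
def parse_vmstat_z(line):
    """Parse one 'vmstat -z' data line of the form
    'name: size, limit, used, free, req, fail, sleep'."""
    name, rest = line.split(":", 1)
    size, limit, used, free, req, fail, sleep = (int(f) for f in rest.split(","))
    return name.strip(), {"size": size, "limit": limit, "used": used,
                          "free": free, "req": req, "fail": fail, "sleep": sleep}

def fail_ratio(zone):
    """Fraction of allocation requests for this zone that failed."""
    return zone["fail"] / zone["req"] if zone["req"] else 0.0

# The mbuf_jumbo_9k line from the thread:
name, z = parse_vmstat_z(
    "mbuf_jumbo_9k: 9216, 604528, 7230, 2002, 11764806459, 108298123, 0")
print(name, f"{fail_ratio(z):.2%}")  # roughly 1% of 9K requests failed
```

Even a failure rate around one percent matters here: each failed receive allocation can surface as an input error on the interface, which would line up with the correlation Mark observed between input errors and iSCSI error messages.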