From: Willem Jan Withagen <wjw@digiware.nl>
Organization: Digiware
Date: Fri, 26 Feb 2010 10:34:41 +0100
To: Jack Vogel
Cc: stable@freebsd.org
Subject: Re: em0 freezes on ZFS server
Message-ID: <4B8795B1.4020006@digiware.nl>
List-Id: Production branch of FreeBSD source code (freebsd-stable@freebsd.org)

On 25-2-2010 23:59, Jack Vogel wrote:
> The failure to "setup receive structures" means it did not have
> sufficient mbufs to set up the RX ring and buffer structs. Not sure
> why this results in a lockup, but try and increase
> kern.ipc.nmbclusters.
>
> Let me know what happens,

I've doubled the value: 25600 => 51200.

This is what netstat -m told me when it refused to revive em0:

24980/2087/27067 mbufs in use (current/cache/total)
24530/1070/25600/25600 mbuf clusters in use (current/cache/total/max)
22217/741 mbuf+clusters out of packet secondary zone in use (current/cache)
0/35/35/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
55305K/2801K/58106K bytes allocated to network (current/cache/total)
0/5970/2983 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
1011716 requests for I/O initiated by sendfile
0 calls to protocol drain routines

Note that the mbuf clusters were at their maximum (25600/25600), which
matches the "setup receive structures" failure.

Now, I've seen some discussion on the list suggesting that mbuf
exhaustion could also be a symptom rather than a cause: if the device
is down, the queue builds up rather rapidly in the mbufs.

Probably the reason this happened yesterday is that I started doing
major software builds (over ZFS/NFS/TCP/v3) against data stored on
this box.

--WjW
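[Editor's note: for readers wanting to reproduce the tuning step
discussed above, the following is a minimal sketch of the standard
FreeBSD procedure. The value 51200 mirrors the doubling described in
the mail; pick a value appropriate to your own workload.]

```shell
# Inspect the current mbuf cluster limit and usage.
sysctl kern.ipc.nmbclusters
netstat -m

# Raise the limit at runtime (51200 = the doubled value from this mail).
sysctl kern.ipc.nmbclusters=51200

# Make the change persistent across reboots.
echo 'kern.ipc.nmbclusters=51200' >> /etc/sysctl.conf
```

Denied requests in the `netstat -m` output (the "requests for mbufs
denied" line) are the usual sign that the limit is too low for the
traffic the box is handling.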