From: Dennis Glatting <freebsd@pki2.com>
To: freebsd-fs@freebsd.org
Subject: Re: ZFS hanging
Date: Fri, 13 Jul 2012 08:25:36 -0700
Message-ID: <1342193136.60708.16.camel@btw.pki2.com>
In-Reply-To: <20120713170632.065e650e@fabiankeil.de>
References: <1341864787.32803.43.camel@btw.pki2.com>
	<20120712151541.7f3a6886@fabiankeil.de>
	<20120713170632.065e650e@fabiankeil.de>

On Fri, 2012-07-13 at 17:06 +0200, Fabian Keil wrote:
> Lytochkin Boris wrote:
>
> > On Thu, Jul 12, 2012 at 5:15 PM, Fabian Keil wrote:
> > > fk@r500 ~ $zpool status
> > > load: 0.15 cmd: zpool 2698 [spa_namespace_lock] 543.23r 0.00u 0.12s 0% 2908k
> >
> > This sounds similar to http://www.freebsd.org/cgi/query-pr.cgi?pr=163770
> > Try playing with kern.maxvnodes.
>
> Thanks for the suggestion, but the system is my laptop and I already
> set kern.maxvnodes=400000, which I suspect is more than I'll ever need.
>
> Currently it uses less than a tenth of this, but I'll keep an eye on
> it the next time the issue occurs.
>
> I usually reach this deadlock after losing the vdev in a single-vdev pool.
> My suspicion is that the deadlock is caused by some kind of "failure to
> communicate" between ZFS and the various geom layers involved.
>
> I already know that losing vdevs with the pool configuration I use
> can cause http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/162010
> and http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/162036, and I
> suspect that the deadlock is just another symptom of the same issue.

What are the math and constraints behind kern.maxvnodes, and how would a
reasonable value be chosen?

On some of my systems (default):

iirc# sysctl -a | grep kern.maxvnodes
kern.maxvnodes: 1097048

bd3# sysctl -a | grep kern.maxvnodes
kern.maxvnodes: 587825

mc# sysctl -a | grep kern.maxvnodes
kern.maxvnodes: 2112911

btw# sysctl -a | grep kern.maxvnodes
kern.maxvnodes: 460985

> Fabian
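
For reference, vfs.numvnodes reports how many vnodes are currently
allocated, so comparing it against kern.maxvnodes shows how close a
system actually is to the ceiling. A minimal shell sketch using those
stock FreeBSD sysctls; the output values and the new limit of 600000
below are illustrative, not recommendations:

# Compare vnodes currently in use against the configured ceiling.
btw# sysctl vfs.numvnodes kern.maxvnodes
vfs.numvnodes: 36214
kern.maxvnodes: 460985

# Raise the ceiling at runtime if numvnodes keeps pressing against it.
btw# sysctl kern.maxvnodes=600000

# To persist across reboots, add the same setting to /etc/sysctl.conf:
# kern.maxvnodes=600000

As for the defaults differing so much between machines: the kernel
appears to derive the initial value at boot from maxproc and the
physical page count, capped by the kernel memory size (see
desiredvnodes in vfs_subr.c), so the default scales with installed RAM
rather than with expected filesystem load.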