Date: Fri, 13 Jul 2012 18:29:21 +0200
From: Fabian Keil <freebsd-listen@fabiankeil.de>
To: Dennis Glatting <freebsd@pki2.com>
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS hanging
Message-ID: <20120713182921.55f16f4b@fabiankeil.de>
In-Reply-To: <1342193136.60708.16.camel@btw.pki2.com>
References: <1341864787.32803.43.camel@btw.pki2.com> <20120712151541.7f3a6886@fabiankeil.de> <CAEJYa-RFuJKTOd3_Ykj3Z8KYPuYoQUwiwOr5i37C0FeZ2MUsvw@mail.gmail.com> <20120713170632.065e650e@fabiankeil.de> <1342193136.60708.16.camel@btw.pki2.com>
Dennis Glatting <freebsd@pki2.com> wrote:

> On Fri, 2012-07-13 at 17:06 +0200, Fabian Keil wrote:
> > Lytochkin Boris <lytboris@gmail.com> wrote:
> >
> > > On Thu, Jul 12, 2012 at 5:15 PM, Fabian Keil
> > > <freebsd-listen@fabiankeil.de> wrote:
> > > > fk@r500 ~ $zpool status
> > > > load: 0.15 cmd: zpool 2698 [spa_namespace_lock] 543.23r 0.00u 0.12s 0% 2908k
> > >
> > > This sounds similar to http://www.freebsd.org/cgi/query-pr.cgi?pr=163770
> > > Try playing with kern.maxvnodes.
> >
> > Thanks for the suggestion, but the system is my laptop and I already
> > set kern.maxvnodes=400000, which I suspect is more than I'll ever need.
> >
> > Currently it uses less than a tenth of this, but I'll keep an eye on
> > it the next time the issue occurs.
> >
> > I usually reach this deadlock after losing the vdev in a single-vdev pool.
> > My suspicion is that the deadlock is caused by some kind of "failure to
> > communicate" between ZFS and the various geom layers involved.
> >
> > I already know that losing vdevs with the pool configuration I use
> > can cause http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/162010
> > and http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/162036 and I
> > suspect that the deadlock is just another symptom of the same issue.

Just to be clear: I meant the spa_namespace_lock deadlock on my system,
not the one that started this thread.

> What is the math and constraints behind kern.maxvnodes and how would a
> reasonable value be chosen?

The kernel already chooses a reasonable value for you and usually there's
no reason to overwrite it. You can find the kernel's math at
http://fxr.watson.org/fxr/source/kern/vfs_subr.c#L284 (ff).
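For readers who don't want to chase the source link: in FreeBSD of roughly this vintage, the default is derived from kern.maxproc and the physical page count, additionally capped by kernel memory size. The sketch below reproduces only the first term of that calculation with made-up example values; the authoritative expression is the one in vfs_subr.c.

```shell
#!/bin/sh
# Rough sketch of the kern.maxvnodes default (first term only).
# Both numbers below are hypothetical examples, not values from
# any machine in this thread. On a real system they come from
# kern.maxproc and vm.stats.vm.v_page_count.
maxproc=6164
v_page_count=2000000

# desiredvnodes candidate: maxproc + physical pages / 4
# (the kernel then caps this based on available kernel memory)
candidate=$((maxproc + v_page_count / 4))
echo "desiredvnodes candidate: $candidate"
```

With these example inputs the candidate is 506164, illustrating why machines with lots of RAM end up with maxvnodes values around a million by default.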
> On some of my systems (default):
>
> iirc# sysctl -a | grep kern.maxvnodes
> kern.maxvnodes: 1097048

You can compare this with vfs.numvnodes and vfs.freevnodes if you like
(which of course depend on the load), but so far I don't remember seeing
any indication that your problem has anything to do with maxvnodes
(or block sizes for that matter).

Fabian
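To make the suggested comparison concrete: on FreeBSD the three counters are read with sysctl(8), e.g. `sysctl -n kern.maxvnodes vfs.numvnodes vfs.freevnodes`. The snippet below uses hypothetical stand-in values (only the 1097048 figure comes from this thread) to show the arithmetic one would do with the real readings.

```shell
#!/bin/sh
# Compare current vnode usage against the limit.
# maxvnodes is the default Dennis reported; numvnodes is a
# made-up stand-in for what `sysctl -n vfs.numvnodes` might return.
maxvnodes=1097048
numvnodes=83211

# Integer percentage of the limit currently in use.
pct=$((numvnodes * 100 / maxvnodes))
echo "vnodes in use: ${pct}% of kern.maxvnodes"
```

A system persistently near 100% here would be a reason to look at maxvnodes; a single-digit percentage, as in this example, supports Fabian's point that the limit is not the problem.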
