From: Fabian Keil
To: freebsd-fs@freebsd.org
Date: Thu, 4 Oct 2012 22:24:22 +0200
Subject: Re: zpool scrub on pool from geli devices offlines the pool?
Message-ID: <20121004222422.68d176ec@fabiankeil.de>
In-Reply-To: <5A5FE35F-7D68-4E83-A88D-3002B51F2E00@gmail.com>

Nikolay Denev wrote:

> I have a zfs pool from 24 disks encrypted with geli.
>
> I just did a zpool scrub tank, and that probably reopened all of the devices,
> but this caused geli "detach on last close" to kick in
> which resulted in offline pool from UNAVAILABLE devices.

This is a known issue:
http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/117158

The fact that the system didn't panic seems like an improvement,
although this might be the result of the different pool layout.

>   pool: tank
>  state: UNAVAIL
> status: One or more devices are faulted in response to IO failures.
> action: Make sure the affected devices are connected, then run 'zpool clear'.
>    see: http://illumos.org/msg/ZFS-8000-HC
>   scan: scrub in progress since Thu Oct 4 21:19:15 2012
>        1 scanned out of 8.29T at 1/s, (scan is slow, no estimated time)
>        0 repaired, 0.00% done
> config:
>
>        NAME    STATE     READ WRITE CKSUM
>        tank    UNAVAIL      0     0     0
[...]
>
> errors: 1 data errors, use '-v' for a list
>
> Dmesg shows:
>
> GEOM_ELI: Detached mfid1.eli on last close.
> …
> GEOM_ELI: Detached mfid24.eli on last close.
>
> I then did /etc/rc.d/geli restart and zpool clear tank, and it is back online,
> but shows permanent metadata errors…

I'd expect the "permanent" metadata errors to be gone after the
scrubbing is completed.

> Any ideas why this happened from a simple zpool scrub, and how it can be prevented?
> Just disable "detach on last close" for the geli devices?

At least that was Pawel's recommendation in 2007:
http://lists.freebsd.org/pipermail/freebsd-current/2007-October/078107.html

Fabian
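For what it's worth, the workaround can be expressed as an rc.conf setting; a sketch, assuming the stock /etc/rc.d/geli script on your release honors the geli_autodetach knob (see /etc/defaults/rc.conf to confirm):

```
# /etc/rc.conf (configuration fragment, not a script)
# Stop rc.d/geli from passing -d ("detach on last close") to "geli attach",
# so reopening the providers (e.g. by a zpool scrub) can't auto-detach them.
geli_autodetach="NO"
```

After changing this, the flag only takes effect on the next attach, so the providers have to be re-attached (or the machine rebooted) for it to apply.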