From owner-freebsd-stable@FreeBSD.ORG Tue Feb 9 14:27:00 2010
Date: Tue, 9 Feb 2010 06:26:58 -0800
From: Jeremy Chadwick
To: freebsd-stable@freebsd.org
Message-ID: <20100209142658.GA38072@icarus.home.lan>
In-Reply-To: <20100209150606.ddba52dc.gerrit@pmp.uni-hannover.de>
User-Agent: Mutt/1.5.20 (2009-06-14)
Subject: Re: zpool vdev vs. glabel
List-Id: Production branch of FreeBSD source code

On Tue, Feb 09, 2010 at 03:06:06PM +0100, Gerrit Kühn wrote:
> Hi,
>
> I have created a raidz2 with disks I labeled with glabel before. Right
> after creation this pool looked fine, using devices label/tank[1-6].
>
> I did some tests with replacing/swapping disks and so on.
> After doing a
>
>   zpool offline tank label/tank6
>   <remove disk>
>   camcontrol rescan all
>   <insert disk>
>   camcontrol rescan all
>   zpool online tank label/tank6
>
> I got the disk back, but under the da device name rather than the
> requested label:
>
>   pool: tank
>  state: ONLINE
>  scrub: resilver completed after 0h0m with 0 errors on Tue Feb  9 14:56:37 2010
> config:
>
>         NAME             STATE     READ WRITE CKSUM
>         tank             ONLINE       0     0     0
>           raidz2         ONLINE       0     0     0
>             label/tank1  ONLINE       0     0     0  8.50K resilvered
>             label/tank2  ONLINE       0     0     0  7.50K resilvered
>             label/tank3  ONLINE       0     0     0  8.50K resilvered
>             label/tank4  ONLINE       0     0     0  7.50K resilvered
>             label/tank5  ONLINE       0     0     0  9K resilvered
>             da6          ONLINE       0     0     0  13.5K resilvered
>
> errors: No known data errors
>
> Why does this happen? Is there any way to get ZFS to use the label again?
> After the device is in use, the label in /dev/label disappears. When
> taking the device offline again, the label is there, but cannot be used:
>
> pigpen# zpool offline tank da6
> pigpen# zpool status
>   pool: system
>  state: ONLINE
> status: One or more devices has experienced an unrecoverable error.  An
>         attempt was made to correct the error.  Applications are unaffected.
> action: Determine if the device needs to be replaced, and clear the
>         errors using 'zpool clear' or replace the device with 'zpool replace'.
>    see: http://www.sun.com/msg/ZFS-8000-9P
>  scrub: resilver completed after 0h0m with 0 errors on Tue Feb  9 14:49:14 2010
> config:
>
>         NAME               STATE     READ WRITE CKSUM
>         system             ONLINE       0     0     0
>           mirror           ONLINE       0     0     0
>             label/system1  ONLINE       3   617     0  126K resilvered
>             label/system2  ONLINE       0     0     0  41K resilvered
>
> errors: No known data errors
>
>   pool: tank
>  state: DEGRADED
> status: One or more devices has experienced an unrecoverable error.  An
>         attempt was made to correct the error.  Applications are unaffected.
> action: Determine if the device needs to be replaced, and clear the
>         errors using 'zpool clear' or replace the device with 'zpool replace'.
>    see: http://www.sun.com/msg/ZFS-8000-9P
>  scrub: resilver completed after 0h0m with 0 errors on Tue Feb  9 14:56:37 2010
> config:
>
>         NAME             STATE     READ WRITE CKSUM
>         tank             DEGRADED     0     0     0
>           raidz2         DEGRADED     0     0     0
>             label/tank1  ONLINE       0     0     0  8.50K resilvered
>             label/tank2  ONLINE       0     0     0  7.50K resilvered
>             label/tank3  ONLINE       0     0     0  8.50K resilvered
>             label/tank4  ONLINE       0     0     0  7.50K resilvered
>             label/tank5  ONLINE       0     0     0  9K resilvered
>             da6          OFFLINE      0    38     0  13.5K resilvered
>
> errors: No known data errors
>
> pigpen# ll /dev/label/
> total 0
> crw-r-----  1 root  operator    0, 104 Feb  9 14:04 lisacrypt1
> crw-r-----  1 root  operator    0, 112 Feb  9 14:04 lisacrypt2
> crw-r-----  1 root  operator    0, 113 Feb  9 14:04 lisacrypt3
> crw-r-----  1 root  operator    0, 134 Feb  9 14:48 system1
> crw-r-----  1 root  operator    0, 115 Feb  9 14:04 system2
> crw-r-----  1 root  operator    0, 116 Feb  9 14:04 tank1
> crw-r-----  1 root  operator    0, 117 Feb  9 14:04 tank2
> crw-r-----  1 root  operator    0, 118 Feb  9 14:04 tank3
> crw-r-----  1 root  operator    0, 101 Feb  9 14:04 tank4
> crw-r-----  1 root  operator    0, 102 Feb  9 14:04 tank5
> crw-r-----  1 root  operator    0, 103 Feb  9 15:02 tank6
>
> pigpen# zpool online tank label/tank6
> cannot online label/tank6: no such device in pool
>
> In a different thread I found the hint to use zpool replace to get back
> to using the labels, but this does not seem possible, either:
>
> pigpen# zpool replace tank label/tank6
> invalid vdev specification
> use '-f' to override the following errors:
> /dev/label/tank6 is part of active pool 'tank'
>
> pigpen# zpool replace -f tank label/tank6
> invalid vdev specification
> the following errors must be manually repaired:
> /dev/label/tank6 is part of active pool 'tank'
>
> pigpen# zpool replace -f tank da6 label/tank6
> invalid vdev specification
> the following errors must be manually repaired:
> /dev/label/tank6 is part of active pool 'tank'
>
> I'm running out of ideas here...

Would "zpool export" and "zpool import" be necessary in this case?
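If so, the sequence would look roughly like the following. This is an
untested sketch: the pool and label names are taken from the output
above, and my understanding (an assumption, not something verified here)
is that the label node vanishes because ZFS holds the raw daX provider
open, preventing GEOM from tasting the glabel metadata; exporting the
pool releases the devices so the /dev/label/* nodes reappear.

```shell
# Release all member devices so glabel can recreate /dev/label/* nodes:
zpool export tank

# Confirm the label nodes are back (should list tank1 .. tank6):
ls /dev/label/

# Re-import, restricting the device search to /dev/label so the vdevs
# are recorded under their label names instead of daX (see zpool(8) -d):
zpool import -d /dev/label tank

# Verify that all vdevs now show up as label/tankN:
zpool status tank
```

Obviously this means taking the pool offline briefly, so it is not
something you would want to do on a pool serving live data.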
Also, I'm a little confused as to the use of glabel in this case. Under
what conditions do your disk device numbers (the X in daX) actually
change? Are you yanking multiple disks out of the system at the same
time and then shoving them back into different drive bays? Are you
regularly switching between storage subsystem drivers (ahci(4) vs.
ataahci(4), for example)? I've yet to be convinced that glabel is worth
bothering with unless the system falls into one of the above situations
(which are worthy of strangulation anyway ;-) ).

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |