From: Marius Nünnerich <marius@nuenneri.ch>
To: Ludwig Pummer
Cc: freebsd-fs@freebsd.org
Date: Sat, 1 Aug 2009 10:50:16 +0200
Subject: Re: ZFS raidz1 pool unavailable from losing 1 device

On Sat, Aug 1, 2009 at 03:55, Ludwig Pummer wrote:
> Ludwig Pummer wrote:
>>
>> Simun Mikecin wrote:
>>>
>>> Ludwig Pummer wrote:
>>>
>>>> My system is 7.2-STABLE Jul 27, amd64, 4GB memory, just upgraded
>>>> from 6.4-STABLE from last year. I just set up a ZFS raidz volume to
>>>> replace a graid5 volume I had been using. I had it successfully set
>>>> up using partitions across 4 disks, ad{6,8,10,12}s1e. Then I wanted
>>>> to expand the raidz volume by merging in the space from the adjacent
>>>> disk partition. I thought I could just fail out the partition device
>>>> in ZFS, edit the bsdlabel, and re-add the larger partition; ZFS
>>>> would resilver, and I would repeat until done. That's when I found
>>>> out that ZFS doesn't let you fail out a device in a raidz volume. No
>>>> big deal, I thought, I'll just go to single user mode and mess with
>>>> the partition when ZFS isn't looking. When it comes back up it
>>>> should notice that one of the devices is gone, I can do a 'zpool
>>>> replace' and continue my plan.
>>>>
>>>> Well, after rebooting to single user mode and combining partitions
>>>> ad12s1d and ad12s1e (I removed the d partition), "zfs volinit", then
>>>> "zpool status" just hung (Ctrl-C didn't kill it, so I rebooted). I
>>>> thought this was a bit odd, so I figured perhaps ZFS was confused by
>>>> the ZFS metadata left on ad12s1e, and I blanked it out with "dd".
>>>> That didn't help. I changed the name of the partition to ad12s1d
>>>> thinking perhaps that would help. After that, "zfs volinit; zfs
>>>> mount -a; zpool status" showed my raidz pool UNAVAIL with the
>>>> message "insufficient replicas", ad{6,8,10}s1e ONLINE, and ad12s1e
>>>> UNAVAIL "cannot open", and a more detailed message pointing me to
>>>> http://www.sun.com/msg/ZFS-8000-3C. I tried doing a "zpool replace
>>>> storage ad12s1e ad12s1d" but it refused, saying my zpool ("storage")
>>>> was unavailable. Ditto for pretty much every zpool command I tried.
>>>> "zpool clear" gave me a "permission denied" error.
>>>
>>> Was your pool imported while you were repartitioning in single user
>>> mode?
>>>
>>
>> Yes, I guess you could say it was. ZFS wasn't loaded while I was doing
>> the repartitioning, though.
>>
>> --Ludwig
>
> Well, I figured out my problem. I didn't actually have a raidz1 volume.
> I missed the magic word "raidz" when I performed the "zpool create", so
> I created a plain striped pool (a "JBOD") with no redundancy. Removing
> one disk legitimately destroyed my zpool :(
>
> --Ludwig

That's bad, but it doesn't explain why the disk names changed. I guess
there is a race between tasting the original ad* providers and the
one-sector-smaller label/foo providers. May I suggest that you, and
other people reading this, use GPT labels in the future: they are
definitely there _after_ GPT has been tasted. Sadly they are only
available in 8-CURRENT right now.
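
For anyone who wants to try that on 8-CURRENT, here is a rough sketch
using gpart(8); the disk and label names (ad6, disk0, etc.) are only
examples, and this assumes blank disks since it replaces any existing
partitioning:

    # put a GPT on the disk and add one labeled ZFS partition
    gpart create -s gpt ad6
    gpart add -t freebsd-zfs -l disk0 ad6

    # the label shows up as /dev/gpt/disk0, stable across renumbering
    zpool create storage raidz gpt/disk0 gpt/disk1 gpt/disk2 gpt/disk3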
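
While here, for the archives: the missing keyword is easy to miss on
the command line. Compare (pool and device names as in this thread):

    # no "raidz" keyword: a plain stripe, losing any one disk kills the pool
    zpool create storage ad6s1e ad8s1e ad10s1e ad12s1e

    # with the keyword: single-parity raidz, survives the loss of one disk
    zpool create storage raidz ad6s1e ad8s1e ad10s1e ad12s1e

"zpool status storage" makes the difference visible: with raidz the
disks are listed under a raidz1 entry, without it they sit directly
under the pool.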
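
And once the pool really is a raidz1, the original grow-by-replacing
plan does work, one disk at a time. A sketch with the partition names
from the thread; note that on the ZFS versions of this era the extra
space should only show up after the last disk is replaced and the pool
is exported and imported again:

    zpool replace storage ad12s1e ad12s1d  # swap in the bigger partition
    zpool status storage                   # wait for the resilver to finish
    # ...repeat for the other three disks, then:
    zpool export storage && zpool import storage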