From: Ludwig Pummer <ludwigp@chip-web.com>
Date: Thu, 30 Jul 2009 00:25:55 -0700
To: Simun Mikecin
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS raidz1 pool unavailable from losing 1 device

Simun Mikecin wrote:
> Ludwig Pummer wrote:
>
>> My system is 7.2-STABLE Jul 27, amd64, 4GB memory, just upgraded from
>> 6.4-STABLE from last year. I just set up a ZFS raidz volume to replace a
>> graid5 volume I had been using. I had it successfully set up using
>> partitions across 4 disks, ad{6,8,10,12}s1e. Then I wanted to expand the
>> raidz volume by merging the space from the adjacent disk partition.
>> I thought I could just fail out the partition device in ZFS, edit the
>> bsdlabel, re-add the larger partition, let ZFS resilver, and repeat
>> until done. That's when I found out that ZFS doesn't let you fail out a
>> device in a raidz volume. No big deal, I thought, I'll just go to single
>> user mode and mess with the partition when ZFS isn't looking. When it
>> comes back up it should notice that one of the devices is gone, and I
>> can do a 'zpool replace' and continue my plan.
>>
>> Well, after rebooting to single user mode, combining partitions ad12s1d
>> and ad12s1e (removed the d partition), "zfs volinit", then "zpool
>> status" just hung (Ctrl-C didn't kill it, so I rebooted). I thought this
>> was a bit odd, so I thought perhaps ZFS was confused by the ZFS metadata
>> left on ad12s1e, and I blanked it out with "dd". That didn't help. I
>> changed the name of the partition to ad12s1d, thinking perhaps that
>> would help. After that, "zfs volinit; zfs mount -a; zpool status" showed
>> my raidz pool UNAVAIL with the message "insufficient replicas",
>> ad{6,8,10}s1e ONLINE, and ad12s1e UNAVAIL "cannot open", and a more
>> detailed message pointing me to http://www.sun.com/msg/ZFS-8000-3C. I
>> tried doing a "zpool replace storage ad12s1e ad12s1d" but it refused,
>> saying my zpool ("storage") was unavailable. Ditto for pretty much every
>> zpool command I tried. "zpool clear" gave me a "permission denied"
>> error.
>
> Was your pool imported while you were repartitioning in single user mode?

Yes, I guess you could say it was. ZFS wasn't loaded while I was doing the
repartitioning, though.

--Ludwig
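P.S. For the archives, the one-member-at-a-time grow procedure I was
attempting would look roughly like the sketch below. This is a dry run
only (commands are echoed, not executed); the pool name "storage" and the
ad{6,8,10,12}s1e device names are from my setup, and raidz1 needs
'zpool offline' followed by 'zpool replace' rather than any "fail"
operation:

```shell
# Dry-run sketch of growing a 4-disk raidz1 pool one member at a time.
# Nothing here touches disks: each step is echoed so the sequence can be
# reviewed before running it for real.
POOL=storage
for disk in ad6 ad8 ad10 ad12; do
    part="${disk}s1e"
    echo "zpool offline ${POOL} ${part}"   # degrade the pool: take one member offline
    echo "bsdlabel -e ${disk}s1"           # grow the 'e' partition in the editor
    echo "zpool replace ${POOL} ${part}"   # resilver onto the now-larger partition
    echo "zpool status ${POOL}"            # wait for the resilver to finish first
done
```

Only after all four members have been replaced with larger partitions
would the extra capacity become usable.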