From owner-freebsd-stable@FreeBSD.ORG Sun Jul 25 22:08:58 2010
Message-ID: <4C4CB5EE.4030107@langille.org>
Date: Sun, 25 Jul 2010 18:08:46 -0400
From: Dan Langille <dan@langille.org>
Organization: The FreeBSD Diary
To: stable@freebsd.org
In-Reply-To: <4C4C7B4A.7010003@langille.org>
Subject: Re: zpool destroy causes panic
List-Id: Production branch of FreeBSD source code

On 7/25/2010 1:58 PM, Dan Langille wrote:
> I'm trying to destroy a ZFS array which I recently created. It contains
> nothing of value.
>
> # zpool status
>   pool: storage
>  state: ONLINE
> status: One or more devices could not be used because the label is missing or
>         invalid.  Sufficient replicas exist for the pool to continue
>         functioning in a degraded state.
> action: Replace the device using 'zpool replace'.
>    see: http://www.sun.com/msg/ZFS-8000-4J
>  scrub: none requested
> config:
>
>         NAME                      STATE     READ WRITE CKSUM
>         storage                   ONLINE       0     0     0
>           raidz2                  ONLINE       0     0     0
>             gpt/disk01            ONLINE       0     0     0
>             gpt/disk02            ONLINE       0     0     0
>             gpt/disk03            ONLINE       0     0     0
>             gpt/disk04            ONLINE       0     0     0
>             gpt/disk05            ONLINE       0     0     0
>             /tmp/sparsefile1.img  UNAVAIL      0     0     0  corrupted data
>             /tmp/sparsefile2.img  UNAVAIL      0     0     0  corrupted data
>
> errors: No known data errors
>
> Why sparse files? See this post:
>
> http://docs.freebsd.org/cgi/getmsg.cgi?fetch=1007077+0+archive/2010/freebsd-stable/20100725.freebsd-stable
>
> The two tmp files were created via:
>
>   dd if=/dev/zero of=/tmp/sparsefile1.img bs=1 count=0 oseek=1862g
>   dd if=/dev/zero of=/tmp/sparsefile2.img bs=1 count=0 oseek=1862g
>
> And the array was created with:
>
>   zpool create -f storage raidz2 gpt/disk01 gpt/disk02 gpt/disk03 \
>     gpt/disk04 gpt/disk05 /tmp/sparsefile1.img /tmp/sparsefile2.img
>
> The -f flag was required to avoid this message:
>
>   invalid vdev specification
>   use '-f' to override the following errors:
>   mismatched replication level: raidz contains both files and devices
>
> I tried to offline one of the sparse files:
>
>   zpool offline storage /tmp/sparsefile2.img
>
> That caused a panic: http://www.langille.org/tmp/zpool-offline-panic.jpg
>
> After rebooting, I rm'd both /tmp/sparsefile1.img and
> /tmp/sparsefile2.img, forgetting that they were still in the zpool. Now
> I am unable to destroy the pool: the system panics. I disabled ZFS via
> /etc/rc.conf, rebooted, recreated the two sparse files, then did a
> forcestart of ZFS.
> Then I saw:
>
> # zpool status
>   pool: storage
>  state: ONLINE
> status: One or more devices could not be used because the label is missing or
>         invalid.  Sufficient replicas exist for the pool to continue
>         functioning in a degraded state.
> action: Replace the device using 'zpool replace'.
>    see: http://www.sun.com/msg/ZFS-8000-4J
>  scrub: none requested
> config:
>
>         NAME                      STATE     READ WRITE CKSUM
>         storage                   ONLINE       0     0     0
>           raidz2                  ONLINE       0     0     0
>             gpt/disk01            ONLINE       0     0     0
>             gpt/disk02            ONLINE       0     0     0
>             gpt/disk03            ONLINE       0     0     0
>             gpt/disk04            ONLINE       0     0     0
>             gpt/disk05            ONLINE       0     0     0
>             /tmp/sparsefile1.img  UNAVAIL      0     0     0  corrupted data
>             /tmp/sparsefile2.img  UNAVAIL      0     0     0  corrupted data
>
> errors: No known data errors
>
> Another attempt to destroy the array caused a panic.
>
> Suggestions as to how to remove this array and get started again?

I fixed this by:

  * rebooting with zfs_enable="NO" in /etc/rc.conf
  * rm /boot/zfs/zpool.cache
  * wiping the first and last 16KB of each partition involved in the array

Now I'm trying mdconfig instead of sparse files. Making progress, but
not all the way there yet. :)

-- 
Dan Langille - http://langille.org/
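For anyone following the thread: the label-wipe step can be sketched roughly as below. This is a hedged illustration, not the exact commands used. `wipe_zfs_labels` is a made-up helper name, and on the real system the argument would be a partition such as /dev/gpt/disk01 rather than a plain file. The reason both ends need zeroing is that ZFS keeps four vdev labels per device: two near the start and two near the end.

```shell
# Sketch only: zero the first and last 16 KB of a disk, partition, or
# file-backed vdev to destroy its ZFS labels. "wipe_zfs_labels" is a
# hypothetical helper; the target can be a regular file for testing,
# or e.g. /dev/gpt/disk01 on the real box.
wipe_zfs_labels() {
    disk=$1
    # Size in bytes: try GNU stat first, fall back to BSD stat.
    size=$(stat -c %s "$disk" 2>/dev/null || stat -f %z "$disk")
    # Zero the first 16 KB (conv=notrunc leaves the rest untouched).
    dd if=/dev/zero of="$disk" bs=1k count=16 conv=notrunc 2>/dev/null
    # Zero the last 16 KB by seeking to 16 KB before the end.
    dd if=/dev/zero of="$disk" bs=1k count=16 conv=notrunc \
       seek=$(( size / 1024 - 16 )) 2>/dev/null
}
```

On later ZFS versions the same effect is available as `zpool labelclear`, which is the safer choice where it exists.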