From: Beeblebrox <zaphod@berentweb.com>
Date: Wed, 17 Apr 2013 22:15:17 -0700 (PDT)
To: freebsd-fs@freebsd.org
Subject: [ZFS] recover destroyed zpool with ZDB
Message-ID: <1366262117117-5804714.post@n5.nabble.com>
References: <1366221907838-5804517.post@n5.nabble.com> <1366226180639-5804603.post@n5.nabble.com>

Thanks, but that document does not appear very relevant to my situation. Also, the issue is not as straightforward as it seems. The DEFAULTED status of the zpool was a false positive, because:

A- The "present pool" did not accept any zpool commands and always gave a message like "no such pool or dataset ... recover the pool from a backup source."
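For reference, what I have been attempting boils down to roughly the sketch below (the GUID is the pool_guid from the intact labels, and the flag combination is my best guess at a cautious recovery attempt, not a known-good recipe; it is guarded so it does nothing on a box without the ZFS userland, and every step tolerates failure):

```shell
# Rough sketch of the recovery attempts (guarded so it is a harmless
# no-op where the ZFS tools are absent; each step tolerates failure,
# since the pool state obviously differs between machines).
if command -v zpool >/dev/null 2>&1; then
    # List pools zpool can discover on attached devices; -D also shows
    # pools marked destroyed whose labels are still readable.
    zpool import -D || true

    # Attempt a read-only import by pool GUID (the guid from the intact
    # labels in my case): -f overrides the "last accessed by another
    # system" check, -o readonly=on avoids writing to the suspect vdev,
    # and -F asks for a rewind to an earlier txg if the newest one is
    # damaged.
    zpool import -D -f -o readonly=on -F 12018916494219117471 || true
else
    echo "zpool not available; nothing to do"
fi
```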
B- The more relevant on-disk metadata showed, and still shows, this:

# zdb -l /dev/ada0p2
=> all 4 labels intact, with:
    pool_guid: 12018916494219117471
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 17860002997423999070

while the pool showing up in "zpool list" was (and is) clearly in a worse state than the one above:

# zdb -l /dev/ada0
=> only label 2 intact, with:
    pool_guid: 16018525702691588432

In my opinion, this problem is closer to a "Resolving a Missing Device" problem than to data corruption. Unfortunately, the missing-device repair documentation focuses on mirrored setups, and there is no decent document on a missing device in a single-HDD pool.

-----
10-Current-amd64 - using ccache - portstree merged with marcuscom.gnome3 & xorg.devel

--
View this message in context: http://freebsd.1045724.n5.nabble.com/ZFS-recover-destroyed-zpool-with-ZDB-tp5804517p5804714.html
Sent from the freebsd-fs mailing list archive at Nabble.com.