From: Adam Coates <adam@bullseye.tv>
Date: Mon, 13 Feb 2012 10:07:46 -0500
To: freebsd-questions@freebsd.org
Subject: vdevs in zpool separated, unable to import

I have a problem that could either be easily solved or could potentially have me royally screwed. I had a FreeBSD 8.0 system crash on me, and I lost some binaries, including the ZFS tools. I tried fixing it with Fixit but had no luck, so I rebuilt world and kernel on a fresh hard drive. The old system had a raidz zpool containing da0 and da1.
Because of the crash I never got to export this pool, and when I try to import it now I get this:

# zpool import tank
cannot import 'tank': one or more devices is currently unavailable

If I take a look at the list I only see this:

# zpool import
  pool: tank
    id: 4433502968625883981
 state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

        tank        UNAVAIL  insufficient replicas
          da1       ONLINE

If I list destroyed pools:

# zpool import -D
  pool: tank
    id: 12367720188787195607
 state: ONLINE (DESTROYED)
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          da0       ONLINE

TADA! There's the missing drive. So what happened? If I debug each drive...

Bad drive (all four labels are identical, so only LABEL 0 is shown):

# zdb -l /dev/da0
--------------------------------------------
LABEL 0
--------------------------------------------
    version=13
    name='tank'
    state=2
    txg=50
    pool_guid=12367720188787195607
    hostid=2180312168
    hostname='proj.bullseye.tv'
    top_guid=6830294387039432583
    guid=6830294387039432583
    vdev_tree
        type='disk'
        id=0
        guid=6830294387039432583
        path='/dev/da0'
        whole_disk=0
        metaslab_array=23
        metaslab_shift=36
        ashift=9
        asize=6998387326976
        is_log=0

And the good drive (again, LABELs 1-3 match LABEL 0):

# zdb -l /dev/da1
--------------------------------------------
LABEL 0
--------------------------------------------
    version=13
    name='tank'
    state=0
    txg=4
    pool_guid=4433502968625883981
    hostid=2180312168
    hostname='zproj.bullseye.tv'
    top_guid=11718615808151907516
    guid=11718615808151907516
    vdev_tree
        type='disk'
        id=1
        guid=11718615808151907516
        path='/dev/da1'
        whole_disk=0
        metaslab_array=23
        metaslab_shift=36
        ashift=9
        asize=7001602260992
        is_log=0

One thing that stands out to me is that they have different hostnames: da0 has the box's current hostname "proj", while da1 has the previous box's hostname "zproj". Could this be the issue? If so, how do I change it in this state?

But wait, a look at dmesg reveals:

da0: 6674186MB (13668734464 512 byte sectors: 255H 63S/T 850839C)
GEOM: da0: corrupt or invalid GPT detected.
GEOM: da0: GPT rejected -- may not be recoverable.

Could the drive just be completely effed? If so, then I think I am too. Please help!
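PS: besides the hostnames, the pool_guid fields in the two label dumps don't match either, i.e. each disk now claims membership in a different pool, which matches what `zpool import` shows. The snippet below just re-parses the values quoted above to put them side by side; it's a sketch, not something run on the box:

```shell
# pool_guid lines copied verbatim from the zdb -l dumps above
da0_label="pool_guid=12367720188787195607"
da1_label="pool_guid=4433502968625883981"

# strip the key, keep the guid value
da0_guid=$(printf '%s\n' "$da0_label" | sed -n 's/^pool_guid=//p')
da1_guid=$(printf '%s\n' "$da1_label" | sed -n 's/^pool_guid=//p')

# the two disks disagree about which pool they belong to
if [ "$da0_guid" != "$da1_guid" ]; then
    echo "da0 and da1 carry different pool_guids: $da0_guid vs $da1_guid"
fi
```

So the question may really be how da0 ended up in a pool of its own.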