From owner-freebsd-fs@FreeBSD.ORG Fri Feb 6 12:24:31 2015
Date: Fri, 6 Feb 2015 13:25:38 +0100
From: Robert David
To: Michelle Sullivan
Cc: "freebsd-fs@freebsd.org", d@delphij.net
Subject: Re: ZFS pool faulted (corrupt metadata) but the disk data appears ok...
Message-ID: <20150206132538.24993e60@linsystem.net>
In-Reply-To: <54D4A3A0.2040408@sorbs.net>

I suggest booting into a 10.1 livecd. Then check whether the partitions
were created prior to zfs:

$ gpart show mfid0

And then try to import the pool as suggested.
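
For example, something like this from the livecd (only a sketch; I am
assuming the pool is still named 'storage' as in your earlier mails, and
using an altroot plus read-only import so nothing gets written to the
pool):

$ gpart show mfid0
$ zpool import
$ zpool import -o readonly=on -f -R /mnt storage

'zpool import' with no arguments just lists the pools the livecd can
see; the read-only import then lets you check whether the data is
reachable without risking any further damage.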
Robert.

On Fri, 06 Feb 2015 12:21:04 +0100 Michelle Sullivan wrote:

> Xin Li wrote:
> > -----BEGIN PGP SIGNED MESSAGE-----
> > Hash: SHA512
> >
> > On 2/5/15 18:20, Michelle Sullivan wrote:
> >
> >> Xin Li wrote: On 02/05/15 17:36, Michelle Sullivan wrote:
> >>
> >>>>>> This suggests the pool was connected to a different system,
> >>>>>> is that the case?
> >>>>>>
> >>>>> No.
> >>>>>
> >> Ok, that's good. Actually, if you have two heads that write to
> >> the same pool at the same time, it can easily enter an
> >> unrecoverable state.
> >>
> >>>>>> It's hard to tell right now, and we shall try all possible
> >>>>>> remedies, but be prepared for the worst.
> >>>>>>
> >>>>> I am :(
> >>>>>
> >> The next thing I would try is to:
> >>
> >> 1. move /boot/zfs/zpool.cache to somewhere else;
> >>
> >>> There isn't one. However, 'cat'ing the inode I can see there was
> >>> one...
> >>>
> >>> <83>^LR^@^L^@^D^A.^@^@^@<80>^LR^@^A^D^B..^@^@<89>^LR^@^X^@^H^Ozpool.cache.tmp^@<89>^LR^@^A^H^Kzpool.cache^@
> >>> [remainder of the directory block is ^@ (NUL) padding]
> >>
> >> 2. zpool import -f -n -F -X storage and see if the system would
> >> give you a proposal.
> >>
> >>> This crashes (without -n) the machine out of memory... there's
> >>> 32G of RAM. /boot/loader.conf contains:
> >>>
> >>> vfs.zfs.prefetch_disable=1
> >>> #vfs.zfs.arc_min="8G"
> >>> #vfs.zfs.arc_max="16G"
> >>> #vm.kmem_size_max="8"
> >>> #vm.kmem_size="6G"
> >>> vfs.zfs.txg.timeout="5"
> >>> kern.maxvnodes=250000
> >>> vfs.zfs.write_limit_override=1073741824
> >>> vboxdrv_load="YES"
> >>
> > Which release is this? write_limit_override was removed quite a
> > while ago.
>
> FreeBSD colossus 9.2-RELEASE-p15 FreeBSD 9.2-RELEASE-p15 #0: Mon Nov 3
> 20:31:29 UTC 2014
> root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64
>
> > I'd recommend using a fresh -CURRENT snapshot if possible (possibly
> > with a -NODEBUG kernel).
>
> I'm sorta afraid to try and upgrade it at this point.
>
> Michelle
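
PS: regarding the out-of-memory crash during the -X import: one cheap
thing to try from the livecd (values are only a guess and would need
tuning for your 32G box) is capping the ARC in /boot/loader.conf before
retrying, so the import's metadata walk has more kernel memory to work
with:

vfs.zfs.arc_max="8G"
vfs.zfs.prefetch_disable=1

No guarantee it is enough, given that the import already exhausted 32G,
but it costs nothing to test before trying anything more drastic.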