Date:      Fri, 6 Feb 2015 13:25:38 +0100
From:      Robert David <robert@linsystem.net>
To:        Michelle Sullivan <michelle@sorbs.net>
Cc:        "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>, d@delphij.net
Subject:   Re: ZFS pool faulted (corrupt metadata) but the disk data appears ok...
Message-ID:  <20150206132538.24993e60@linsystem.net>
In-Reply-To: <54D4A3A0.2040408@sorbs.net>
References:  <54D3E9F6.20702@sorbs.net> <54D41608.50306@delphij.net> <54D41AAA.6070303@sorbs.net> <54D41C52.1020003@delphij.net> <54D424F0.9080301@sorbs.net> <54D457F0.8080502@delphij.net> <54D4A3A0.2040408@sorbs.net>

I suggest booting a 10.1 live CD.

Then check whether the partitions were created before the ZFS pool:

$ gpart show mfid0

And then try to import the pool as suggested.
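To make the order of operations concrete, here is a dry-run sketch of the sequence (it only prints the commands rather than touching any disks; "storage" and "mfid0" are the pool and device names from this thread):

```shell
#!/bin/sh
# Dry-run sketch: print the recovery commands in the order they
# should be tried from the live CD. Nothing here writes to disk.

plan_recovery() {
    pool="$1"; disk="$2"
    printf '%s\n' \
        "gpart show ${disk}" \
        "zpool import -f -n -F ${pool}" \
        "zpool import -f -F -o readonly=on ${pool}"
}

# 1. inspect the partition table (read-only, always safe)
# 2. dry-run rewind import: -n shows what -F would do without writing
# 3. only if the dry run looks sane, do the rewind import read-only
plan_recovery storage mfid0
```

The read-only import in the last step is a precaution, not something the thread itself spells out: it lets you copy data off before attempting any read-write recovery.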

Robert.

On Fri, 06 Feb 2015 12:21:04 +0100
Michelle Sullivan <michelle@sorbs.net> wrote:

> Xin Li wrote:
> > -----BEGIN PGP SIGNED MESSAGE-----
> > Hash: SHA512
> >
> >
> >
> > On 2/5/15 18:20, Michelle Sullivan wrote:
> >   
> >> Xin Li wrote: On 02/05/15 17:36, Michelle Sullivan wrote:
> >>
> >>     
> >>>>>> This suggests the pool was connected to a different system,
> >>>>>> is that the case?
> >>>>>>
> >>>>>>
> >>>>>>             
> >>>>> No.
> >>>>>
> >>>>>           
> >> Ok, that's good.  Actually, if you have two heads that write to
> >> the same pool at the same time, it can easily enter an
> >> unrecoverable state.
> >>
> >>
> >>     
> >>>>>> It's hard to tell right now, and we shall try all possible 
> >>>>>> remedies but be prepared for the worst.
> >>>>>>
> >>>>>>             
> >>>>> I am :(
> >>>>>
> >>>>>           
> >> The next thing I would try is to:
> >>
> >> 1. move /boot/zfs/zpool.cache to somewhere else;
> >>
> >>
> >>     
> >>> There isn't one.  However 'cat'ing the inode I can see there was
> >>> one...
> >>>       
> >>> <83>^LR^@^L^@^D^A.^@^@^@<80>^LR^@<F4>^A^D^B..^@^@<89>^LR^@^X^@^H^Ozpool.cache.tmp^@<89>^LR^@<D0>^A^H^Kzpool.cache^@ [trailing NUL padding elided]
> >>>
> >>>
> >>>       
> >   
> >> 2. zpool import -f -n -F -X storage and see if the system would
> >> give you a proposal.
> >>
> >>
> >>     
> >>> This (without -n) crashes the machine with out-of-memory...
> >>> there's 32G of RAM. /boot/loader.conf contains:
> >>>       
> >>> vfs.zfs.prefetch_disable=1
> >>> #vfs.zfs.arc_min="8G"
> >>> #vfs.zfs.arc_max="16G"
> >>> #vm.kmem_size_max="8"
> >>> #vm.kmem_size="6G"
> >>> vfs.zfs.txg.timeout="5"
> >>> kern.maxvnodes=250000
> >>> vfs.zfs.write_limit_override=1073741824
> >>> vboxdrv_load="YES"
> >>>       
> >
> > Which release is this?  write_limit_override was removed quite a
> > while ago.
> >   
> 
> FreeBSD colossus 9.2-RELEASE-p15 FreeBSD 9.2-RELEASE-p15 #0: Mon Nov  3
> 20:31:29 UTC 2014    
> root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64
> 
> 
> > I'd recommend using a fresh -CURRENT snapshot if possible (possibly
> > with -NODEBUG kernel).
> >   
> 
> I'm sorta afraid to try and upgrade it at this point.
> 
> Michelle
> 
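Since the rewind import (-X) ran a 32 GB machine out of memory, one commonly suggested mitigation is to cap ZFS and kernel memory in /boot/loader.conf before retrying. A sketch only; the values are illustrative guesses for this box, not something tested in the thread:

```
# /boot/loader.conf -- limit memory use before retrying zpool import -F -X
# Values are illustrative for a 32 GB machine; adjust to taste.
vfs.zfs.arc_max="8G"        # cap the ARC well below physical RAM
vm.kmem_size="16G"          # bound kernel memory so the import can't exhaust it
vfs.zfs.prefetch_disable=1  # already set in the quoted loader.conf; keep it
```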



