Date:      Thu, 05 Feb 2015 21:58:08 -0800
From:      Xin Li <delphij@delphij.net>
To:        Michelle Sullivan <michelle@sorbs.net>, d@delphij.net
Cc:        "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject:   Re: ZFS pool faulted (corrupt metadata) but the disk data appears ok...
Message-ID:  <54D457F0.8080502@delphij.net>
In-Reply-To: <54D424F0.9080301@sorbs.net>
References:  <54D3E9F6.20702@sorbs.net> <54D41608.50306@delphij.net> <54D41AAA.6070303@sorbs.net> <54D41C52.1020003@delphij.net> <54D424F0.9080301@sorbs.net>

On 2/5/15 18:20, Michelle Sullivan wrote:
> Xin Li wrote:
> On 02/05/15 17:36, Michelle Sullivan wrote:
> 
>>>>> This suggests the pool was connected to a different system,
>>>>> is that the case?
>>>>> 
>>>>> 
>>>> No.
>>>> 
> 
> Ok, that's good.  Actually, if you have two heads writing to
> the same pool at the same time, it can easily enter an
> unrecoverable state.
> 
> 
>>>>> It's hard to tell right now, and we shall try all possible 
>>>>> remedies but be prepared for the worst.
>>>>> 
>>>> I am :(
>>>> 
> 
> The next thing I would try is to:
> 
> 1. move /boot/zfs/zpool.cache to somewhere else;
> 
> 
>> There isn't one.  However 'cat'ing the inode I can see there was
>> one...
> 
>> <83>^LR^@^L^@^D^A.^@^@^@<80>^LR^@<F4>^A^D^B..^@^@<89>^LR^@^X^@^H^Ozpool.cache.tmp^@<89>^LR^@<D0>^A^H^Kzpool.cache^@
>> [remainder of the block is NUL (^@) padding]
> 2. zpool import -f -n -F -X storage and see if the system would
> give you a proposal.
> 
> 
>> This crashes (without -n) the machine out of memory.... there's
>> 32G of RAM. /boot/loader.conf contains:
> 
>> vfs.zfs.prefetch_disable=1
>> #vfs.zfs.arc_min="8G"
>> #vfs.zfs.arc_max="16G"
>> #vm.kmem_size_max="8"
>> #vm.kmem_size="6G"
>> vfs.zfs.txg.timeout="5"
>> kern.maxvnodes=250000
>> vfs.zfs.write_limit_override=1073741824
>> vboxdrv_load="YES"

Which release is this?  write_limit_override was removed quite a
while ago.
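
On the memory side, before retrying the import it may be worth
capping the ARC and dropping the removed tunable.  The values below
are only guesses for a 32G box, not known-good settings:

    # /boot/loader.conf -- untested tuning guesses for the rewind import
    vfs.zfs.prefetch_disable=1     # keep prefetch off while recovering
    vfs.zfs.arc_max="8G"           # cap the ARC so the import has headroom
    vfs.zfs.txg.timeout="5"
    kern.maxvnodes=250000
    # delete the vfs.zfs.write_limit_override line entirely; the
    # tunable no longer exists on recent releases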

I'd recommend using a fresh -CURRENT snapshot if possible (possibly
with a -NODEBUG kernel).
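
For reference, the sequence I had in mind looks roughly like this;
the altroot (/mnt) is only an example, and -X is a last resort, so
run the -n (dry-run) form first:

    # 1. get any old cache file out of the way (if one reappears)
    mv /boot/zfs/zpool.cache /boot/zfs/zpool.cache.bad

    # 2. dry-run an extreme rewind import to see what it proposes
    zpool import -f -n -F -X storage

    # 3. if the proposal looks sane, import read-only and unmounted
    #    first, so nothing else gets written while you copy data off
    zpool import -f -F -X -N -o readonly=on -R /mnt storage

If even the read-only import runs the machine out of memory, that is
where booting a fresh -CURRENT snapshot may behave better.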

Cheers,
