From: Stefan Esser
Date: Mon, 09 Feb 2015 16:14:22 +0100
To: Michelle Sullivan, freebsd-fs@freebsd.org
Subject: Re: ZFS pool faulted (corrupt metadata) but the disk data appears ok...

On 09.02.2015 14:19, Michelle Sullivan wrote:
> Stefan Esser wrote:
>>
>> The point where zdb seg faults hints at the data structure that is
>> corrupt. You may get some output before the seg fault, if you add
>> a number of -v options (they add up to higher verbosity).
>>
>> Else, you may be able to look at the core and identify the function
>> that fails. You'll most probably need zdb and libzfs compiled with
>> "-g" to get any useful information from the core, though.
>>
>> For my failed pool, I noticed that internal assumptions were
>> violated, due to some free space occurring in more than one entry.
>> I had to special-case the test in some function to ignore this
>> situation (I knew that I'd only ever want to mount that pool
>> R/O to rescue my data). But skipping the test did not suffice,
>> since another assert triggered (after skipping the NULL dereference,
>> the calculated sum of free space did not match the recorded sum, so I
>> had to disable that assert, too). With these two patches I was able
>> to recover the pool starting at a TXG less than 100 transactions back,
>> which was sufficient for my purpose ...
>>
>
> The question is: will zdb 'fix' things, or is it just a debug utility
> (for displaying)?
The purpose of zdb is to access the pool without the need to import it
(which tends to crash the kernel) and to possibly identify a safe TXG to
go back to. Once you have found that zdb survives accesses to the
critical data structures of your pool, you can then try to import the
pool to rescue your data.

> If it is just a debug utility and won't fix anything, I'm quite happy
> to roll back transactions; the question is how (presumably after one
> finds the corrupt point - I'm quite happy to just do it by hand until I
> get success - it will save 2+ months of work - I did get an output with
> a date/time that indicates where the rollback would go to...)
>
> In the meantime this appears to be working without crashing - it's been
> running for days now...
>
>   PID USERNAME  THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
>  4332 root      209  22    0 23770M 23277M uwait  1 549:07 11.04% zdb -AAA -L -uhdi -FX -e storage

Options -u and -h do not take much time; -i depends on how much was in
the intent log (and recovery should be possible without it, if your
kernel is not too old with regard to supported ZFS features).

zdb -d takes a long time, and if it succeeds, you should be able to
recover your data. But zdb -m should also run to completion (and ISTR
that, in my case, that was where my kernel blew up trying to import the
pool).

Using the debugger to analyze the failed instruction let me work around
the inconsistency with two small patches (one skipped a consistency
check, the second fixed up the sum of free space, which was
miscalculated because the free block that led to the panic was omitted).
After I had tested these patches with zdb, I was able to import the pool
into a kernel that included these exact patches.

You obviously do not want to perform any other activities with the
patched kernel, since it lacks some internal checks - it is required
purely for the one-time backup operation of the failed pool.

So, zdb and even the patches that make zdb dump your pool's internal
state will not directly give you access to your data. But if you manage
to print all state with "zdb -dm", chances are very good that you'll be
able to import the pool - possibly with temporary hacks to libzfs that
skip corrupt data elements (if they are not strictly required for read
access to your data).

After that succeeds, you have a good chance of copying off your data
using a kernel that has the exact same patches in the ZFS driver ... (if
any are required, as in my case).

Regards, Stefan
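
P.S.: To make the above a bit more concrete, a session along these lines
is what I have in mind - this is only a sketch, not a recipe: the pool
name "storage" is taken from your top output, the altroot /recover is
just an example, and you should double-check every option against the
zdb(8) and zpool(8) man pages on your system before running anything:

  # Inspect the pool without importing it. -e works on a non-imported
  # pool, -AAA tolerates failed assertions, -L skips leak checking.
  zdb -AAA -L -e -u storage     # uberblocks (fast)
  zdb -AAA -L -e -h storage     # pool history (fast)
  zdb -AAA -L -e -d storage     # datasets/objects (slow)
  zdb -AAA -L -e -m storage     # metaslabs / space maps

  # If zdb gets through -d and -m without crashing, attempt a
  # read-only rewind import under an alternate root:
  zpool import -o readonly=on -f -F -R /recover storage

  # Then copy the data off, e.g. with rsync or zfs send/recv.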