Date:      Sat, 20 Feb 2021 13:36:48 -0500
From:      Allan Jude <allanjude@freebsd.org>
To:        freebsd-fs@freebsd.org
Subject:   Re: Reading a corrupted file on ZFS
Message-ID:  <f138fbdc-9838-e58b-adda-ca3e5eca34be@freebsd.org>
In-Reply-To: <CAOtMX2gktKYJeDeLm6zR=1inuH-hvahXmc7VR1mO25V7g8t48Q@mail.gmail.com>
References:  <da892eeb-233f-551f-2faa-62f42c3c1d5b@artem.ru> <0ca45adf-8f60-a4c3-6264-6122444a3ffd@denninger.net> <899c6b4f-2368-7ec2-4dfe-fa09fab35447@artem.ru> <20210212165216.2f613482@fabiankeil.de> <10977ffc-f806-69dd-0cef-d4fd4fc5f649@artem.ru> <2f82f113-9ca1-99a9-a433-89e3ae5edcbe@denninger.net> <2bf4f69c-9d5d-5ff9-0daa-c87515437ca3@artem.ru> <CAOtMX2gktKYJeDeLm6zR=1inuH-hvahXmc7VR1mO25V7g8t48Q@mail.gmail.com>

On 2021-02-12 13:51, Alan Somers wrote:
> On Fri, Feb 12, 2021 at 11:26 AM Artem Kuchin <artem@artem.ru> wrote:
> 
>> 12.02.2021 19:37, Karl Denninger wrote:
>>> On 2/12/2021 11:22, Artem Kuchin wrote:
>>>>
>>>> This is frustrating. why..why..
>>>
>>> You created a synthetic situation that in the real world almost-never
>>> exists (ONE byte modified in all copies in the same allocation block
>>> but all other data in that block is intact and recoverable.)
>>>
>> It could be a 1 GB file on ZFS with a block size of 1 MB and with rotten
>> bits within the same 1 MB block on different disks. How I did it is
>> not important; life is unpredictable, and I'm not trying to avoid
>> everything. The question is what to do when it happens. And currently
>> the answer is: nothing.
>>
>>
>>> In almost-all actual cases of "bit rot" it's exactly that; random and
>>> by statistics extraordinarily unlikely to hit all copies at once in
>>> the same allocation block.  Therefore, ZFS can and does fix it; UFS or
>>> FAT silently returns the corrupted data, propagates it, and eventually
>>> screws you down the road.
>>
>> For an active filesystem you are right. But if this is a storage disk
>> with movies and photos, then I can just checksum all files with a little
>> script and recheck them once in a while. So, for storage purposes I keep
>> all the ZFS positives and can also read as much of the data as possible,
>> because for long-term storage it is more important to be able to read
>> the data in any case.
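
As an aside, a minimal sketch of such a "checksum and recheck" script
(a toy illustration only, assuming Python and SHA-256; the paths and the
manifest name are placeholders, not anything ZFS provides):

#!/usr/bin/env python3
# Toy checksum manifest: record the SHA-256 of every file under a
# directory, then re-verify later.
import hashlib, os

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root, manifest="manifest.sha256"):
    with open(manifest, "w") as out:
        for dirpath, _, names in os.walk(root):
            for name in names:
                p = os.path.join(dirpath, name)
                out.write(f"{sha256_of(p)}  {p}\n")

def verify_manifest(manifest="manifest.sha256"):
    for line in open(manifest):
        digest, path = line.rstrip("\n").split("  ", 1)
        if sha256_of(path) != digest:
            print(f"MISMATCH: {path}")

Note that the manifest itself is unprotected: if it gets corrupted, the
script reports false mismatches, which is part of the point made further
down about a corrupted checksum file.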
>>
>>
>>>
>>> The nearly-every-case situation in the real world where a disk goes
>>> physically bad (I've had this happen *dozens* of times over my IT
>>> career) results in the drive being unable to
>>
>>
>> *NEARLY* is not good enough for me.
>>
>>
>>> return the block at all;
>>
>>
>> You are mixing up device blocks and ZFS blocks. As far as I remember, the
>> default ZFS block for checksumming is 16K, and for storing big files it is
>> better to have it around 128K.
>>
>>
>>> In short there are very, very few actual "in the wild" failures where
>>> one byte is damaged and the rest surrounding that one byte is intact
>>> and retrievable.  In most cases where an actual failure occurs the
>>> unreadable data constitutes *at least* a physical sector.
>>>
>> "very very few" is enough for me to think about.
>>
>> One more thing: if you have one bad byte in a 16K block and you have its
>> checksum, then it is quite possible to just brute force every byte until
>> the block matches the checksum again, thus restoring the data.
>>
>> If you have a mirror where the two copies each have a different bad byte,
>> then brute forcing is even easier.
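
As an aside, a toy sketch of that brute-force idea, purely for
illustration; it assumes SHA-256 rather than ZFS's actual checksum, and
it is not something ZFS or zdb provides:

import hashlib

def repair_one_byte(block, expected_digest):
    # Try every single-byte substitution until the block's SHA-256
    # matches the expected checksum; return the repaired block, or
    # None if no single-byte fix works.
    buf = bytearray(block)
    for pos in range(len(buf)):
        orig = buf[pos]
        for val in range(256):
            if val == orig:
                continue
            buf[pos] = val
            if hashlib.sha256(buf).digest() == expected_digest:
                return bytes(buf)
        buf[pos] = orig
    return None

def differing_offsets(copy_a, copy_b):
    # With a mirror, the two copies should agree everywhere except at
    # the damaged offsets, so only those positions need to be tried.
    return [i for i, (x, y) in enumerate(zip(copy_a, copy_b)) if x != y]

For a 16K record that is at most 16384 * 255 hash computations, so the
idea is computationally cheap; the point is simply that ZFS does not
attempt it for you.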
>>
>> Somehow, ZFS slaps my hands and does not let me make sure that I can
>> restore the data when I need it and decide for myself whether it is
>> okay or not.
>>
>> For long-term storage of big files it now seems better to store them on a
>> UFS mirror, checksum each 512-byte block of every file, store the checksums
>> separately, and run a monthly/weekly "scrub". This way I would sleep better.
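
A sketch of that per-block scheme, again only as an illustration (the
512-byte granularity comes from the idea above; SHA-256 and the names
are arbitrary choices):

import hashlib

BLOCK = 512

def block_digests(path):
    # One SHA-256 digest per 512-byte block of the file.
    digests = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(BLOCK)
            if not chunk:
                break
            digests.append(hashlib.sha256(chunk).hexdigest())
    return digests

def report_bad_ranges(path, stored_digests):
    # Compare current per-block digests against the stored ones and
    # report which byte ranges no longer match.
    for i, (now, then) in enumerate(zip(block_digests(path), stored_digests)):
        if now != then:
            print(f"{path}: bytes {i*BLOCK}-{i*BLOCK + BLOCK - 1} changed")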
>>
> 
> GOD NO.   ZFS is really quite good at preserving your data integrity.  For
> example, with your suggested scheme what would protect you from a corrupted
> checksum file?  Nothing.  In ZFS, the Merkle hash tree would detect such a
> thing.  Karl is correct: the type of corruption you're worried about is
> almost non-existent in the real world.  Why?  LDPC coding, for one reason.
> For the last 10+ years, hard disks have encoded data using LDPC.  Older
> hard disk encoding schemes, like Reed-Solomon encoding, stored the data in
> a format similar to RAID: as data + parity.  That's why older ATA standards
> had a "READ LONG" command.  But with LDPC, the "original" data does not
> exist anywhere on the platter.  It gets transformed into a large codeword
> with data and parity intermingled.  Physical damage will either be
> correctable (most likely), render the entire codeword illegible (less
> likely), or cause it to decode into completely wrong data (least likely).
> There simply isn't any way to randomly flip a single bit, once it's been
> written to the media.
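
To make the Merkle-tree point above concrete, a toy sketch (nothing
ZFS-specific; in ZFS the checksum of every block is stored in its parent
block pointer, all the way up to the uberblock):

import hashlib

def h(data):
    return hashlib.sha256(data).digest()

leaves = [b"data block 0", b"data block 1"]

# "Indirect block": the concatenation of the children's checksums.
indirect = b"".join(h(x) for x in leaves)

# The root checksum covers the indirect block, i.e. the checksums themselves.
root = h(indirect)

# Corrupt one stored checksum: the root no longer matches, so corruption
# of a checksum (not just of the data) is detected.
corrupted = bytearray(indirect)
corrupted[0] ^= 0xFF
assert h(bytes(corrupted)) != root

A standalone checksum file has no such parent, which is why nothing
protects it.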
> 
> But if you really, really REALLY want to read blocks that have been
> deliberately corrupted, you can do it laboriously with zdb.  Use zdb to
> show the dnode, which will include the record pointers for each block.  You
> can decode those and extract the data from the disks with dd.  The exact
> procedure is left as an exercise to the reader.
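
For the curious, the rough shape of that exercise with stock zdb looks
something like this (the pool name, object number, and DVA fields are
placeholders; see zdb(8) for the exact syntax):

# Dump the file's dnode, including its block pointers and their
# DVAs (<vdev>:<offset>:<size>).
zdb -ddddd tank/dataset <object-number>

# Read one block directly from the devices; the trailing :d asks zdb
# to decompress it.
zdb -R tank <vdev>:<offset>:<psize>:d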
> 
> -Alan
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
> 

Specifically, my hackathon project at the 2020 OpenZFS developer summit
was to make this less laborious.

https://github.com/openzfs/zfs/commit/393e69241eea8b5f7f817200ad283b7d5b5ceb70

It allows you to use zdb to copy the file out, even if the pool will not
import. You might need to modify it slightly to do what you want in the
case of an error (fill the one bad record with zeros, or return the trashed data).

-- 
Allan Jude


