Date:      Fri, 12 Feb 2021 21:26:46 +0300
From:      Artem Kuchin <artem@artem.ru>
To:        freebsd-fs@freebsd.org
Subject:   Re: Reading a corrupted file on ZFS
Message-ID:  <2bf4f69c-9d5d-5ff9-0daa-c87515437ca3@artem.ru>
In-Reply-To: <2f82f113-9ca1-99a9-a433-89e3ae5edcbe@denninger.net>
References:  <da892eeb-233f-551f-2faa-62f42c3c1d5b@artem.ru> <0ca45adf-8f60-a4c3-6264-6122444a3ffd@denninger.net> <899c6b4f-2368-7ec2-4dfe-fa09fab35447@artem.ru> <20210212165216.2f613482@fabiankeil.de> <10977ffc-f806-69dd-0cef-d4fd4fc5f649@artem.ru> <2f82f113-9ca1-99a9-a433-89e3ae5edcbe@denninger.net>

On 12.02.2021 19:37, Karl Denninger wrote:
> On 2/12/2021 11:22, Artem Kuchin wrote:
>>
>> This is frustrating. why..why..
>
> You created a synthetic situation that in the real world almost-never 
> exists (ONE byte modified in all copies in the same allocation block 
> but all other data in that block is intact and recoverable.)
>
It could be a 1 GB file on ZFS with a block size of 1 MB, with rotten 
bits within the same 1 MB block on different disks. How it happened is 
not important; life is unpredictable, and I am not trying to avoid 
everything. The question is what to do when it happens. And currently 
the answer is: nothing.


> In almost-all actual cases of "bit rot" it's exactly that; random and 
> by statistics extraordinarily unlikely to hit all copies at once in 
> the same allocation block.  Therefore, ZFS can and does fix it; UFS or 
> FAT silently returns the corrupted data, propagates it, and eventually 
> screws you down the road.

For an active fs you are right. But if this is a storage disk with movies 
and photos, then I can just checksum all files with a little script and 
recheck them once in a while. So, for storage purposes I get all the ZFS 
positives and can also read as much of the data as possible, because for 
long-term storage the ability to read the data in any case is more 
important.


>
> The nearly-every-case situation in the real world where a disk goes 
> physically bad (I've had this happen *dozens* of times over my IT 
> career) results in the drive being unable to 


*NEARLY* is not good enough for me.


> return the block at all; 


You are mixing device blocks and ZFS blocks. As far as I remember, the 
default ZFS block for checksumming is 16K, and for big-file storage it 
is better to have it around 128K.


> In short there are very, very few actual "in the wild" failures where 
> one byte is damaged and the rest surrounding that one byte is intact 
> and retrievable.  In most cases where an actual failure occurs the 
> unreadable data constitutes *at least* a physical sector.
>
"Very, very few" is still enough for me to think about.

One more thing. If you have one bad byte in a block of 16K and you have 
the checksum, then it is quite possible to just brute-force every byte 
to match the checksum, thus restoring the data.
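A sketch of that brute force (assuming a SHA-256 checksum of the block 
for illustration; ZFS datasets actually default to fletcher4 with 
sha256 as an option): try every position and every byte value until the 
checksum matches again.

```python
import hashlib


def repair_single_byte(block: bytes, expected_digest: bytes):
    """Recover a block with exactly one corrupted byte by brute-forcing
    every (position, value) pair against the known good checksum.
    Returns the repaired block, or None if no single-byte fix matches."""
    buf = bytearray(block)
    for pos in range(len(buf)):
        original = buf[pos]
        for candidate in range(256):
            if candidate == original:
                continue
            buf[pos] = candidate
            if hashlib.sha256(buf).digest() == expected_digest:
                return bytes(buf)
        buf[pos] = original  # restore before trying the next position
    return None
```

For a 16K block this is at most 16384 × 255 ≈ 4.2 million hash 
attempts, which is cheap on modern hardware.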

If you have a mirror with two different bad bytes, then brute forcing is even easier.
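A sketch of why the mirror case is easier (hypothetical helper, SHA-256 
assumed again): the two copies can only disagree at corrupted offsets, 
so only those few positions need to be searched.

```python
import hashlib
import itertools


def repair_from_mirrors(copy_a: bytes, copy_b: bytes, expected_digest: bytes):
    """Repair a block from two mirror copies corrupted at different
    offsets, using the known good checksum.

    The copies can only differ where at least one of them is wrong, so
    instead of scanning the whole block we try, at each differing
    position, the byte from either copy."""
    diffs = [i for i, (a, b) in enumerate(zip(copy_a, copy_b)) if a != b]
    buf = bytearray(copy_a)
    for choice in itertools.product(*((copy_a[i], copy_b[i]) for i in diffs)):
        for pos, val in zip(diffs, choice):
            buf[pos] = val
        if hashlib.sha256(buf).digest() == expected_digest:
            return bytes(buf)
    return None
```

With one bad byte per copy there are only two differing positions and 
four combinations to test; the approach fails only when both copies are 
wrong at the same offset, which is exactly the synthetic situation Karl 
describes.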

Somehow, ZFS slaps my hands and does not let me make sure that I can 
restore the data when I need it, and decide for myself whether it is okay or not.

For long-term storage of big files it now seems better to store them on 
a UFS mirror, checksum each 512-byte block of the files, store the 
checksums separately, and run a monthly/weekly "scrub". This way I would sleep better.
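The per-block scheme could look like this sketch (512-byte granularity 
as above, SHA-256 assumed), so that the home-made scrub can point at the 
exact damaged block instead of just failing the whole file:

```python
import hashlib

BLOCK = 512  # checksum granularity in bytes


def block_checksums(path: str) -> list[str]:
    """Digest a file in fixed 512-byte blocks; store the result separately."""
    sums = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK)
            if not block:
                break
            sums.append(hashlib.sha256(block).hexdigest())
    return sums


def scrub(path: str, stored: list[str]) -> list[int]:
    """Return the indices of blocks whose digest no longer matches."""
    return [i for i, s in enumerate(block_checksums(path)) if s != stored[i]]
```

A non-empty result from scrub tells you which 512-byte block to restore 
from the other half of the mirror, or to brute-force as described above.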


Artem