Date: Thu, 9 Jun 2011 16:07:17 -0700
From: Artem Belevich <art@freebsd.org>
To: Greg Bonett <greg@bonett.org>
Cc: "stable@freebsd.org" <stable@freebsd.org>
Subject: Re: recover file from destroyed zfs snapshot - is it possible?
Message-ID: <BANLkTi=_phuV_V3AqFSWc1CeXn0qdyvkrQ@mail.gmail.com>
In-Reply-To: <1307659424.2135.43.camel@ubuntu>
References: <1307649610.2135.29.camel@ubuntu> <BANLkTinTEazsjP=hW=8OA2ECaumO5kNQkA@mail.gmail.com> <1307659424.2135.43.camel@ubuntu>
On Thu, Jun 9, 2011 at 3:43 PM, Greg Bonett <greg@bonett.org> wrote:
> One question though, you say it's necessary that "appropriate
> disk blocks have not been reused by more recent transactions"
> Is it not possible for me to just read all the disk blocks looking for
> the filename and string it contained? How big are disk blocks, is it
> possible the whole 16k file is on one or a few contiguous blocks?

Whether all your data is in a single block depends on how large the file is and exactly how it was written out. If it was written all at once, chances are it ended up laid out sequentially somewhere on disk. If the file was written more than once, you may find several variants of it; telling which one is the most recent without parsing ZFS metadata would be up to you.

Another question is whether the content will be easy to identify. If you have compression turned on, simply grepping for the content will not work.

So, if your pool does not have compression enabled, you know what was in the file, and you are reasonably sure you can tell whether the data you recover is consistent, then by all means start by searching for that content in the raw data. The default ZFS block size is 128K, so for a small file written all at once there is a good chance it went out in a single chunk.

--Artem