Date: Mon, 05 Nov 2012 15:42:45 +1000
From: Da Rock <freebsd-questions@herveybayaustralia.com.au>
To: freebsd-questions@freebsd.org
Subject: Re: Questions about dump/restore to/from DVD media
Message-ID: <509751D5.7060902@herveybayaustralia.com.au>
In-Reply-To: <20121105051447.6eef32ef.freebsd@edvax.de>
References: <20121105035233.e3c4ae8a.freebsd@edvax.de> <22095.1352087364@tristatelogic.com> <20121105051447.6eef32ef.freebsd@edvax.de>
On 11/05/12 14:14, Polytropon wrote:
> On Sun, 04 Nov 2012 19:49:24 -0800, Ronald F. Guilmette wrote:
>> In message <20121105035233.e3c4ae8a.freebsd@edvax.de>,
>> Polytropon <freebsd@edvax.de> wrote:
>>
>>>> But as I said (above), to make this really work right, dump & restore really
>>>> need to have -z options, and do the zipping/unzipping internally. Only
>>>> if this were available could dump properly deal with end-of-media on any
>>>> given output volume, I think.
>>> The problem is that delegating compression to a "sub-task" would
>>> imply that dump cannot precisely adjust its output to match the
>>> media size (as the limit is now defined by how well the compression
>>> works).
>> Correct. We have both just said the exact same thing in different ways.
>>
>> In order to have _compression_ of the dump data _and_ still be able to
>> divide the (post-compression) data into nice proper 2KB chunks (as required
>> for DVD+/-R writing), the compression step itself would need to be integrated
>> into the dump program itself (and then, for symmetry, if for no other
>> reason, into restore as well).
> Chunk size _and_ media size matter (as dump would have to "know"
> when the media is expected to be "nearly full" _with_ compression)
> because the operator will be required to deal with multi-volume
> media ("next DVD").
>
>>>> (I hate to say it, because in general I loathe & despise Windows, but even
>>>> Windows has a built-in facility for making a single backup of an _entire_
>>>> system, and in a single step, *and*, I presume, in a space-efficient manner.)
>>> That would be a task for dd. :-)
>> Sorry? I am not following you.
>>
>> How could dd ever substitute for the intelligence of dump(8), and specifically
>> how could it avoid copying blocks that are ``in'' the filesystem but which
>> are not currently _allocated_ by the filesystem?
> It cannot. :-)
>
> With dd, you could copy a disk including all aspects of the
> present slices and partitions (including file attributes and
> partitioning data, even boot elements), but it would maybe
> require a subsequent "read and compare" step to make sure
> that everything went well.
>
>> (I am also not persuaded that dd could handle multiple partitions any better
>> than dump(8) currently does... which is to say not at all, really.)
> It can - depending on what device you're reading from.
>
> Examples:
>
> 	dd if=/dev/ad0s1a	-> the root partition
> 	dd if=/dev/ad0s1	-> the 1st slice
> 	dd if=/dev/ad0		-> the whole disk
>
> However, dd is very much "bare metal" and cannot handle multiple
> volumes and compression natively. It would be necessary to have
> all those functionalities scripted additionally.

For reference, if one did back up the whole slice/disk using dd and then compressed the data, would that effectively compress all those 'unallocated' nodes?
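[Editor's note: as a minimal sketch of the kind of extra scripting Polytropon refers to above (compressing a dd image and cutting it into DVD-sized volumes), something along these lines would work. The device name, block size, output prefix, and the 4300m piece size are assumptions; 4300m stays below a single-layer DVD's capacity and is a multiple of 2 KB, though the final piece may still need padding to a 2 KB boundary before burning.]

	# read the slice, compress the stream, and cut it into DVD-sized pieces
	dd if=/dev/ad0s1 bs=64k | gzip -c | split -b 4300m - /backup/ad0s1.gz.

	# restore by reassembling the pieces and reversing the pipeline
	cat /backup/ad0s1.gz.* | gunzip -c | dd of=/dev/ad0s1 bs=64k

[Each piece still has to be burned and, on restore, copied back off the discs by hand; nothing here prompts for the "next DVD" the way dump's multi-volume handling does.]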