Date: Sun, 3 Mar 2013 09:14:46 +1100
From: Peter Jeremy <peter@rulingia.com>
To: freebsd-stable@freebsd.org
Subject: Re: Musings on ZFS Backup strategies

On 2013-Mar-01 08:24:53 -0600, Karl Denninger wrote:
>If I then restore the base and snapshot, I get back to where I was when
>the latest snapshot was taken.  I don't need to keep the incremental
>snapshot for longer than it takes to zfs send it, so I can do:
>
>zfs snapshot pool/some-filesystem@unique-label
>zfs send -i pool/some-filesystem@base pool/some-filesystem@unique-label
>zfs destroy pool/some-filesystem@unique-label
>
>and that seems to work (and restore) just fine.

This gives you an incremental since the base snapshot - which will
probably grow in size over time.  If you are storing the ZFS send
streams on (eg) tape, rather than receiving them, you probably still
want a "Towers of Hanoi" style backup hierarchy to control your backup
volume.

It's also worth noting that whilst the stream will contain the
compression attributes of the filesystem(s) in it, the actual data in
the stream is uncompressed.

>This in turn means that keeping more than two incremental dumps offline
>has little or no value; the second merely being taken to insure that
>there is always at least one that has been written to completion without
>error to apply on top of the base.

This is quite a critical point with this style of backup: the ZFS send
stream is not intended as an archive format.  It includes error
detection but no error correction, and any error in a stream renders
the whole stream unusable (you can't retrieve only part of a stream).
If you go this way, you probably want to wrap the stream in a FEC
container (eg based on ports/comms/libfec) and/or keep multiple copies.

The "recommended" approach is to do "zfs send | zfs recv" and store a
replica of your pool (with whatever level of RAID meets your needs).
This way, you immediately detect an error in the send stream and can
repeat the send.  You then use scrub to verify (and recover) the
replica.
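A minimal sketch of that replicate-and-verify cycle, reusing the
dataset names from the quote above (the hostname "backuphost" and the
pool "backuppool" are invented for illustration):

  zfs snapshot pool/some-filesystem@unique-label
  zfs send -i pool/some-filesystem@base pool/some-filesystem@unique-label | \
      ssh backuphost zfs recv -F backuppool/some-filesystem
  # periodically verify (and let redundancy repair) the replica
  ssh backuphost zpool scrub backuppool

Since zfs recv verifies the stream checksums as it writes, a damaged
stream fails immediately and you can simply repeat the send; the
periodic scrub then catches any later corruption on the replica.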
>(Yes, I know, I've been a ZFS resister.... ;-))

"Resistance is futile." :-)

On 2013-Mar-01 15:34:39 -0500, Daniel Eischen wrote:
>It wasn't clear that snapshots were traversable as a normal
>directory structure.  I was thinking it was just a blob
>that you had to roll back to in order to get anything out
>of it.

Snapshots appear in a .zfs/snapshot/SNAPSHOT_NAME directory at each
mountpoint and are accessible as a normal read-only directory
hierarchy below there (see the example further down).  OTOH, the send
stream _is_ a blob.

>Am I correct in assuming that one could:
>
> # zfs send -R snapshot | dd obs=10240 of=/dev/rst0
>
>to archive it to tape instead of another [system:]drive?

Yes.  The output from zfs send is a stream of bytes that you can
treat as you would any other stream of bytes, so it can be written to
tape as above (a restore sketch follows below).  But this approach
isn't recommended.
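As an example of browsing a snapshot (the pool, filesystem and
snapshot names here are made up), recovering a single file is just a
copy:

  ls /tank/home/.zfs/snapshot/
  cp /tank/home/.zfs/snapshot/2013-02-28/precious-file /tank/home/

Note that the .zfs directory is hidden by default (it won't show up
in a listing of the mountpoint) unless you set snapdir=visible on the
filesystem, but it can always be reached by explicit path.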
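And to restore from such a tape, you would reverse the pipeline -
again only a sketch, assuming the same tape device and block size as
above and a receiving pool named "pool":

  # rewind the tape, then feed the stream back into zfs recv
  mt -f /dev/rst0 rewind
  dd if=/dev/rst0 ibs=10240 | zfs recv -d pool

The ibs here must match the obs used when writing, and "zfs recv -d"
recreates the filesystem hierarchy recorded in the -R stream under
the named pool.

-- 
Peter Jeremy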