From owner-freebsd-stable@FreeBSD.ORG Sun Mar 3 04:23:15 2013
Date: Sun, 3 Mar 2013 04:23:05 +0000
From: Ben Morrow
To: karl@denninger.net, freebsd-stable@freebsd.org
Subject: Re: Musings on ZFS Backup strategies

Quoth Karl Denninger:
> Quoth Ben Morrow:
> > I don't know what medium you're backing up to (does anyone use tape
> > any more?) but when backing up to disk I much prefer to keep the
> > backup in the form of a filesystem rather than as 'zfs send'
> > streams. One reason for this is that I believe that new versions of
> > the ZFS code are more likely to be able to correctly read old
> > versions of the filesystem than old versions of the stream format;
> > this may not be correct any more, though.
> >
> > Another reason is that it means I can do 'rolling snapshot' backups.
> > I do an initial dump like this
> >
> > # zpool is my working pool
> > # bakpool is a second pool I am backing up to
> >
> > zfs snapshot -r zpool/fs@dump
> > zfs send -R zpool/fs@dump | zfs recv -vFd bakpool
> >
> > That pipe can obviously go through ssh or whatever to put the backup
> > on a different machine. Then to make an increment I roll forward the
> > snapshot like this
> >
> > zfs rename -r zpool/fs@dump dump-old
> > zfs snapshot -r zpool/fs@dump
> > zfs send -R -I @dump-old zpool/fs@dump | zfs recv -vFd bakpool
> > zfs destroy -r zpool/fs@dump-old
> > zfs destroy -r bakpool/fs@dump-old
> >
> > (Notice that the increment starts at a snapshot called @dump-old on
> > the send side but at a snapshot called @dump on the recv side. ZFS
> > can handle this perfectly well, since it identifies snapshots by
> > GUID, and will rename the bakpool snapshot as part of the recv.)
> >
> > This brings the filesystem on bakpool up to date with the filesystem
> > on zpool, including all snapshots, but never creates an increment
> > with more than one backup interval's worth of data in it. If you
> > want to keep more history on the backup pool than on the source
> > pool, you can hold off on destroying the old snapshots, and instead
> > rename them to something unique. (Of course, you could always give
> > them unique names to start with, but I find it more convenient not
> > to.)
>
> Uh, I see a potential problem here.
>
> What if the zfs send | zfs recv command fails for some reason before
> completion? I have noted that zfs recv is atomic -- if it fails for
> any reason the entire receive is rolled back like it never happened.
>
> But you then destroy the old snapshot, and the next time this runs
> the new one gets rolled down. It would appear that there's an
> increment missing, never to be seen again.

No, if the recv fails my backup script aborts and doesn't delete the
old snapshot. Cleanup then means removing the new snapshot and renaming
the old one back on the source zpool; in my case I do this by hand, but
it could be automated given enough thought. (The names of the snapshots
on the backup pool don't matter; they will be cleaned up by the next
successful recv.)
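Something along these lines would probably do, though I haven't tested
it; it uses the same zpool/fs and bakpool names as the example above:

#!/bin/sh
# Untested sketch: the same commands as above, but the old snapshots
# are only destroyed once the incremental recv has succeeded, and a
# failed run is rolled back on the source so it can simply be re-run.
set -eu

zfs rename -r zpool/fs@dump dump-old
zfs snapshot -r zpool/fs@dump

if zfs send -R -I @dump-old zpool/fs@dump | zfs recv -vFd bakpool; then
    # The increment is safely on bakpool; drop the old snapshots.
    zfs destroy -r zpool/fs@dump-old
    zfs destroy -r bakpool/fs@dump-old
else
    # recv is atomic, so bakpool is untouched.  Undo the rename on the
    # source so the next run sends the same increment again.
    echo "recv failed; restoring @dump on the source" >&2
    zfs destroy -r zpool/fs@dump
    zfs rename -r zpool/fs@dump-old dump
    exit 1
fi

(The failure test relies on the pipeline's exit status being that of
zfs recv, which fails if the stream is incomplete.)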
> What gets lost in that circumstance? Anything changed between the two
> times -- and silently at that? (yikes!)

It's impossible to recv an incremental stream on top of the wrong
snapshot (identified by GUID, not by its current name), so nothing can
get silently lost. A 'zfs recv -F' will find the correct starting
snapshot on the destination filesystem (assuming it's there) regardless
of its name, and roll forward to the state as of the end snapshot. If a
recv succeeds you can be sure nothing up to that point has been missed.

The worst that can happen is if you mistakenly delete the snapshot on
the source pool that marks the end of the last successful recv on the
backup pool; in that case you have to take an increment from further
back (which will therefore be a larger incremental stream than it
needed to be). The very worst case is if you end up without any
snapshots in common between the source and backup pools, and you have
to start again with a full dump.

Ben
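P.S. Since the matching is done by GUID rather than by name, a quick
way to check which snapshots the two pools still have in common (before
concluding you need a full dump) is to compare the guid property on
both sides. A rough, untested sketch, again using the names from above:

zfs list -H -r -t snapshot -o guid,name zpool/fs | sort > /tmp/src
zfs list -H -r -t snapshot -o guid,name bakpool/fs | sort > /tmp/bak
join /tmp/src /tmp/bak

Any line join prints is a snapshot present on both pools, and therefore
usable as the starting point for an increment.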