From: Karl Denninger <karl@denninger.net>
Date: Sat, 02 Mar 2013 19:56:36 -0600
To: freebsd-stable@freebsd.org
Subject: Re: Musings on ZFS Backup strategies
Message-ID: <5132ADD4.8050507@denninger.net>
In-Reply-To: <5130EB8A.7060706@gmail.com>

Quoth Ben Morrow:
> I don't know what medium you're backing up to (does anyone use tape
> any more?) but when backing up to disk I much prefer to keep the
> backup in the form of a filesystem rather than as 'zfs send' streams.
> One reason for this is that I believe that new versions of the ZFS
> code are more likely to be able to correctly read old versions of the
> filesystem than old versions of the stream format; this may not be
> correct any more, though.
>
> Another reason is that it means I can do 'rolling snapshot' backups.
> I do an initial dump like this
>
> # zpool is my working pool
> # bakpool is a second pool I am backing up to
>
> zfs snapshot -r zpool/fs@dump
> zfs send -R zpool/fs@dump | zfs recv -vFd bakpool
>
> That pipe can obviously go through ssh or whatever to put the backup
> on a different machine. Then to make an increment I roll forward the
> snapshot like this
>
> zfs rename -r zpool/fs@dump dump-old
> zfs snapshot -r zpool/fs@dump
> zfs send -R -I @dump-old zpool/fs@dump | zfs recv -vFd bakpool
> zfs destroy -r zpool/fs@dump-old
> zfs destroy -r bakpool/fs@dump-old
>
> (Notice that the increment starts at a snapshot called @dump-old on
> the send side but at a snapshot called @dump on the recv side. ZFS
> can handle this perfectly well, since it identifies snapshots by
> UUID, and will rename the bakpool snapshot as part of the recv.)
>
> This brings the filesystem on bakpool up to date with the filesystem
> on zpool, including all snapshots, but never creates an increment
> with more than one backup interval's worth of data in. If you want to
> keep more history on the backup pool than the source pool, you can
> hold off on destroying the old snapshots, and instead rename them to
> something unique.
> (Of course, you could always give them unique names to start with,
> but I find it more convenient not to.)

Uh, I see a potential problem here.

What if the zfs send | zfs recv command fails for some reason before
completion? I have noted that zfs recv is atomic -- if it fails for
any reason the entire receive is rolled back like it never happened.

But you then destroy the old snapshot, and the next time this runs the
new snapshot gets rolled down in its place. It would appear that
there's an increment missing, never to be seen again.

What gets lost in that circumstance? Anything changed between the two
times -- and silently at that? (yikes!)

-- 
-- Karl Denninger
/The Market Ticker ®/
Cuda Systems LLC
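
One way to close that window is to destroy the old snapshots only after
the receive has reported success. What follows is only a rough sketch of
that idea in plain /bin/sh, reusing the dataset names from the quoted
example (zpool/fs and bakpool); it assumes a local backup pool and makes
no attempt to cope with a partially applied recursive stream:

    #!/bin/sh
    # Sketch: roll the @dump snapshot forward, but keep the old base
    # snapshot until the incremental receive has succeeded.
    SRC=zpool/fs     # working dataset (names from the example above)
    DST=bakpool      # backup pool

    zfs rename -r ${SRC}@dump ${SRC}@dump-old || exit 1
    zfs snapshot -r ${SRC}@dump || exit 1

    if zfs send -R -I @dump-old ${SRC}@dump | zfs recv -vFd ${DST}
    then
        # Increment is on the backup pool; the old base can go.
        zfs destroy -r ${SRC}@dump-old
        zfs destroy -r ${DST}/fs@dump-old
    else
        # Receive failed and was rolled back on the backup pool.
        # Undo the roll-forward so the next run starts from the same
        # base and the missed interval is picked up then, not lost.
        zfs destroy -r ${SRC}@dump
        zfs rename -r ${SRC}@dump-old ${SRC}@dump
        exit 1
    fi

Because a failed zfs recv is rolled back, undoing the rename on the
source leaves both pools as they were before the run, so the next
successful run simply carries a larger increment instead of silently
dropping one.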