Date:      Thu, 30 Oct 2008 17:18:00 +0200
From:      Nikolay Denev <ndenev@gmail.com>
To:        Freddie Cash <fjwcash@gmail.com>
Cc:        freebsd-stable@freebsd.org
Subject:   Re: Anyone used rsync scriptology for incremental backup?
Message-ID:  <444B4891-057B-4E09-99A1-A50A1187109E@gmail.com>
In-Reply-To: <200810300804.31186.fjwcash@gmail.com>
References:  <20081029231926.GA35188@0lsen.net> <b269bc570810292200q37939f21tf5918014ade777b2@mail.gmail.com> <95550BEC-DB92-4C68-8409-3DFF7C0B86C0@gmail.com> <200810300804.31186.fjwcash@gmail.com>



On 30 Oct, 2008, at 17:04, Freddie Cash wrote:

> On October 30, 2008 01:25 am Nikolay Denev wrote:
>> On 30 Oct, 2008, at 07:00, Freddie Cash wrote:
>>> On Thu, Oct 30, 2008 at 1:50 AM, Andrew Snow <andrew@modulus.org>
>>> wrote:
>>>> In this way, each day we generate a batch file that lets us step
>>>> back one day.  The diffs themselves, compressed with gzip, are
>>>> extremely space efficient.  We can step back potentially hundreds
>>>> of days, though it sometimes throws errors when backing up Windows
>>>> boxes, which I haven't tracked down yet.
>>>>
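For reference, such step-back batches can be produced with rsync's
batch mode.  The --only-write-batch/--read-batch options are real
rsync features, but the host, paths, and exact scheme below are
illustrative assumptions, not Andrew's actual script:

  #!/bin/sh
  # Sketch only: "server" and the /backup paths are hypothetical.
  DATE=$(date +%Y-%m-%d)
  MIRROR=/backup/mirror
  BATCH=/backup/batches/step-back-$DATE

  # Record the delta that would turn today's data back into the
  # current (i.e. yesterday's) mirror; --only-write-batch writes the
  # batch file without updating anything.
  rsync -a --delete --only-write-batch="$BATCH" "$MIRROR/" server:/data/

  # Then bring the mirror up to date as usual.
  rsync -a --delete server:/data/ "$MIRROR/"

  # The reverse diffs compress very well, as noted above.
  gzip "$BATCH"

  # Stepping back later means applying the batches newest-first, e.g.:
  #   gunzip < step-back-2008-10-30.gz | rsync --read-batch=- "$MIRROR"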
>>>> But to be honest, soon you can save yourself a lot of hassle by
>>>> simply using ZFS and taking snapshots.  It'll be faster, and with
>>>> compression, very space efficient.
>>>
>>> That's exactly what we do: use ZFS and rsync.  We have a ZFS
>>> /storage/backup filesystem, with directories for each remote site
>>> and sub-directories for each server to be backed up.
>>>
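Setting that layout up is a one-off along these lines (pool, site,
and server names are examples, not from the original post):

  # One compressed ZFS filesystem holds all backups; sites and
  # servers are plain directories beneath it.
  zfs create -o compression=on storage/backup
  mkdir -p /storage/backup/site1/server1 /storage/backup/site1/server2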
>>> Each night we snapshot the directory, then run rsync to back up
>>> each server.  Snapshots are named with the current date.  For 80
>>> FreeBSD and Linux servers, we average 10 GB of changed data a night.
>>>
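As a minimal sketch of that nightly cycle (dataset, date format, and
host name are assumptions):

  #!/bin/sh
  # Snapshot first, so the snapshot preserves last night's state...
  zfs snapshot storage/backup@$(date +%Y-%m-%d)

  # ...then rsync tonight's changes on top of the live filesystem.
  # A real script would exclude pseudo-filesystems such as /proc.
  rsync -a --delete server1.site1.example:/ /storage/backup/site1/server1/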
>>> No muss, no fuss.  We've used it to restore entire servers (boot off
>>> a Knoppix/Frenzy CD, format partitions, rsync back), individual
>>> files (no mounting required, just cd into the
>>> .zfs/snapshot/snapshotname directory and scp the file), and even
>>> once to restore the permissions on a pair of servers where a
>>> clueless admin ran "chown -R user /home" and "chmod -R 777 /home".
>>>
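For example, a single-file restore from a dated snapshot looks
roughly like this (snapshot name and paths are illustrative):

  # Every snapshot is visible under .zfs/snapshot, no mounting needed.
  cd /storage/backup/.zfs/snapshot/2008-10-01/site1/server1/etc
  scp rc.conf root@server1.site1.example:/etc/rc.conf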
>>> Our backup script is pretty much just a double-for loop that scans a
>>> set of site-name directories for server config files, and runs rsync
>>> in parallel (1 per remote site).
>>>
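The shape of such a script, as a sketch (the config-file layout here
is an assumption):

  #!/bin/sh
  # Assumed layout: /backup/conf/<site>/<server>.conf, one per host.
  for site in /backup/conf/*; do
      sitename=$(basename "$site")
      (
          # Servers within a site run serially; sites run in parallel.
          for conf in "$site"/*.conf; do
              server=$(basename "$conf" .conf)
              rsync -a --delete "$server":/ \
                  "/storage/backup/$sitename/$server/"
          done
      ) &
  done
  wait  # block until every site's rsync stream has finished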
>>> We were looking into using variations on rsnapshot, custom
>>> squashfs/hardlink stuff, and other solutions, but once we started
>>> using ZFS, we stopped looking down those roads.  We were able to do
>>> in 3 days of testing and scripting what we hadn't been able to do in
>>> almost a month of research and testing.
>
>> Do you experience problems with the snapshots?
>> Last time I tried something similar for backups, the machine
>> began to spit errors after a few days of snapshots.
>>
>> http://lists.freebsd.org/pipermail/freebsd-fs/2008-February/004413.html
>
> We have 72 daily snapshots so far.  Have had up to 30 of them mounted
> read-only while looking for the right version of a file to restore.
>
> These are ZFS snapshots, very different from UFS snapshots.
>
> -- 
> Freddie Cash
> fjwcash@gmail.com

Yes,

Mine were ZFS snapshots too, and I never managed to keep more than a
few days' worth of snapshots before the machine started printing "bad
file descriptor" errors when accessing the snapshot directory.
But I guess (hope) this problem no longer exists, given that you are
able to keep 72 snapshots.


--
Regards,
Nikolay Denev



