Date:      Mon, 18 Apr 2011 13:15:17 -0700
From:      Artem Belevich <art@freebsd.org>
To:        "Vladislav V. Prodan" <universite@ukr.net>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: Prompt to synchronize two volumes ZFS
Message-ID:  <BANLkTim-zK1cgjbaqDFQfmXWNVD4rCLEkw@mail.gmail.com>
In-Reply-To: <4DAC96EA.8080505@ukr.net>
References:  <4DAC7811.3090407@ukr.net> <BANLkTinsvVzaY9yiQ5QvVGK7gg7cPFMRcA@mail.gmail.com> <4DAC96EA.8080505@ukr.net>

On Mon, Apr 18, 2011 at 12:54 PM, Vladislav V. Prodan
<universite@ukr.net> wrote:
> 18.04.2011 21:12, Artem Belevich wrote:
>>
>> This page outlines the ZFS replication process fairly well:
>> http://wikitech-static.wikimedia.org/articles/z/f/s/Zfs_replication.html
>
> I do not understand why they dump the snapshot to a file and then restore it
> on the remote machine.
> Can't the snapshot simply be piped straight to the other server, something like this?
>
> zfs send -i export/upload@zrep-00001 export/upload@zrep-00002 | ssh
> otherservername "cat > /export/save/upload@zrep-00002"
> cat /export/save/upload@zrep-00002 | zfs recv export/upload
>
> Maybe there are some pitfalls?

They mentioned performance. Putting mbuffer between send and receive makes
*a lot* of difference, as long as you provide a few seconds' worth of
buffering at the rate your filesystems can sustain. I think the
authors of the page above just didn't use a large enough buffer. You
would probably have to experiment yourself. In my case of a ~3TB
transfer (mostly large files), I ended up with "mbuffer -m512M". I
also used mbuffer's built-in network transfer mechanism (see mbuffer's
-I/-O options), since at high data rates ssh became the bottleneck.
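
A rough sketch of such a pipeline (the port number, buffer size and dataset
names below are only examples, so adjust them to your own setup):

  # on the receiving host: listen on a port, buffer the stream, feed zfs recv
  mbuffer -I 9090 -m 512M | zfs recv export/upload

  # on the sending host: pipe the incremental stream to the receiver
  zfs send -i export/upload@zrep-00001 export/upload@zrep-00002 | \
      mbuffer -O otherservername:9090 -m 512M

This keeps ssh out of the data path entirely; the tradeoff is that the stream
travels unencrypted, which is usually acceptable only on a trusted network.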

--Artem


