Date: Mon, 28 Nov 2011 16:01:56 -0700
From: Techie <techchavez@gmail.com>
To: freebsd-fs@freebsd.org
Subject: ZFS dedup and replication
Message-ID: <CAEUA181wUZC-KjVwcm=tTY0DoBLzrNAuBF3aFimSbLB=xht0jw@mail.gmail.com>
Hi all,

Are there any plans to implement sharing of the ZFS DDT (dedup table), or to make ZFS aware of the duplicate blocks that already exist on a remote system?

From how I understand it, a zfs send/recv stream does not know about the duplicate blocks on the receiving side when using "zfs send -D -i" to send only incremental changes.

Take, for example, an application that I back up each night to a ZFS file system. I want to replicate this to my remote site every night. Each night that I back up, I create a tar file on the ZFS data file system. When I then send an incremental stream, it sends the entire tar file to the destination, even though over 90% of those blocks already exist there.

Are there any plans to make ZFS aware of what already exists at the destination site, to eliminate the need to send duplicate blocks over the wire? I believe "zfs send -D" only eliminates the duplicate blocks within the stream itself. Perhaps I am wrong.

Thanks
Jimmy
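P.S. For concreteness, the nightly replication step I'm describing looks roughly like this (the pool, dataset, and host names are just placeholders for illustration):

    # take tonight's snapshot of the dataset holding the tar backups
    zfs snapshot tank/backups@2011-11-28

    # send only the changes since last night's snapshot; -D dedups
    # blocks *within* the stream, but blocks that already exist on
    # the receiving pool still get sent over the wire
    zfs send -D -i tank/backups@2011-11-27 tank/backups@2011-11-28 | \
        ssh backuphost zfs receive -F tank/backups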
