From owner-freebsd-fs@FreeBSD.ORG Mon Nov 28 23:23:49 2011
From: Techie <techchavez@gmail.com>
To: freebsd-fs@freebsd.org
Date: Mon, 28 Nov 2011 16:01:56 -0700
Subject: ZFS dedup and replication
List-Id: Filesystems <freebsd-fs@freebsd.org>

Hi all,

Are there any plans to implement sharing of the ZFS dedup table (DDT), or to make ZFS aware of the duplicate blocks that already exist on a remote destination system?

From how I understand it, the zfs send/recv stream does not know about the duplicated blocks on the receiving side when using zfs send -D -i to send only incremental changes.

So take for example: I have an application that I back up each night to a ZFS file system.
I want to replicate this every night to my remote site. Each night that I back up, I create a tar file on the ZFS data file system. When I go to send an incremental stream, it sends the entire tar file to the destination even though over 90% of those blocks already exist at the destination. Are there any plans to make ZFS aware of what already exists at the destination site, to eliminate the need to send duplicate blocks over the wire? I believe zfs send -D only eliminates the duplicate blocks within the stream itself. Perhaps I am wrong.

Thanks
Jimmy
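For reference, the replication workflow described above looks roughly like this (the pool, dataset, snapshot, and host names are placeholders, not from the original mail; and as noted, -D only deduplicates blocks within the generated stream, not against data already stored on the receiver):

```shell
# Take tonight's snapshot of the backup dataset
# ("tank/backup" and "remotehost" are hypothetical names).
zfs snapshot tank/backup@tonight

# Send only the changes since the previous snapshot, with in-stream
# deduplication (-D), and receive them into the remote pool.
zfs send -D -i tank/backup@lastnight tank/backup@tonight | \
    ssh remotehost zfs receive tank/backup

# Caveat: -D removes duplicate blocks *within this one stream* only.
# Blocks already present in the remote pool's DDT are still sent in
# full over the wire, which is exactly the problem raised above.
```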