From: "Brian Gold" <bgold@simons-rock.edu>
To: freebsd-fs@freebsd.org
Date: Wed, 8 Aug 2012 11:27:51 -0400
Subject: undoing zfs deduplication

I've got a system running 9.0-RELEASE with a ZFS v28 pool. Within that pool I have 3 datasets, two of which have deduplication enabled. I've recently been having a lot of performance issues with deduplication and have determined that I need far more RAM than I currently have in order to support dedup. I don't have the budget for the necessary RAM, so I would like to move away from deduplication. I'm aware that you can't simply turn dedup off; you need to completely nuke the filesystem.
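For context, the RAM shortfall can be gauged from the size of the dedup table (DDT) itself. A rough sketch, assuming a pool named `tank` (substitute your own pool name); the ~320-bytes-per-entry figure is the commonly cited in-core cost per DDT entry, not an exact number:

```shell
# Show the DDT histogram for a pool that already has deduped data;
# the "dedup: DDT entries N" line reports the number of table entries.
zpool status -D tank

# For a pool without dedup enabled yet, zdb can simulate it
# (this reads all data in the pool, so it takes a while):
# zdb -S tank

# Each DDT entry costs very roughly 320 bytes of RAM when cached,
# so the in-core footprint is approximately: entries * 320 bytes.
# e.g. 100 million entries -> on the order of 30 GB for the DDT alone.
```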
What I'm wondering is: would it be possible for me to create new datasets within the same pool (I have a ton of available space) and use a combination of "zfs send" & "zfs receive" to migrate my deduped datasets and all of their snapshots (daily, weekly, & monthly) over to the new datasets?

Brian Gold
System Administrator
Bard College at Simon's Rock
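The send/receive migration described above might be sketched roughly as follows. The pool name `tank` and dataset names `olddata`/`newdata` are placeholders, and the exact flag behavior is worth checking against zfs(8) on 9.0 before relying on it:

```shell
# Have new datasets inherit dedup=off by setting it at the pool root.
zfs set dedup=off tank

# Recursively snapshot the source dataset.
zfs snapshot -r tank/olddata@migrate

# Send a replication stream (-R carries all existing snapshots);
# -u keeps the received dataset unmounted until it's verified.
zfs send -R tank/olddata@migrate | zfs receive -u tank/newdata

# Received blocks are rewritten, so they land un-deduped as long as
# dedup is off on the receiving side. Verify, then retire the old copy:
zfs get dedup tank/newdata
# zfs destroy -r tank/olddata
```

One caveat: `zfs send -R` also replicates locally-set properties, so if `dedup=on` was set directly on the source dataset (rather than inherited), it can travel with the stream; in that case it may be safer to send snapshots individually without `-R`, or at least verify the `dedup` property on the receiving dataset immediately after the receive.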