Date: Wed, 8 Aug 2012 08:41:43 -0700
From: Freddie Cash <fjwcash@gmail.com>
To: Brian Gold <bgold@simons-rock.edu>
Cc: freebsd-fs@freebsd.org
Subject: Re: undoing zfs deduplication
Message-ID: <CAOjFWZ5fAF54G%2BoYGPOXRK0ePAbP-MV6-CA2SJGxR6oMgO1Daw@mail.gmail.com>
In-Reply-To: <0c8801cd757a$601018e0$20304aa0$@simons-rock.edu>
References: <0c8801cd757a$601018e0$20304aa0$@simons-rock.edu>
On Wed, Aug 8, 2012 at 8:27 AM, Brian Gold <bgold@simons-rock.edu> wrote:
> I've got a system running 9.0-RELEASE with a zfs v28 pool. Within that
> pool I have 3 datasets, two of which have deduplication enabled. I've
> recently been having a lot of performance issues with deduplication and
> have determined that I need far more RAM than I currently have in order
> to support dedupe. I don't have the budget for the RAM necessary, so I
> would like to move away from deduplication. I'm aware that you can't
> simply turn dedupe off; you need to completely nuke the filesystem.
>
> What I'm wondering is, would it be possible for me to create new
> datasets within the same pool (I have a ton of available space) and use
> a combination of "zfs send" & "zfs receive" to migrate my deduped
> datasets and all of their snapshots (daily, weekly, & monthly) over to
> the new dataset?

Yes, that is the only option for "un-deduping" a filesystem: zfs send/recv
from the deduped filesystem to one with dedup=off. Then delete the deduped
filesystem.

Note: a "zfs destroy" will use a lot of RAM, as it has to go through and
update all the DDT entries. You may have to manually delete individual
snapshots, and then manually delete individual directories in the
filesystem, before destroying the actual filesystem. You may run into a
situation where you don't have enough RAM/ARC to destroy a deduped
filesystem.

--
Freddie Cash
fjwcash@gmail.com
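[Editor's sketch of the migration described above. All pool, dataset, and
snapshot names here are made up for illustration; adjust them to your own
layout. It avoids "zfs send -R", which would also replicate the locally-set
dedup=on property onto the copy, and instead sends the snapshot chain
explicitly so the new dataset inherits dedup=off from the pool.]

    # 1. Take a recursive snapshot of the deduped dataset so everything
    #    current is captured (hypothetical names throughout).
    zfs snapshot -r tank/deduped@migrate

    # 2. Send the oldest snapshot as a full stream, then all snapshots
    #    between it and @migrate as one incremental package (-I).
    #    tank/plain is created by the receive and inherits dedup=off.
    zfs send tank/deduped@daily-2012-01-01 | zfs receive tank/plain
    zfs send -I @daily-2012-01-01 tank/deduped@migrate | zfs receive tank/plain

    # 3. Confirm the copy is not deduplicating.
    zfs get dedup tank/plain

    # 4. Once the copy is verified and in use, remove the deduped
    #    original. As noted above, this destroy can itself need a lot
    #    of RAM, and may have to be done snapshot by snapshot.
    zfs destroy -r tank/deduped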