Date:      Mon, 21 Sep 2015 22:41:31 +0100
From:      Matthew Seaman <matthew@FreeBSD.org>
To:        freebsd-fs@freebsd.org
Subject:   Re: ZFS cpu requirements, with/out compression and/or dedup
Message-ID:  <5600798B.3010208@FreeBSD.org>
In-Reply-To: <20150921211335.GB41102@server.rulingia.com>
References:  <CAEW+ogbPswfOWQzbwNZR5qyMrCEfrcSP4Q7+y4zuKVVD=KNuUA@mail.gmail.com> <alpine.GSO.2.01.1509190843040.1673@freddy.simplesystems.org> <20150921170216.GA98888@blazingdot.com> <20150921211335.GB41102@server.rulingia.com>


On 21/09/2015 22:13, Peter Jeremy wrote:
> In general, the downsides of dedup outweigh the benefits.  If you already
> have the data in ZFS, you can use 'zdb -S' to see what effect rebuilding
> the pool with dedup enabled would have - how much disk space you will save
> and how big the DDT is (and hence how much RAM you will need).  If you can
> afford it, make sure you keep good backups, enable DDT and be ready to nuke
> the pool and restore from backups if dedup doesn't work out.

Nuking the entire pool is a little heavy handed.  Dedup can be turned on
and off on a per-ZFS basis.  If you've a ZFS that had dedup enabled, you
can remove the effects by zfs send / zfs recv to create a pristine
un-deduped copy of the data, destroy the original zfs and rename the new
one to take its place.  Of course, this depends on your having enough
free space in the pool to be able to duplicate (and then some) that ZFS.
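The send/recv round-trip might look like the following sketch; the pool
and dataset names (tank, tank/data) and the snapshot name are hypothetical,
and note that a plain 'zfs send' (without -p) does not carry the dedup
property over, so the received copy inherits it from its parent:

```shell
# Hypothetical names: 'tank' is the pool, 'tank/data' the deduped dataset.
zfs set dedup=off tank/data                 # stop dedup for any new writes
zfs snapshot -r tank/data@undedup           # point-in-time copy to send from
# Re-write every block into a fresh, un-deduped dataset:
zfs send -R tank/data@undedup | zfs recv tank/data.new
zfs destroy -r tank/data                    # drop the deduped original
zfs rename tank/data.new tank/data          # new copy takes its place
```

Mountpoints and other locally-set properties may need re-applying on the
renamed dataset if they were not included in the stream.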

Failing that, you might be able to 'zpool split' if your pool is
composed entirely of mirrors.  So long as you're able to do without
resilience for a while this basically doubles the space you have
available to play with.  You can then destroy the contents of one of the
split zpools, and zfs send the data over from the other split pool.
Unfortunately there isn't a reciprocal 'zfs rejoin' command that undoes
the splitting, so you'll have to destroy one of the splits and re-add
the constituent devices back to restore the mirroring in the other
split.  That is a delicate operation, and not one which is forgiving of
mistakes.
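As a sketch, assuming a pool 'tank' built entirely of two-way mirrors
(all names, snapshots and device paths below are hypothetical):

```shell
# Detach one side of each mirror into a new, independent pool 'tank2':
zpool split tank tank2
zpool import tank2

# Clear the deduped copy on the split half, then re-send clean data:
zfs destroy -r tank2/data
zfs snapshot tank/data@undedup
zfs send tank/data@undedup | zfs recv tank2/data

# No 'zpool rejoin' exists; to restore redundancy, free one pool's
# devices and re-attach each to its former mirror partner:
zpool destroy tank
zpool attach tank2 da1p3 da0p3    # repeat per mirror vdev
```

Until the final attach completes and resilvering finishes, the data is
sitting on single, unmirrored devices, which is why this is only worth
doing if you can tolerate the temporary loss of resilience.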

And failing that, you can start pushing data over the network, but
that's hardly different to restoring from backup.  However, either of
the first two choices should be significantly faster if you have large
quantities of data to handle.
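Pushing over the network is itself just a remote send/recv; a minimal
sketch, with hypothetical host and dataset names:

```shell
# Stream a recursive snapshot to another machine over ssh:
zfs snapshot -r tank/data@migrate
zfs send -R tank/data@migrate | ssh otherhost zfs recv -F backup/data
```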

	Cheers,

	Matthew


