Date:      Mon, 19 Nov 2012 20:20:26 -0800
From:      Artem Belevich <art@freebsd.org>
To:        kpneal@pobox.com
Cc:        freebsd-fs <freebsd-fs@freebsd.org>
Subject:   Re: SSD recommendations for ZFS cache/log
Message-ID:  <CAFqOu6hAQL9Y6CK9=cxNiyiOjRYLyKsSBVoMRjw2uFFedCG2kQ@mail.gmail.com>
In-Reply-To: <20121120040258.GA27849@neutralgood.org>
References:  <CAFHbX1K-NPuAy5tW0N8=sJD=CU0Q1Pm3ZDkVkE+djpCsD1U8_Q@mail.gmail.com> <57ac1f$gf3rkl@ipmail05.adl6.internode.on.net> <50A31D48.3000700@shatow.net> <CAF6rxgkh6C0LKXOZa264yZcA3AvQdw7zVAzWKpytfh0+KnLOJg@mail.gmail.com> <20121116044055.GA47859@neutralgood.org> <CACpH0MfQWokFZkh58qm+2_tLeSby9BWEuGjkH15Nu3+S1+p3SQ@mail.gmail.com> <50A64694.5030001@egr.msu.edu> <20121117181803.GA26421@neutralgood.org> <20121117225851.GJ1462@egr.msu.edu> <20121120040258.GA27849@neutralgood.org>

On Mon, Nov 19, 2012 at 8:02 PM,  <kpneal@pobox.com> wrote:
> Advising people to use dedup when high dedup ratios are expected, and
> otherwise to avoid it, is by itself incorrect advice. Rather, dedup
> should only be enabled on a system with a large amount of memory. The
> usual advice of 1 GB of RAM per 1 TB of disk is flat-out wrong.
>
> Now, I do not know how much memory to give as a minimum. I suspect the
> minimum is more like 16-32 GB, with more if large amounts of deduped
> data are to be removed by destroying entire datasets. But that's just
> a guess.

For what it's worth, Oracle has published an article on memory sizing
for dedup:
http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-113-size-zfs-dedup-1354231.html

In a nutshell, the dedup table (DDT) needs roughly 320 bytes of memory
per unique record. The number of records will depend on your data set
and the way it was written.
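
To put that in concrete terms, here is a back-of-the-envelope sizing
in Python (a minimal sketch: the 320 bytes/record figure comes from
the article above, while the pool size and average record size are
made-up example inputs -- substitute your own):

    # Rough DDT (dedup table) memory estimate.
    # Assumes ~320 bytes of core per unique record, per the Oracle
    # article; pool size and recordsize below are hypothetical examples.
    pool_bytes = 10 * 2**40             # 10 TiB of unique data (example)
    avg_record = 128 * 2**10            # 128 KiB, the default recordsize
    records = pool_bytes // avg_record  # unique records in the pool
    ddt_bytes = records * 320           # memory the DDT wants to occupy
    print("unique records: %d" % records)
    print("DDT size: %.1f GiB" % (ddt_bytes / float(2**30)))

At the default 128K recordsize that works out to roughly 2.5 GB of DDT
per TB of unique data, and proportionally more for smaller records.
For an existing pool, "zdb -S <poolname>" will simulate dedup and
report an actual record histogram instead of a guess.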

--Artem


