Date: Sat, 19 Sep 2015 13:23:55 -0400
From: Quartz <quartz@sneakertech.com>
To: freebsd-fs@freebsd.org
Subject: Re: ZFS cpu requirements, with/out compression and/or dedup
Message-ID: <55FD9A2B.8060207@sneakertech.com>
In-Reply-To: <CAEW%2BogbPswfOWQzbwNZR5qyMrCEfrcSP4Q7%2By4zuKVVD=KNuUA@mail.gmail.com>
References: <CAEW%2BogbPswfOWQzbwNZR5qyMrCEfrcSP4Q7%2By4zuKVVD=KNuUA@mail.gmail.com>
> from what i read the status of dedup is not that clear and seems there
> are bugs and better to avoid it?

There aren't [m]any legitimate bugs with dedup; the problem is that it
consumes metric assloads of RAM, on the order of 2-5GB per TB of disk
space (and that's in addition to whatever the ARC eats). If you run out
of RAM, performance jumps off a cliff. It's also 'permanent' in the
sense that you have to turn it on when the dataset is created and can't
disable it without nuking said dataset.

The reports of "bugs" probably come from people who experienced file
corruption because they enabled dedup with block-checksum matching only,
without full-block verification, and don't understand hash collisions.

> so according to 1-3 above what cpu requirements i need?

Not much. CPU only really takes a hit when you turn on gzip compression
and crank it up to high levels.

> supermicro c2750/3/5/8 enough to run system of 20TB /40TB with 1-3 above?
> if dedup IS enabled would it still work fine?

Probably not. That board only supports up to 64GB of RAM. 20TB of disk
space with dedup will require something like 50-100GB of RAM, and 40TB
will need a good 90GB at least.
Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?55FD9A2B.8060207>