Date:      Mon, 21 Sep 2015 09:57:44 -0400
From:      Quartz <quartz@sneakertech.com>
To:        FreeBSD FS <freebsd-fs@freebsd.org>
Subject:   Re: ZFS cpu requirements, with/out compression and/or dedup
Message-ID:  <56000CD8.4030208@sneakertech.com>
In-Reply-To: <CALfReyc1DcNaRjhhhx+4swF2hbfuAd2tWv2xpjWtfqcDoxHUBw@mail.gmail.com>
References:  <CAEW+ogbPswfOWQzbwNZR5qyMrCEfrcSP4Q7+y4zuKVVD=KNuUA@mail.gmail.com> <55FD9A2B.8060207@sneakertech.com> <CALfReyc1DcNaRjhhhx+4swF2hbfuAd2tWv2xpjWtfqcDoxHUBw@mail.gmail.com>

> This is completely untrue; the performance issues with dedup are
> limited to writes only, as it needs to check the DDT for every
> write to a filesystem with dedup enabled. Once the data is on the
> disk there is no overhead, and in many cases a performance boost, as
> less data on the disk means less head movement and it's also more
> likely to be in any available caches. If write performance does become
> an issue you can turn it off on that particular filesystem. This may
> leave you with too little capacity on the pool, but then pools are
> easily extended.

It still needs to keep the tables in memory as long as there's still 
deduped data on disk though, right? Else what keeps track of which 
blocks are used by which files?
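For a rough sense of what that in-memory footprint looks like: a commonly cited rule of thumb is roughly 320 bytes of RAM per DDT entry, with one entry per unique block. The sketch below is a back-of-envelope estimate only (the per-entry size and the 128K average block size are assumptions; actual numbers vary by platform and ZFS version):

```python
# Back-of-envelope estimate of ZFS DDT RAM footprint.
# DDT_ENTRY_BYTES is a commonly cited rule of thumb, not an exact figure.
DDT_ENTRY_BYTES = 320

def ddt_ram_bytes(pool_bytes, avg_block_bytes=128 * 1024):
    """Estimate DDT size in RAM for a pool of unique data.

    Assumes one DDT entry per unique block; avg_block_bytes is an
    assumption (128K shown here, matching the default recordsize).
    """
    entries = pool_bytes // avg_block_bytes
    return entries * DDT_ENTRY_BYTES

# Example: 1 TiB of unique data at 128K blocks -> 8M entries -> ~2.5 GiB
one_tib = 1 << 40
print(ddt_ram_bytes(one_tib) / (1 << 30))  # -> 2.5
```

Smaller average block sizes inflate the entry count (and the RAM estimate) proportionally, which is why dedup on pools full of small files gets expensive quickly.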



