Date: Wed, 12 Oct 2011 13:02:47 +0100
From: Johannes Totz <jtotz@imperial.ac.uk>
To: freebsd-fs@freebsd.org
Subject: Re: ZFS/compression/performance
Message-ID: <j73vl7$v02$1@dough.gmane.org>
In-Reply-To: <alpine.BSF.2.00.1110111710210.12895@Elmer.dco.penx.com>
On 12/10/2011 00:25, Dennis Glatting wrote:
> I would appreciate someone knowledgeable in ZFS pointing me in the
> right direction.
>
> I have several ZFS arrays, some using gzip for compression. The
> compressed arrays hold very large text documents (10MB->20TB) and are
> highly compressible. Reading files from the compressed datasets is
> fast with little load. However, writing to the compressed datasets
> incurs substantial load, on the order of a load average of 12 to 20.
>
> My questions are:
>
> 1) Why such a heavy load on writing?
> 2) What kind of limiters can I put into effect to reduce load
>    without impacting compressibility? For example, is there some
>    variable that controls the number of parallel compression
>    operations?
>
> I have a number of different systems. Memory is 24GB on each of the
> two large data systems, SSD (Revo) for cache, and a SATA II ZIL. One
> system is a 6-core i7 @ 3.33 GHz and the other a 4-core i7 @ 2.93
> GHz. The arrays are RAIDZ using cheap 2TB disks.

Artem gave you a pretty good explanation. I just did a simple write
test yesterday:

1) 6 MB/sec for gzip, 1.36x ratio
2) 34 MB/sec for lzjb, 1.23x ratio

I'll stick with lzjb. It's good enough to get rid of most of the
redundancy, and the speed is acceptable.
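The gzip-vs-lzjb tradeoff above can be sketched in miniature with
Python's standard zlib module: on repetitive text, a high compression
level buys a modest ratio improvement at a much larger CPU cost. Here
zlib level 1 stands in for a fast compressor like lzjb and level 9 for
gzip -9; the sample data and the printed numbers are illustrative
assumptions, not ZFS measurements.

```python
import time
import zlib

# Repetitive "log-like" text, a stand-in for the highly compressible
# documents described above (hypothetical sample, not real ZFS data).
data = b"2011-10-12 00:25:01 INFO request served in 12ms path=/index\n" * 200000

for level in (1, 9):  # level 1 ~ fast/lzjb-like, level 9 ~ gzip -9-like
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(compressed)
    mb_per_sec = len(data) / (1024 * 1024) / elapsed
    print(f"level {level}: {mb_per_sec:6.1f} MB/s, ratio {ratio:.2f}x")
```

Like the write test quoted above, the heavier setting spends several
times more CPU per megabyte for a relatively small gain in ratio, which
is why lzjb (and, on modern OpenZFS, lz4/zstd) is usually the better
default for write-heavy datasets.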
