Date: Tue, 11 Oct 2011 17:25:41 -0600 (MDT)
From: Dennis Glatting <freebsd@penx.com>
To: freebsd-fs@freebsd.org
Subject: ZFS/compression/performance
Message-ID: <alpine.BSF.2.00.1110111710210.12895@Elmer.dco.penx.com>
I would appreciate someone knowledgeable in ZFS pointing me in the right direction.

I have several ZFS arrays, some using gzip for compression. The compressed arrays hold very large text documents (10 MB to 20 TB) that are highly compressible. Reading files from the compressed data sets is fast and incurs little load. However, writing to the compressed data sets incurs substantial load, on the order of a load average of 12 to 20.

My questions are:

1) Why such a heavy load on writing?

2) What kind of limiters can I put into effect to reduce load without impacting compressibility? For example, is there some variable that controls the number of parallel compression operations?

I have a number of different systems. Memory is 24GB on each of the two large data systems, with an SSD (Revo) for cache and a SATA II ZIL. One system is a 6 core i7 @ 3.33 GHz and the other a 4 core i7 @ 2.93 GHz. The arrays are RAIDZ using cheap 2TB disks.

Thanks.
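P.S. In case it is relevant, compression on these data sets was enabled with commands along these lines (the pool and data set names below are placeholders, not the real ones):

    # zfs set compression=gzip tank/docs
    # zfs get compression,compressratio tank/docs

One thing I have considered is dropping to a lower gzip level on the write-heavy data sets, since changing the property only affects newly written blocks:

    # zfs set compression=gzip-1 tank/docs

but I do not know whether that actually addresses the parallelism question, or whether there is a sysctl that limits the number of compression threads.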