Date: Fri, 14 Oct 2011 07:50:37 -0700
From: Artem Belevich <art@freebsd.org>
To: Radio młodych bandytów <radiomlodychbandytow@o2.pl>
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS/compression/performance
Message-ID: <CAFqOu6gBiounMAzvrW8orNKRCJXQ34ujdABSMOk-tnBomPUHgQ@mail.gmail.com>
In-Reply-To: <4E97D24C.4010606@o2.pl>
References: <20111013120032.D6BA71065760@hub.freebsd.org> <4E97D24C.4010606@o2.pl>
2011/10/13 Radio młodych bandytów <radiomlodychbandytow@o2.pl>:
> On 2011-10-13 14:00, freebsd-fs-request@freebsd.org wrote:
>>
>> An option is not to compress with ZFS but rather directly with gzip;
>> however, I would still need lots of temporary storage for manipulation,
>> which is what I am doing now (e.g., sort). Processing with zcat isn't
>> always a good solution because some applications want files, but you
>> have to do what you have to do.
>
> It seems that with your data gzipping directly is the better option. I do
> suggest, though, that you experiment with codecs that support a larger
> dictionary, e.g. 7-Zip. I expect you would see a big improvement in
> compression ratio with something like "7z a -mx=1 -md=26 out.7z in". You
> can use higher -md values if you have enough memory; compression mode 1
> (-mx=1) uses 4.5*2^md bytes of RAM, so if my maths is right, md=26 uses
> ~288 MB. If LZMA is too slow, you can at least try 7-Zip's Deflate64.
> It's not great, but not as bad as gzip.

Yup, a stand-alone archiver may well work better. ZFS compression works on
blocks: a block can't benefit from the data gathered while compressing the
preceding block, so the overall compression ratio with ZFS will be lower
than that of stand-alone gzip at the same compression level.

On the other hand, ZFS will parallelize compression, so on a multi-core
machine it will compress the same amount of data in less time than a single
instance of gzip would.

--Artem
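For anyone who wants to try the comparison themselves, here is a minimal
sketch. The sample file name and the tank/logs dataset are placeholders,
not anything from this thread; the 7z invocation is the one suggested above
and assumes the archivers/p7zip port is installed.

  # Stand-alone compressors see the whole data stream:
  time gzip -9 -c bigfile.log > bigfile.log.gz
  time 7z a -mx=1 -md=26 bigfile.7z bigfile.log  # LZMA, 2^26 = 64 MB dictionary, ~288 MB RAM

  # ZFS compresses each record (128K by default) independently, but in parallel:
  zfs create -o compression=gzip-9 tank/logs
  cp bigfile.log /tank/logs/ && sync
  zfs get compressratio tank/logs

  # Compare the resulting sizes/ratios:
  ls -l bigfile.log bigfile.log.gz bigfile.7z

If the reasoning above holds, the stand-alone archivers should win on
ratio while the ZFS copy finishes faster on a multi-core box, which is
exactly the trade-off being described.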
