Date: Tue, 20 Aug 2013 08:22:32 +0100
From: krad <kraduk@gmail.com>
To: Johan Hendriks <joh.hendriks@gmail.com>
Cc: freebsd-fs <freebsd-fs@freebsd.org>, Ivan Voras <ivoras@freebsd.org>
Subject: Re: Upgrading ZFS compression
Message-ID: <CALfReyf0FyppzW3YkM%2ByEFWGeYtKA3g8J9jmqOfGhj7xr-UnbA@mail.gmail.com>
In-Reply-To: <CAOaKuAWDELJPpZhq1cLAL=zWhD%2B3YY8V6mBKD=wsdLPhtnzTRg@mail.gmail.com>
References: <kut1oq$plv$1@ger.gmane.org> <CAOaKuAWDELJPpZhq1cLAL=zWhD%2B3YY8V6mBKD=wsdLPhtnzTRg@mail.gmail.com>
Correct, and the same applies when you enable dedup: only newly written
blocks get the changes. So it's possible that a file spanning multiple
blocks has multiple compression algorithms applied to it. What I have done
in the past is rsync the tree to a new location, then rename the trees and
delete the original (a sketch of the commands follows the quoted thread
below). This isn't always feasible, though.

On 19 August 2013 18:23, Johan Hendriks <joh.hendriks@gmail.com> wrote:

> On Monday, 19 August 2013, Ivan Voras (ivoras@freebsd.org) wrote:
>
> > Hello,
> >
> > Just a quick question: if I have a file system with LZJB, write a file
> > on it so it gets compressed, then change the compression setting on the
> > file system to LZ4, will new random writes to the file use the new
> > compression algorithm?
> >
> > By looking at the data structures (dnode_phys_t) it looks like the
> > compression is set per-file object, so no.
> >
> > OTOH, new files on the file system will pick up new compression
> > settings, right?
>
> As far as I know, all new files put on the dataset will be compressed
> using the new compression type.
>
> Regards
> Johan
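For illustration, a minimal sketch of that rsync-and-rename workaround,
assuming a hypothetical pool "tank" with a dataset tank/data mounted at
/tank/data (names and rsync flags are placeholders; adjust to your setup):

    # Switch the property first; only blocks written from now on use LZ4.
    zfs set compression=lz4 tank/data

    # Copy into a fresh dataset so every block is rewritten (and thus
    # recompressed), preserving permissions, hard links, ACLs and xattrs.
    zfs create -o compression=lz4 tank/data.new
    rsync -aHAX /tank/data/ /tank/data.new/

    # Swap the trees and drop the original once the copy is verified.
    zfs rename tank/data tank/data.old
    zfs rename tank/data.new tank/data
    zfs destroy tank/data.old

The rename step is what makes this workable on a live system: the new tree
takes over the old mountpoint, and the old data is only destroyed after you
have checked the copy.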