Date: Tue, 5 Dec 2017 11:14:41 -0500
From: Allan Jude <allanjude@freebsd.org>
To: freebsd-virtualization@freebsd.org
Subject: Re: Storage overhead on zvols
Message-ID: <3496de79-8610-5640-0d2c-22031d7e3e5f@freebsd.org>
In-Reply-To: <423F466A-732A-4B04-956E-3CC5F5C47390@ebureau.com>
References: <CC62E200-A749-4406-AC56-2FC7A104D353@ebureau.com>
 <CA+tpaK3GpzcwvRFGoX5xdmwGnGWay0z_kqgW6Tg7hX5UBbz4og@mail.gmail.com>
 <423F466A-732A-4B04-956E-3CC5F5C47390@ebureau.com>
On 2017-12-05 10:20, Dustin Wenz wrote:
> Thanks for linking that resource. The purpose of my posting was to
> increase the body of knowledge available to people who are running
> bhyve on zfs. It's a versatile way to deploy guests, but I haven't
> seen much practical advice about doing it efficiently.
>
> Allan's explanation yesterday of how allocations are padded is exactly
> the sort of breakdown I could have used when I first started
> provisioning VMs. I'm sure other people will find this conversation
> useful as well.
>
> - .Dustin

This subject is covered in detail in chapter 9 (Tuning) of "FreeBSD
Mastery: Advanced ZFS", available from http://www.zfsbook.com/ or any
finer book store.

>> On Dec 4, 2017, at 9:37 PM, Adam Vande More <amvandemore@gmail.com> wrote:
>>
>> On Mon, Dec 4, 2017 at 5:19 PM, Dustin Wenz <dustinwenz@ebureau.com> wrote:
>> I'm starting a new thread based on the previous discussion in "bhyve
>> uses all available memory during IO-intensive operations" relating to
>> size inflation of bhyve data stored on zvols. I've done some
>> experimenting with this, and I think it will be useful for others.
>>
>> The zvols listed here were created with this command:
>>
>> zfs create -o volmode=dev -o volblocksize=Xk -V 30g vm00/chyves/guests/myguest/diskY
>>
>> The zvols were created on a raidz1 pool of four disks. For each zvol,
>> I created a basic zfs filesystem in the guest using all default tuning
>> (128k recordsize, etc). I then copied the same 8.2GB dataset to each
>> filesystem.
>>
>> volblocksize    size amplification
>>
>> 512B            11.7x
>> 4k              1.45x
>> 8k              1.45x
>> 16k             1.5x
>> 32k             1.65x
>> 64k             1x
>> 128k            1x
>>
>> The worst case is with a 512B volblocksize, where the space used is
>> more than 11 times the size of the data stored within the guest. The
>> size efficiency gains are non-linear as I continue from 4k and double
>> the block sizes, with 32k blocks being the second-worst. The amount of
>> wasted space was minimized by using 64k and 128k blocks.
>>
>> It would appear that 64k is a good choice for volblocksize if you are
>> using a zvol to back your VM, and the VM is using the virtual device
>> for a zpool. Incidentally, I believe this is the default when creating
>> VMs in FreeNAS.
>>
>> I'm not sure what your purpose is behind the posting, but if it's
>> simply a "why this behavior?" question, you can find more detail here,
>> as well as some of the calculation legwork:
>>
>> https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz
>>
>> --
>> Adam

-- 
Allan Jude
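
For anyone wanting to reproduce Dustin's measurement, a minimal sketch
of the host-side commands follows. The pool name "vm00" is taken from
the thread, the "bench-*" dataset names are hypothetical, and the
referenced/logicalreferenced ratio is only an approximation of the
amplification figures in the table above:

    #!/bin/sh
    # Create one 30 GB test zvol per candidate volblocksize.
    for bs in 512 4k 8k 16k 32k 64k 128k; do
        zfs create -o volmode=dev -o volblocksize=$bs -V 30g "vm00/bench-$bs"
    done

    # After copying the same dataset into a guest filesystem on each
    # volume, compare physical vs. logical space on the host. Comparing
    # referenced against logicalreferenced avoids counting the zvol's
    # refreservation, which would otherwise dominate "used".
    for bs in 512 4k 8k 16k 32k 64k 128k; do
        zfs get -Hp -o name,property,value \
            referenced,logicalreferenced,volblocksize "vm00/bench-$bs"
    done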
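
The padding arithmetic from the Delphix post can also be sketched as a
back-of-the-envelope calculation, assuming ashift=12 (4 KiB sectors) on
the four-disk raidz1 described above. This models raw allocation only;
the measured ratios also reflect ZFS's deflated space accounting and
metadata overhead, so they will not match these numbers exactly:

    #!/bin/sh
    # raidz1 on 4 disks: 3 data disks, roughly one parity sector per
    # stripe row, and every allocation padded up to a multiple of
    # (nparity + 1) = 2 sectors.
    sector=4096   # assumes ashift=12
    for vbs in 4096 8192 16384 32768 65536 131072; do
        data=$(( (vbs + sector - 1) / sector ))            # ceil(vbs / sector)
        parity=$(( (data + 2) / 3 ))                       # ceil(data / 3)
        alloc=$(( (data + parity + 1) / 2 * 2 * sector ))  # pad to 2-sector multiple
        echo "volblocksize=$vbs -> allocated=$alloc bytes"
    done
    # e.g. an 8k block needs 2 data + 1 parity = 3 sectors, padded to 4
    # sectors (16 KiB): roughly 2x the logical size before deflation.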
