Date: Tue, 5 Dec 2017 09:20:43 -0600
From: Dustin Wenz <dustinwenz@ebureau.com>
To: Adam Vande More <amvandemore@gmail.com>
Cc: FreeBSD virtualization <freebsd-virtualization@freebsd.org>
Subject: Re: Storage overhead on zvols
Message-ID: <423F466A-732A-4B04-956E-3CC5F5C47390@ebureau.com>
In-Reply-To: <CA+tpaK3GpzcwvRFGoX5xdmwGnGWay0z_kqgW6Tg7hX5UBbz4og@mail.gmail.com>
References: <CC62E200-A749-4406-AC56-2FC7A104D353@ebureau.com>
            <CA+tpaK3GpzcwvRFGoX5xdmwGnGWay0z_kqgW6Tg7hX5UBbz4og@mail.gmail.com>
Thanks for linking that resource. The purpose of my posting was to increase the body of knowledge available to people who are running bhyve on ZFS. It's a versatile way to deploy guests, but I haven't seen much practical advice about doing it efficiently. Allan's explanation yesterday of how allocations are padded is exactly the sort of breakdown I could have used when I first started provisioning VMs. I'm sure other people will find this conversation useful as well. I've put a couple of quick sketches below the quoted message: one for measuring the overhead from the host, and one for the padding arithmetic itself.

	- .Dustin

> On Dec 4, 2017, at 9:37 PM, Adam Vande More <amvandemore@gmail.com> wrote:
> 
> On Mon, Dec 4, 2017 at 5:19 PM, Dustin Wenz <dustinwenz@ebureau.com> wrote:
> I'm starting a new thread based on the previous discussion in "bhyve uses all available memory during IO-intensive operations", relating to the size inflation of bhyve data stored on zvols. I've done some experimenting with this, and I think it will be useful for others.
> 
> The zvols listed here were created with this command:
> 
> zfs create -o volmode=dev -o volblocksize=Xk -V 30g vm00/chyves/guests/myguest/diskY
> 
> The zvols were created on a raidz1 pool of four disks. For each zvol, I created a basic ZFS filesystem in the guest using all default tuning (128k recordsize, etc.), then copied the same 8.2GB dataset to each filesystem.
> 
> volblocksize    size amplification
> 
> 512B            11.7x
> 4k              1.45x
> 8k              1.45x
> 16k             1.5x
> 32k             1.65x
> 64k             1x
> 128k            1x
> 
> The worst case is with a 512B volblocksize, where the space used is more than 11 times the size of the data stored within the guest. The efficiency gains are non-linear as I continue from 4k and double the block size, with 32k blocks being the second-worst. The amount of wasted space was minimized by using 64k and 128k blocks.
> 
> It would appear that 64k is a good choice for volblocksize if you are using a zvol to back your VM, and the VM is using the virtual device for a zpool. Incidentally, I believe this is the default when creating VMs in FreeNAS.
> 
> I'm not sure what your purpose is behind the posting, but if it's simply a "why this behavior", you can find more detail here, as well as some of the calculation leg work:
> 
> https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz
> 
> -- 
> Adam
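
For anyone repeating the test: the amplification can be read straight off the host, with no guest-side measurement, by comparing the standard "logicalused" and "used" properties. A minimal example, using the zvol name from the create command above:

    # -p prints exact byte counts; divide used by logicalused to get
    # the amplification factor for one of the test zvols.
    zfs get -p volblocksize,logicalused,used vm00/chyves/guests/myguest/diskY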
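
And for the "why": here is a back-of-the-envelope sketch of the padding arithmetic Allan described, for a 4-disk raidz1. It assumes ashift=12 (4 KiB sectors) and is only a rough model of the allocator; it ignores compression, metadata, and the guest filesystem's own overhead, so don't expect it to reproduce my table exactly.

    #!/bin/sh
    # Rough raidz1 allocation per volblocksize'd block, assuming
    # ashift=12 (4 KiB sectors) on a 4-disk pool. A block needs
    # ceil(size/sector) data sectors, one parity sector per row of
    # (ndisks - nparity) data sectors, and the total allocation is
    # padded up to a multiple of (nparity + 1) sectors.
    SECTOR=4096
    NDISKS=4
    NPARITY=1
    for VBS in 512 4096 8192 16384 32768 65536 131072; do
        DATA=$(( (VBS + SECTOR - 1) / SECTOR ))
        PARITY=$(( (DATA + NDISKS - NPARITY - 1) / (NDISKS - NPARITY) ))
        TOTAL=$(( DATA + PARITY ))
        PADDED=$(( (TOTAL + NPARITY) / (NPARITY + 1) * (NPARITY + 1) ))
        echo "volblocksize=${VBS}B -> $(( PADDED * SECTOR )) bytes allocated"
    done

A 512B block comes out to 8 KiB allocated (16x raw), and a 64k block to 88 KiB (about 1.4x, close to the pool's ideal 4/3 overhead). As I understand it, zfs then reports "used" after a deflate ratio that assumes full-width stripes (3/4 here), which turns that 16x into roughly 12x and the 2x for 4k/8k blocks into 1.5x -- in the same neighborhood as the 11.7x and 1.45x I measured.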
