Date: Tue, 5 Dec 2017 10:50:43 -0600
From: Dustin Wenz <dustinwenz@ebureau.com>
To: "Rodney W. Grimes" <freebsd-rwg@pdx.rh.CN85.dnsmgr.net>
Cc: Paul Vixie <paul@redbarn.org>, FreeBSD virtualization <freebsd-virtualization@freebsd.org>
Subject: Re: Storage overhead on zvols
Message-ID: <3AABC35B-FA6C-4DC6-B70C-F6D1326A7D25@ebureau.com>
In-Reply-To: <201712051641.vB5GfR5I052310@pdx.rh.CN85.dnsmgr.net>
References: <201712051641.vB5GfR5I052310@pdx.rh.CN85.dnsmgr.net>
> On Dec 5, 2017, at 10:41 AM, Rodney W. Grimes <freebsd-rwg@pdx.rh.CN85.dnsmgr.net> wrote:
>
>>
>>
>> Dustin Wenz wrote:
>>> I'm not using ZFS in my VMs for data integrity (the host already
>>> provides that); it's mainly for the easy creation and management of
>>> filesystems, and the ability to do snapshots for rollback and
>>> replication.
>>
>> snapshot and replication works fine on the host, acting on the zvol.
>
> I suspect he is snapshotting and doing send/recvs of something
> much less than the zvol, probably some datasets, maybe boot
> environments. A snapshot of the whole zvol is OK if you're managing
> data at the VM level, but not so good if you've got lots of stuff going
> on inside the VM.
Exactly, it's useful to have control of each filesystem discretely.
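For example, inside the guest I can snapshot and replicate one filesystem without touching the rest, whereas from the host the only unit is the entire zvol. Roughly (the pool and dataset names below are just placeholders):

    # Inside the guest: act on a single dataset
    guest# zfs snapshot tank/www/logs@before-upgrade
    guest# zfs send -i @nightly tank/www/logs@before-upgrade | \
               ssh backuphost zfs recv -u backup/www/logs

    # From the host: the only granularity is the whole disk image
    host# zfs snapshot pool/vm/guest1-disk0@before-upgrade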
>>> Some of my deployments have hundreds of filesystems in
>>> an organized hierarchy, with delegated permissions and automated
>>> snapshots, send/recvs, and clones for various operations.
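The delegation part is just zfs allow on the parent dataset, along these lines (the user and dataset names here are made up):

    # Let an unprivileged user manage snapshots and clones under one subtree
    guest# zfs allow -u appuser snapshot,clone,create,mount,destroy tank/apps
    guest# zfs allow tank/apps     # show the delegated permissions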
>>
>> what kind of zpool do you use in the guest, to avoid unwanted additional
>> redundancy?
>
> Just a simple stripe of 1 device would be my guess, though you're
> still going to have metadata redundancy.
Also correct; just using the zvol virtual device as a single-disk pool.
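Concretely, the guest just sees the zvol as a plain virtio block device, so the pool is a one-command affair (the device name depends on how the disks are attached; vtbd1 below is only an example):

    # Inside the guest: a single-disk, non-redundant pool on the zvol-backed device
    guest# zpool create data vtbd1
    guest# zpool status data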
>>
>> did you benchmark the space or time efficiency of ZFS vs. UFS?
>>
>> in some bsd related meeting this year i asked allan jude for a bhyve
>> level null mount, so that we could access at / inside the guest some
>> subtree of the host, and avoid block devices and file systems
>> altogether. right now i have to use nfs for that, which is irritating.
>
> This is not as simple as it seems; remember that bhyve is just presenting
> a hardware environment, and hardware environments don't have a file system
> concept per se, unlike jails, which provide a software environment.
>
> In effect, what you're asking for is what NFS does, so use NFS and get
> over the fact that this is the way to get what you want. Sure, you
> could implement a virt-vfs, but I wonder how close its spec
> would be to the spec of NFS.
>
> Or maybe that's the answer: implement virt-vfs as a more efficient way
> to transport NFS calls in and out of the guest.
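For anyone who does want the NFS route today, it amounts to an export on the host and a mount in the guest, something like this (the dataset, address, and mount point are only placeholders, and it assumes nfsd/mountd are already running on the host):

    # On the host: export one dataset to the guest via exports(5) options
    host# zfs set sharenfs="-maproot=root -network=10.0.0.0/24" pool/shared

    # In the guest: mount the host's export
    guest# mount -t nfs 10.0.0.1:/pool/shared /mnt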
I've not done any deliberate comparisons for latency or throughput. What I've decided to virtualize does not have any exceptional performance requirements. If I need the best possible IO, I would lean toward using jails instead of a hypervisor.
- .Dustin
