Date:      Wed, 16 Nov 2016 18:17:47 +0100
From:      Jan Bramkamp <crest@rlwinm.de>
To:        freebsd-emulation@freebsd.org
Subject:   Re: bhyve: zvols for guest disk - yes or no?
Message-ID:  <b775f684-98a2-b929-2b13-9753c95fd4f2@rlwinm.de>
In-Reply-To: <D991D88D-1327-4580-B6E5-2D59338147C0@punkt.de>
References:  <D991D88D-1327-4580-B6E5-2D59338147C0@punkt.de>



On 16/11/2016 15:47, Patrick M. Hausen wrote:
> Hi, all,
>
> we are just starting a project that will run a couple
> of Ubuntu guests on top of bhyve instead of ESXi
> that we used in the past.
>
> As far as I could find out, more or less all bhyve
> manager/wrapper tools use zvols as the backing
> store for "raw" guest disk images.
>
> I looked at
>
> 	* chyves
> 	* iohyve
> 	* vm-bhyve
>
> So far so good. Yet, this blog article has some very
> valid (IMHO) points against using them:
>
> http://jrs-s.net/2016/06/16/psa-snapshots-are-better-than-zvols

AFAIK this is only a problem if you keep the reservation on your ZVOLs. 
By default ZFS creates ZVOLs with a reservation matching their size. The 
problem is that there is no way to signal to a VM that the overcommitted 
SAN just ran out of space, that your stupid operator is looking for ways 
to free up space *NOW*, and that the VM should simply retry. Instead the 
VM gets a hard write error. To keep the promise that every write to the 
ZVOL will succeed, ZFS has to reserve enough space to overwrite every 
block in the ZVOL when you take a snapshot of it. Other datasets are 
created without reservations by default because the POSIX file system 
API can report ENOSPC when the file system runs out of space. Of course 
you get the same problem if you store VM images as files in a file 
system; there is just no sane default setting protecting you, because 
you have outsmarted your poor operating system.
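
For illustration, a minimal sketch of the two variants (pool and volume 
names are made up, adjust to your setup):

    # default: "thick" ZVOL, refreservation equals the volume size,
    # so a snapshot needs room to overwrite every block
    zfs create -V 20G tank/vm0-disk0
    zfs get volsize,refreservation,usedbyrefreservation tank/vm0-disk0

    # sparse ZVOL without the reservation (-s), or drop it afterwards;
    # you trade the write guarantee for the overcommit risk described above
    zfs create -s -V 20G tank/vm1-disk0
    zfs set refreservation=none tank/vm0-disk0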

> Another thing I'm pondering is: wouldn't it be better to
> run on UFS so you can dedicate as much memory
> as possible to VMs?

ZFS isn't a memory-eating bogeyman. Yes, ZFS uses a lot of memory and 
needs more of it than UFS for stable performance, but you get a lot in 
return:

  * End to end checksumming
  * Snapshots
  * Remote replication of snapshots (see the sketch below)
  * Painless volume management
  * ...
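
As a minimal sketch of the snapshot and replication points above 
(dataset, snapshot and host names are made up):

    # snapshot a guest disk and replicate it to another machine
    zfs snapshot tank/vm0-disk0@2016-11-16
    zfs send tank/vm0-disk0@2016-11-16 | ssh backuphost zfs receive backup/vm0-disk0

    # later on, send only the changes since the previous snapshot
    zfs snapshot tank/vm0-disk0@2016-11-17
    zfs send -i @2016-11-16 tank/vm0-disk0@2016-11-17 | \
        ssh backuphost zfs receive backup/vm0-disk0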

> So I'm a bit puzzled now on what to do. Any opinions,
> experiences, war stories to share?

ZFS saved my bacon more than once. Twice it detected and corrected data 
corruption on dying hardware in time. With a normal FS both cases would 
have turned into bit rot spreading into the backups until it was too 
late.

Without ZFS you would need a reliable hardware RAID controller (if such 
a magical creature exists) instead, or build a software RAID1+0 from 
gmirror and gstripe (roughly sketched below). IMO the money is better 
invested in more RAM to keep ZFS and the admin happy.
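
For comparison, a rough sketch of that gmirror+gstripe route (disk names 
are made up; not a recommendation):

    # load the GEOM classes, mirror two pairs, then stripe across the mirrors
    gmirror load && gstripe load
    gmirror label -v m0 ada0 ada1
    gmirror label -v m1 ada2 ada3
    gstripe label -v st0 mirror/m0 mirror/m1
    newfs -U /dev/stripe/st0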

-- Jan Bramkamp


