From: Paul Vixie <paul@redbarn.org>
Date: Thu, 10 Mar 2016 15:45:02 -0800
To: Pavel Odintsov
Cc: "freebsd-virtualization@freebsd.org" <freebsd-virtualization@freebsd.org>, Sergei Mamonov
Subject: Re: ZFS subvolume support inside Bhyve vm
Message-ID: <56E206FE.3080000@redbarn.org>

Pavel Odintsov wrote:
> Hello, Dear Community!
>
> I would like to ask about plans for this storage engine approach. I like
> ZFS very much, and we are storing about half a petabyte of data here.
>
> But when we are speaking about VMs, we have to use zvols or even raw
> file-based images, and those discard all ZFS benefits.

i use zvols for my bhyves, and they retain two of the most important zfs
advantages:

1. snapshots.

> root@mm1:/home/vixie # zfs list | grep fam
> zroot1/vms/family  55.7G  3.84T  5.34G  -
> root@mm1:/home/vixie # zfs snap zroot1/vms/family@before
>
> [family.redbarn:amd64] touch /var/tmp/after
>
> root@mm1:/home/vixie # zfs snap zroot1/vms/family@after
> root@mm1:/home/vixie # mkdir /mnt/before /mnt/after
>
> root@mm1:/home/vixie # zfs clone zroot1/vms/family@before zroot1/before
> root@mm1:/home/vixie # fsck_ffs -p /dev/zvol/zroot1/beforep2
> ...
> /dev/zvol/zroot1/beforep2: 264283 files, 1118905 used, 11575625 free (28697 frags, 1443366 blocks, 0.2% fragmentation)
> root@mm1:/home/vixie # mount -r /dev/zvol/zroot1/beforep2 /mnt/before
>
> root@mm1:/home/vixie # zfs clone zroot1/vms/family@after zroot1/after
> root@mm1:/home/vixie # fsck_ffs -p /dev/zvol/zroot1/afterp2
> ...
> /dev/zvol/zroot1/afterp2: 264284 files, 1118905 used, 11575625 free (28697 frags, 1443366 blocks, 0.2% fragmentation)
> root@mm1:/home/vixie # mount -r /dev/zvol/zroot1/afterp2 /mnt/after
>
> root@mm1:/home/vixie # ls -l /mnt/{before,after}/var/tmp/after
> ls: /mnt/before/var/tmp/after: No such file or directory
> -rw-rw-r--  1 vixie  wheel  0 Mar 10 22:52 /mnt/after/var/tmp/after

(tearing this experiment back down is sketched in the p.s. below.)

2.
storage redundancy, read caching, and write caching:

> root@mm1:/home/vixie # zpool status | tr -d '\t'
>   pool: zroot1
>  state: ONLINE
>   scan: scrub repaired 0 in 2h24m with 0 errors on Thu Mar 10 12:24:13 2016
> config:
>
> NAME                                              STATE     READ WRITE CKSUM
> zroot1                                            ONLINE       0     0     0
>   mirror-0                                        ONLINE       0     0     0
>     gptid/2427e651-d9cc-11e3-b8a1-002590ea750a    ONLINE       0     0     0
>     gptid/250b0f01-d9cc-11e3-b8a1-002590ea750a    ONLINE       0     0     0
>   mirror-1                                        ONLINE       0     0     0
>     gptid/d35bb315-da08-11e3-b17f-002590ea750a    ONLINE       0     0     0
>     gptid/d85ad8be-da08-11e3-b17f-002590ea750a    ONLINE       0     0     0
> logs
>   mirror-2                                        ONLINE       0     0     0
>     ada0s1                                        ONLINE       0     0     0
>     ada1s1                                        ONLINE       0     0     0
> cache
>   ada0s2                                          ONLINE       0     0     0
>   ada1s2                                          ONLINE       0     0     0
>
> errors: No known data errors

so while i'd love to chroot a bhyve driver to some place in the middle of
the host's file system and then pass VFS right on through, more or less
the way mount_nullfs does, i am pretty comfortable with zvol-backed UFS,
and i think it's misleading to say that zvol UFS lacks all ZFS benefits.

--
P Vixie
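p.s. two sketches for anyone who wants to reproduce the above; neither is
pasted from a real session, so treat the names as placeholders. first,
tearing the snapshot experiment back down. the clones have to go before
the snapshots they were cloned from, since zfs refuses to destroy a
snapshot that still has live clones:

> root@mm1:/home/vixie # umount /mnt/before /mnt/after
> root@mm1:/home/vixie # zfs destroy zroot1/before
> root@mm1:/home/vixie # zfs destroy zroot1/after
> root@mm1:/home/vixie # zfs destroy zroot1/vms/family@before
> root@mm1:/home/vixie # zfs destroy zroot1/vms/family@after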
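second, creating a zvol-backed guest in the first place. the volume size,
guest name, and tap0 interface here are invented for the example, and it
assumes vmm.ko is loaded, tap0 exists, and a freebsd guest is already
installed on the volume:

> root@mm1:/home/vixie # zfs create -V 60G zroot1/vms/example
> root@mm1:/home/vixie # bhyveload -m 4G -d /dev/zvol/zroot1/vms/example example
> root@mm1:/home/vixie # bhyve -c 2 -m 4G -A -H -P \
>     -s 0,hostbridge -s 1,lpc \
>     -s 2,virtio-net,tap0 \
>     -s 3,virtio-blk,/dev/zvol/zroot1/vms/example \
>     -l com1,stdio example

once the guest disk is a zvol, everything shown above (snapshots, clones,
mirrored vdevs, log and cache devices) applies to it with no cooperation
from the guest.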