Date:      Fri, 11 Mar 2016 03:15:37 +0300
From:      Sergei Mamonov <mrqwer88@gmail.com>
To:        Paul Vixie <paul@redbarn.org>
Cc:        Pavel Odintsov <pavel.odintsov@gmail.com>,  "freebsd-virtualization@freebsd.org" <freebsd-virtualization@freebsd.org>, Sergei Mamonov <mamonov@fastvps.ru>
Subject:   Re: ZFS subvolume support inside Bhyve vm
Message-ID:  <CAG2oxtrnmYpPznFiBfPARG69wBiCWdqn6ch_E64X=M33sVN-uw@mail.gmail.com>
In-Reply-To: <56E206FE.3080000@redbarn.org>
References:  <CALgsdbcXxAnfkKnU9CuOE-pj0sJJpQ7-XFd6R0bFEeKB-maDRw@mail.gmail.com> <56E206FE.3080000@redbarn.org>

Hello!

Yes - zvols look awesome. But which driver do you use for them? And what
about disk usage overhead in the guest?
virtio-blk doesn't support fstrim (ahci-hd supports it, but is it slower?
"*At this point virtio-blk is indeed faster than ahci-hd on high IOPS*").
On Linux with KVM we use the virtio-scsi driver, which does support fstrim,
but as far as I can see it is not available for bhyve in 10.2-STABLE.
And I am not the only one with this question -
https://lists.freebsd.org/pipermail/freebsd-virtualization/2015-March/003442.html
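
For reference, this is roughly how the two backends that do exist today
look on the bhyve command line - a sketch only, not tested here; the slot
numbers, tap device, guest name and zvol path are made up for illustration:

# AHCI emulation: the guest sees a SATA disk and can issue TRIM (fstrim):
bhyve -c 2 -m 2G -A -H -P \
    -s 0,hostbridge -s 1,lpc \
    -s 2,virtio-net,tap0 \
    -s 3,ahci-hd,/dev/zvol/zroot1/vms/guest0 \
    -l com1,stdio guest0

# Same guest with the faster backend, but no TRIM pass-through:
#   -s 3,virtio-blk,/dev/zvol/zroot1/vms/guest0

A virtio-scsi device model would be the third option, but as noted above
bhyve does not offer one yet.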


2016-03-11 2:45 GMT+03:00 Paul Vixie <paul@redbarn.org>:

>
>
> Pavel Odintsov wrote:
>
>> Hello, Dear Community!
>>
>> I would like to ask about plans for this storage engine approach. I like
>> ZFS so much and we are storing about half petabyte of data here.
>>
>> But when we are speaking about vm's we should use zvols or even raw file
>> based images and they are discarding all ZFS benefits.
>>
>
> i use zvols for my bhyves and they have two of the most important zfs
> advantages:
>
> 1. snapshots.
>
>> root@mm1:/home/vixie # zfs list|grep fam
>> zroot1/vms/family    55.7G  3.84T  5.34G  -
>> root@mm1:/home/vixie # zfs snap zroot1/vms/family@before
>>
>> [family.redbarn:amd64] touch /var/tmp/after
>>
>> root@mm1:/home/vixie # zfs snap zroot1/vms/family@after
>> root@mm1:/home/vixie # mkdir /mnt/before /mnt/after
>> root@mm1:/home/vixie # zfs clone zroot1/vms/family@before zroot1/before
>> root@mm1:/home/vixie # fsck_ffs -p /dev/zvol/zroot1/beforep2
>> ...
>> /dev/zvol/zroot1/beforep2: 264283 files, 1118905 used, 11575625 free
>> (28697 frags, 1443366 blocks, 0.2% fragmentation)
>> root@mm1:/home/vixie # mount -r /dev/zvol/zroot1/beforep2 /mnt/before
>>
>> root@mm1:/home/vixie # zfs clone zroot1/vms/family@after zroot1/after
>> root@mm1:/home/vixie # fsck_ffs -p /dev/zvol/zroot1/afterp2
>> ...
>> /dev/zvol/zroot1/afterp2: 264284 files, 1118905 used, 11575625 free
>> (28697 frags, 1443366 blocks, 0.2% fragmentation)
>> root@mm1:/home/vixie # mount -r /dev/zvol/zroot1/afterp2 /mnt/after
>>
>> root@mm1:/home/vixie # ls -l /mnt/{before,after}/var/tmp/after
>> ls: /mnt/before/var/tmp/after: No such file or directory
>> -rw-rw-r--  1 vixie  wheel  0 Mar 10 22:52 /mnt/after/var/tmp/after
>>
>
> 2. storage redundancy, read caching, and write caching:
>
>> root@mm1:/home/vixie # zpool status | tr -d '\t'
>>   pool: zroot1
>>  state: ONLINE
>>   scan: scrub repaired 0 in 2h24m with 0 errors on Thu Mar 10 12:24:13 2016
>> config:
>>
>> NAME                                            STATE     READ WRITE CKSUM
>> zroot1                                          ONLINE       0     0     0
>>   mirror-0                                      ONLINE       0     0     0
>>     gptid/2427e651-d9cc-11e3-b8a1-002590ea750a  ONLINE       0     0     0
>>     gptid/250b0f01-d9cc-11e3-b8a1-002590ea750a  ONLINE       0     0     0
>>   mirror-1                                      ONLINE       0     0     0
>>     gptid/d35bb315-da08-11e3-b17f-002590ea750a  ONLINE       0     0     0
>>     gptid/d85ad8be-da08-11e3-b17f-002590ea750a  ONLINE       0     0     0
>> logs
>>   mirror-2                                      ONLINE       0     0     0
>>     ada0s1                                      ONLINE       0     0     0
>>     ada1s1                                      ONLINE       0     0     0
>> cache
>>   ada0s2                                        ONLINE       0     0     0
>>   ada1s2                                        ONLINE       0     0     0
>>
>> errors: No known data errors
>>
>
> so while i'd love to chroot a bhyve driver to some place in the middle of
> the host's file system and then pass VFS right on through, more or less the
> way mount_nullfs does, i am pretty comfortable with zvol UFS, and i think
> it's misleading to say that zvol UFS lacks all ZFS benefits.
>
> --
> P Vixie
>
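
P.S. For anyone who wants to undo a demo like the one above: the clones are
disposable and the zvol can be rolled back to the @before snapshot. A rough,
untested sketch reusing Paul's dataset names (shut the guest down first so
nothing writes to the zvol):

umount /mnt/before /mnt/after
zfs destroy zroot1/before                 # clone of @before
zfs destroy zroot1/after                  # clone of @after
zfs rollback -r zroot1/vms/family@before  # -r also discards @after

Note that zfs rollback refuses to cross a newer snapshot without -r, and -r
fails if that newer snapshot still has clones, hence destroying them first.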


