Date:      Mon, 14 Nov 2022 21:25:36 +0000
From:      bugzilla-noreply@freebsd.org
To:        virtualization@FreeBSD.org
Subject:   [Bug 267769] Bhyve core dump on suspend/resume of virtio-scsi device
Message-ID:  <bug-267769-27103-Hoqmi7Fyaz@https.bugs.freebsd.org/bugzilla/>
In-Reply-To: <bug-267769-27103@https.bugs.freebsd.org/bugzilla/>
References:  <bug-267769-27103@https.bugs.freebsd.org/bugzilla/>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=267769

--- Comment #2 from dan@sunsaturn.com ---
Well, according to a lot of internet posts, virtio-scsi was designed to replace
virtio-blk. It is also very convenient: you can pass ten devices to a guest with a
one-liner to bhyve through a single CAM ioctl (ctl) device.
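For reference, a sketch of what that one-liner looks like (the VM name, slot
numbers, and ctl port path here are illustrative, not from this bug; see
bhyve(8) and ctl(4) for the exact backend syntax on your FreeBSD version):

```shell
# Hypothetical example: attach CAM ctl port 1.0 to the guest as one
# virtio-scsi controller. Every LUN configured on that ctl port shows up
# in the guest, so ten disks need only this single -s slot rather than
# ten separate virtio-blk slots.
bhyve -c 2 -m 4G -H -A \
    -s 0,hostbridge \
    -s 3,virtio-scsi,dev=/dev/cam/ctl1.0,iid=3 \
    -s 31,lpc -l com1,stdio \
    guestvm
```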

Also, FreeBSD cannot fix vfs.zfs.vol.recursive because of the recursive deadlock
issues, so currently the only way to mount a guest's ZFS pool on the host is
through iSCSI-exported /dev/da* devices.
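As a sketch of that arrangement (pool, target, and volume names are made up;
see ctl.conf(5) and iscsictl(8)): export the guest's zvol through ctld on the
host, then log in locally so it appears as a /dev/da* device the host can
import without tripping over vfs.zfs.vol.recursive.

```
# /etc/ctl.conf on the host -- hypothetical names
portal-group pg0 {
        discovery-auth-group no-authentication
        listen 127.0.0.1
}

target iqn.2022-11.com.example:guest0 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
                path /dev/zvol/tank/guest0-disk0
        }
}
```

After `service ctld start`, a local `iscsictl -A -p 127.0.0.1 -t
iqn.2022-11.com.example:guest0` should surface the zvol as a /dev/da* disk,
whose pool can then be imported on the host.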

While we can keep using virtio-blk devices, the issue arises on fresh guest
installs. Linux guests, for example, name virtio-blk disks /dev/vda*, whereas
the same disks passed via virtio-scsi appear as /dev/sda*, so this complicates
things greatly.

While FreeBSD guests do not suffer from this, since they can reference GPT labels
in /etc/fstab, Linux installers tend to hardcode physical device paths like
/dev/vda* instead of GPT labels, which would make those guests unbootable if
switched from virtio-blk to virtio-scsi down the road.
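One possible mitigation (a sketch, with made-up UUIDs and labels): Linux guests
can reference filesystems by UUID or GPT partition label instead of the raw
device node, which survives a /dev/vda* to /dev/sda* rename.

```
# /etc/fstab inside the Linux guest -- device-name-independent entries
# keep working whether the disk shows up as virtio-blk (/dev/vda*) or
# virtio-scsi (/dev/sda*).
UUID=2f4e9a1c-0000-0000-0000-000000000000  /      ext4  defaults  0 1
PARTLABEL=swap0                            none   swap  sw        0 0
```

The bootloader configuration may still embed device paths and need
regenerating (e.g. update-grub), so this is only a partial fix for existing
installs.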

So I am not sure what the best course of action is here: install all guests as
virtio-scsi in preparation for suspend/resume functionality, or go with
virtio-blk and have to reinstall all the guests at some point.

Dan.

-- 
You are receiving this mail because:
You are the assignee for the bug.


