Date: Fri, 17 Feb 2017 12:29:27 +0100
From: Andrea Venturoli <ml@netfence.it>
To: freebsd-virtualization@freebsd.org, Harry Schmalzbauer <freebsd@omnilan.de>
Subject: Re: Status of bhyve
Message-ID: <d4581278-59d8-d270-ad49-11d81868e4fc@netfence.it>
In-Reply-To: <607fc3c1-5546-dbce-488b-983163ff1e98@netfence.it>
References: <607fc3c1-5546-dbce-488b-983163ff1e98@netfence.it>
On Thu, 16 Feb 2017 10:46:34 +0100, Harry Schmalzbauer wrote:

> Hello,
> it depends on the features you need.

Not much, really.
Running SQL Server Express (for now) with decent performance.

> · virtio-blk and jumbo frames (e1000 works with jumbo frames but
> performance is not comparable with ESXi e1000(e))
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=215737

I don't think the underlying network equipment will support Jumbo
Frames :(

> · PCI-Passthru is very picky. If you have a card with BAR memory size
> < or != pagesize, bhyve(4) won't accept it.
>
> · device(9) as block storage backend (virtio-blk, ahci-hd) doesn't work
> if you use any PCI-passthru device
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=215740

I don't think I'd need PCI passthrough (I'm fine with a disk and a
network card).

> · virtio-blk isn't virtio-win (Windows driver) compatible, guest will
> crash!
>
> · virtio-net doesn't work with the latest Windows drivers, which is not
> a bhyve(4) problem as far as I can tell. Version 0.1.118 works, newer
> ones are known to have problems on other hypervisors too.

Good to know.

> · See if_bridge(4) for some limitations (all members need to have
> exactly the same MTU, uplink gets checksum offloading disabled).
> Generally, soft-switching capabilities are not comparable with those of
> ESXi, especially not the performance (outside the netmap world).

This is a good point. :(

> Other than that, it's rock solid for me
> ...

>> How well does it run Windows?
>> Would I better run W7 instead of W10 (or the other way round)?

Fine.

>> Should I use a dedicated disk (or disk mirror) for better speed?
>> Or should I use a dedicated partition on the host's disk/disk mirror?
>> Will a ZFS volume perform as good as a partition?
>
> ZVOL is the best option offering great performance (depending on your
> pool setup, of course) as long as you don't hit the PCI-passthru bug
> mentioned above.

Thanks again.

 bye
	av.
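For reference, the ZVOL-backed setup described above might look roughly
like the following. This is only a minimal sketch: the pool name (zroot),
the zvol path, the VM name (win10), the tap interface and the UEFI
firmware path (from the uefi-edk2-bhyve port) are all example values, and
ahci-hd is used instead of virtio-blk because of the virtio-win crash
mentioned in the thread.

    # create a 60G zvol as the guest disk (zroot/win10disk0 is an example name)
    zfs create -V 60G -o volmode=dev zroot/win10disk0

    # start the guest: 2 vCPUs, 4G RAM, UEFI boot, zvol as AHCI disk,
    # virtio-net on tap0, VNC framebuffer on port 5900
    # (-w ignores unimplemented MSRs, which Windows guests need)
    bhyve -c 2 -m 4G -H -w \
      -s 0,hostbridge \
      -s 3,ahci-hd,/dev/zvol/zroot/win10disk0 \
      -s 5,virtio-net,tap0 \
      -s 29,fbuf,tcp=0.0.0.0:5900,wait \
      -s 31,lpc -l com1,stdio \
      -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
      win10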
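The if_bridge(4) caveat about MTUs translates into host-side plumbing
along these lines (again just a sketch; em0 stands in for the physical
uplink NIC and tap0 for the guest's backend interface):

    # bridge the guest's tap interface with the uplink NIC
    ifconfig tap0 create
    ifconfig bridge0 create
    ifconfig bridge0 addm em0 addm tap0 up
    ifconfig em0 up
    ifconfig tap0 up

    # all bridge members must share exactly the same MTU; with jumbo frames
    # that would mean e.g. "ifconfig em0 mtu 9000" and "ifconfig tap0 mtu 9000"
    # before adding them to the bridge (not needed here, since the underlying
    # network won't do jumbo frames anyway)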