Date:      Thu, 16 Jan 2014 19:38:02 +0100
From:      Sydney Meyer <meyer.sydney@googlemail.com>
To:        freebsd-virtualization@freebsd.org
Subject:   Re: Xen PVHVM with FreeBSD10 Guest
Message-ID:  <F672F9F6-7F85-4315-AFA0-EA18527A1893@googlemail.com>
In-Reply-To: <51F93577-E5A2-4237-9EDD-A89DDA5FC428@gmail.com>
References:  <9DF57091-9957-452D-8A15-C2267F66ABEC@googlemail.com> <52D81009.6050603@citrix.com> <51F93577-E5A2-4237-9EDD-A89DDA5FC428@gmail.com>

Well then, thanks for the hint. dmesg shows the following:

Jan 16 18:22:30 bsd10 kernel: xn0: <Virtual Network Interface> at device/vif/0 on xenbusb_front0
Jan 16 18:22:30 bsd10 kernel: xn0: Ethernet address: 00:16:3e:df:1b:5a
Jan 16 18:22:30 bsd10 kernel: xenbusb_back0: <Xen Backend Devices> on xenstore0
Jan 16 18:22:30 bsd10 kernel: xn0: backend features: feature-sg feature-gso-tcp4
Jan 16 18:22:30 bsd10 kernel: xbd0: 8192MB <Virtual Block Device> at device/vbd/768 on xenbusb_front0
Jan 16 18:22:30 bsd10 kernel: xbd0: attaching as ada0
Jan 16 18:22:30 bsd10 kernel: xbd0: features: flush, write_barrier
Jan 16 18:22:30 bsd10 kernel: xbd0: synchronize cache commands enabled.
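
(Side note, since the original question was how to check this: the lines above come from syslog, so on a running guest something roughly like the following should show whether the PV devices attached. The grep patterns are just a guess based on the device names above.)

  grep -E 'xn[0-9]|xbd[0-9]' /var/log/messages   # boot messages as logged by syslogd
  dmesg | grep -E '^(xn|xbd)[0-9]'               # or straight from the kernel message buffer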

Now I did some tests with raw images and the disk performs very well (10-15% less than native throughput).
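
(For reference, the dd part of the testing was roughly along these lines; block size, count and paths here are illustrative rather than the exact invocation:)

  # sequential write of 1 GiB, followed by an explicit flush
  dd if=/dev/zero of=/mnt/test/ddfile bs=1m count=1024
  sync
  # sequential read from the raw device to bypass the buffer cache
  dd if=/dev/ada0 of=/dev/null bs=1m count=1024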

Is this a known problem, or is it maybe specific to this setup?

The test system is running on a Haswell Intel Core i3 CPU (4310T) with an Intel H81 chipset.

Cheers,
Sydney.

On 16.01.2014, at 18:06, Sydney Meyer <meyer.sydney@googlemail.com> wrote:

> No, the VMs are running on local LVM Volumes as Disk Backend.
>
>> On 16 Jan 2014, at 17:59, Roger Pau Monné <roger.pau@citrix.com> wrote:
>>
>>> On 16/01/14 17:41, Sydney Meyer wrote:
>>> Hello everyone,
>>>
>>> does someone know how to check if the paravirtualized I/O drivers
>>> from Xen are loaded/working in FreeBSD 10? To my understanding it
>>> isn't necessary anymore to compile a custom kernel with PVHVM
>>> enabled, right? In /var/log/messages I can see the xn* and xbd*
>>> devices and the network performance is very good (saturated GbE)
>>> compared to qemu-emulated, but the disk performance is not as good;
>>> in fact, it is even slower than emulated with qemu (0.10.2). I did
>>> some tests with dd and bonnie++, turned caching on the host off and
>>> tried to sync directly to disk; PVonHVM is on average 15-20% slower
>>> than QEMU at throughput. Both VMs are running on the same host on a
>>> Xen 4.1 hypervisor with QEMU 0.10.2 on a Debian Linux 3.2 kernel as
>>> Dom0.
>>
>> PV drivers will be used automatically if Xen is detected. You should see
>> something like this in dmesg:
>>
>> xn0: <Virtual Network Interface> at device/vif/0 on xenbusb_front0
>> xn0: Ethernet address: 00:16:3e:47:d4:52
>> xenbusb_back0: <Xen Backend Devices> on xenstore0
>> xn0: backend features: feature-sg feature-gso-tcp4
>> xbd0: 20480MB <Virtual Block Device> at device/vbd/51712 on xenbusb_front0
>> xbd0: features: flush, write_barrier
>> xbd0: synchronize cache commands enabled.
>>
>> Are you using a raw file as a disk?
>>
>> Roger.
>>



