Date: Fri, 17 Jan 2014 10:08:17 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Sydney Meyer <meyer.sydney@googlemail.com>, <freebsd-virtualization@freebsd.org>
Subject: Re: Xen PVHVM with FreeBSD10 Guest
Message-ID: <52D8F301.2080701@citrix.com>
In-Reply-To: <F672F9F6-7F85-4315-AFA0-EA18527A1893@googlemail.com>
References: <9DF57091-9957-452D-8A15-C2267F66ABEC@googlemail.com> <52D81009.6050603@citrix.com> <51F93577-E5A2-4237-9EDD-A89DDA5FC428@gmail.com> <F672F9F6-7F85-4315-AFA0-EA18527A1893@googlemail.com>
On 16/01/14 19:38, Sydney Meyer wrote:
> Well then, thanks for the hint.. dmesg shows the following:
>
> Jan 16 18:22:30 bsd10 kernel: xn0: <Virtual Network Interface> at device/vif/0 on xenbusb_front0
> Jan 16 18:22:30 bsd10 kernel: xn0: Ethernet address: 00:16:3e:df:1b:5a
> Jan 16 18:22:30 bsd10 kernel: xenbusb_back0: <Xen Backend Devices> on xenstore0
> Jan 16 18:22:30 bsd10 kernel: xn0: backend features: feature-sg feature-gso-tcp4
> Jan 16 18:22:30 bsd10 kernel: xbd0: 8192MB <Virtual Block Device> at device/vbd/768 on xenbusb_front0
> Jan 16 18:22:30 bsd10 kernel: xbd0: attaching as ada0
> Jan 16 18:22:30 bsd10 kernel: xbd0: features: flush, write_barrier
> Jan 16 18:22:30 bsd10 kernel: xbd0: synchronize cache commands enabled.
>
> Now i did some tests with raw images and the disk performs very well (10-15% less than native throughput).

So the problem only manifests itself when using block devices as disk backends?

I've done some tests with fio using direct=1 (and an LVM volume as the backend), and it shows that disk writes are slower when using the PV drivers instead of the emulated ones. On the other hand, disk reads are faster when using the PV drivers.

Have you tried whether the 9.x series also shows the same behaviour? (You will have to compile the custom XENHVM kernel.)

Roger.
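For reference, a fio job along the lines described (direct=1 I/O against the PV-attached disk) might look like the sketch below. This is a config fragment, not the exact job used above: the device path (/dev/ada0, taken from the dmesg output), block size, and runtime are illustrative assumptions.

```ini
; Hypothetical fio job file: O_DIRECT read and write passes against the
; PV-attached disk, so results bypass the guest page cache.
[global]
direct=1          ; use O_DIRECT, avoid caching effects
ioengine=sync
bs=64k            ; illustrative block size
runtime=30
time_based

[pv-write]
rw=write
filename=/dev/ada0   ; xbd0 attaches as ada0 per the dmesg above

[pv-read]
rw=read
filename=/dev/ada0
```

Running the same job once with the PV drivers active and once with the emulated devices would reproduce the comparison described here.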