Date:      Thu, 18 Oct 2012 15:47:23 +0200
From:      Borja Marcos <borjam@sarenet.es>
To:        freebsd-xen@freebsd.org
Subject:   Fun with FreeBSD 9.1 and Xen Cloud 1.6
Message-ID:  <6FF8A9C7-CC9E-40B6-BEBC-CB2ABE91967C@sarenet.es>


Hello,

We've been doing some tests with FreeBSD and Xen Cloud 1.6 beta, with some surprising results.

Our test bed, for now, is a single Xen Cloud host plus the storage backend, connected for the moment through one Gigabit Ethernet interface. We are using NFS, tweaking the Xen Cloud scripts a little so that it uses NFSv4. It works.
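
In case anyone wants to reproduce it, the NFSv4 side of the storage server is nothing special; roughly this (paths and the network below are placeholders, not our real ones):

  # /etc/exports
  V4: /tank
  /tank/xen  -maproot=root  -network 192.0.2.0 -mask 255.255.255.0

  # /etc/rc.conf
  nfs_server_enable="YES"
  nfsv4_server_enable="YES"
  nfsuserd_enable="YES"
  mountd_enable="YES"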

The storage server has a ZFS pool of 7 SSD disks, configured as a raidz2 group. The disks are connected to the SAS backplane of a Dell R410 server, wired to a Dell H200 flashed so that it won't insist on creating RAID volumes: it works as an HBA. The OS is FreeBSD 9.1-RC2.
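
For completeness, the pool is just one raidz2 vdev with the seven disks, i.e. something like this (device names are placeholders):

  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6
  zpool status tank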

We have found an enormous difference in performance for the FreeBSD virtual machines. The effect of tweaking the "sync" property for the dataset on the storage server is dramatic for the XENHVM kernel, less marked for the GENERIC kernel.
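
The property we are flipping between runs is just this (dataset name is a placeholder):

  zfs set sync=standard tank/xen    # default: honour synchronous writes
  zfs set sync=disabled tank/xen    # ignore sync requests -- testing only
  zfs get sync tank/xen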

The benchmark we have used is Bonnie++, and the write performance (what it calls "intelligent writing") shows this dramatic difference:


                             GENERIC        XENHVM

zfs sync=standard            ~30 MB/s       ~6 MB/s
zfs sync=disabled            ~55 MB/s       ~80 MB/s


The read performance is irrelevant: it saturates the 1 Gbps interface in both cases, as expected.
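
For the record, the kind of bonnie++ run we mean is simply something like this inside the guest (size and path are placeholders, not the exact command line we used):

  bonnie++ -d /mnt/test -s 4096 -u root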

The surprising result is: we get terrible performance with the XENHVM kernel when we leave the default setting of "sync=standard" for the ZFS dataset. This setting, even with a performance hit, is preferred to guarantee the integrity of the virtual disk images in case of a crash or outage. And when this happens, of course, it ruins the performance for all the other virtual machines, in case we have several running. It seems to be insisting on flushing the ZIL constantly.
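
For what it's worth, the ZIL flushes should be easy to confirm on the storage server while the benchmark runs, with something like this (the probe name assumes the stock in-kernel ZFS zil_commit() path, and "tank" is a placeholder pool name):

  dtrace -n 'fbt::zil_commit:entry { @flushes = count(); }'
  zpool iostat -v tank 1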

Any ideas?

The GENERIC kernel detects the disk like this:

atapci0: <Intel PIIX3 WDMA2 controller> port 0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0xc220-0xc22f at device 1.1 on pci0
ata0: <ATA channel> at channel 0 on atapci0
ata1: <ATA channel> at channel 1 on atapci0
ada0 at ata0 bus 0 scbus0 target 0 lun 0
ada0: <QEMU HARDDISK 0.10.2> ATA-7 device
ada0: 16.700MB/s transfers (WDMA2, PIO 8192bytes)
ada0: 20480MB (41943040 512 byte sectors: 16H 63S/T 16383C)
ada0: Previously was known as ad0
SMP: AP CPU #1 Launched!


and the XENHVM kernel detects this:
xbd0: 20480MB <Virtual Block Device> at device/vbd/768 on xenbusb_front0
xbd0: attaching as ad0
GEOM: ad0s1: geometry does not match label (16h,63s != 255h,63s).

(I tried patching /usr/src/sys/dev/xen/blkfront/blkfront.c to force it to attach using the new ATA driver instead of the old one, but the result is still the same.)

Borja.