From: Borja Marcos <borjam@sarenet.es>
To: freebsd-xen@freebsd.org
Subject: Fun with FreeBSD 9.1 and Xen Cloud 1.6
Date: Thu, 18 Oct 2012 15:47:23 +0200

Hello,

We've been doing some tests with FreeBSD and Xen Cloud 1.6 beta, with some surprising results.

Our test bed, for now, is a single Xen Cloud host plus the storage backend, connected for now through one Gigabit Ethernet interface. We are using NFS, with the Xen Cloud scripts tweaked a little so that they use NFSv4. It works.

The storage server has a ZFS pool of 7 SSD disks configured as a raidz2 group. The disks are connected to the SAS backplane of a Dell R410 server, wired to a Dell H200 flashed so that it won't insist on creating RAID volumes: it works as an HBA. The OS is FreeBSD 9.1-RC2.

We have found an enormous difference in performance for the FreeBSD virtual machines. The effect of tweaking the "sync" property of the dataset on the storage server is dramatic for the XENHVM kernel, and much less marked for the GENERIC kernel.

The benchmark we used is bonnie++, and the write performance (what it calls "intelligent writing") shows this dramatic difference:

                       GENERIC      XENHVM
  zfs sync=standard    ~30 MB/s     ~6 MB/s
  zfs sync=disabled    ~55 MB/s     ~80 MB/s

Read performance is irrelevant here: it saturates the 1 Gbps interface in both cases, as expected.

The surprising result is that we get terrible performance with the XENHVM kernel when we leave the default "sync=standard" setting on the ZFS dataset. This setting, even with a performance hit, is preferred because it guarantees the integrity of the virtual disk images in case of a crash or outage. And when this happens it of course ruins the performance for all the other virtual machines, in case we have several running. It seems it's insisting on flushing the ZIL constantly.

Any ideas?
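For anyone who wants to reproduce this, the test amounts to flipping the sync property on the storage server between runs and re-running bonnie++ in the guest. The dataset name and the bonnie++ parameters below are only illustrative, not necessarily the exact values we used:

    # On the storage server (dataset name illustrative)
    zfs get sync tank/xen-images            # check the current setting
    zfs set sync=standard tank/xen-images   # the default, slow with XENHVM
    zfs set sync=disabled tank/xen-images   # the fast case in the table above

    # Inside the FreeBSD guest (file size should exceed the guest's RAM)
    bonnie++ -d /mnt/bench -s 4g -u root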
The GENERIC kernel detects the disk like this:

    atapci0: port 0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0xc220-0xc22f at device 1.1 on pci0
    ata0: at channel 0 on atapci0
    ata1: at channel 1 on atapci0
    ada0 at ata0 bus 0 scbus0 target 0 lun 0
    ada0: ATA-7 device
    ada0: 16.700MB/s transfers (WDMA2, PIO 8192bytes)
    ada0: 20480MB (41943040 512 byte sectors: 16H 63S/T 16383C)
    ada0: Previously was known as ad0
    SMP: AP CPU #1 Launched!

and the XENHVM kernel detects this:

    xbd0: 20480MB at device/vbd/768 on xenbusb_front0
    xbd0: attaching as ad0
    GEOM: ad0s1: geometry does not match label (16h,63s != 255h,63s).

(I tried patching /usr/src/sys/dev/xen/blkfront/blkfront.c to force it to attach using the new ATA driver instead of the old one, but the result is still the same.)

Borja.
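P.S. For what it's worth, the constant-flush theory can be checked on the storage server while the guest runs the write test; something along these lines (pool name illustrative) should show busy SSDs doing many small operations rather than large streaming writes:

    zpool iostat -v tank 1       # per-vdev operations and bandwidth, 1s interval
    gstat -f 'da[0-9]+$'         # high ops/s and %busy with low MB/s on the
                                 # SSDs points at flush-bound (ZIL) writes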