From: Sydney Meyer
Subject: Re: Xen PVHVM with FreeBSD10 Guest
Date: Thu, 16 Jan 2014 19:38:02 +0100
To: freebsd-virtualization@freebsd.org

Well then, thanks for the hint. dmesg shows the following:

Jan 16 18:22:30 bsd10 kernel: xn0: at device/vif/0 on xenbusb_front0
Jan 16 18:22:30 bsd10 kernel: xn0: Ethernet address: 00:16:3e:df:1b:5a
Jan 16 18:22:30 bsd10 kernel: xenbusb_back0: on xenstore0
Jan 16 18:22:30 bsd10 kernel: xn0: backend features: feature-sg feature-gso-tcp4
Jan 16 18:22:30 bsd10 kernel: xbd0: 8192MB at device/vbd/768 on xenbusb_front0
Jan 16 18:22:30 bsd10 kernel: xbd0: attaching as ada0
Jan 16 18:22:30 bsd10 kernel: xbd0: features: flush, write_barrier
Jan 16 18:22:30 bsd10 kernel: xbd0: synchronize cache commands enabled.

Now I did some tests with raw images and the disk performs very well (10-15% below native throughput). Is this a known problem, or perhaps specific to this setup?

The test system is running on a Haswell Intel Core i3 CPU (4310T) with an Intel H81 chipset.

Cheers, Sydney.
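A throughput check along the lines of the dd tests mentioned further down in the thread could look like the following (a minimal sketch; the file path, block size and count are placeholders, not the exact commands behind the numbers above):

    # sequential write: 4 GiB of zeroes, flushed to disk before the timer stops
    time sh -c 'dd if=/dev/zero of=/tmp/ddtest bs=1m count=4096 && sync'

    # sequential read straight from the raw disk device, not through the file system
    dd if=/dev/ada0 of=/dev/null bs=1m count=4096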
On 16.01.2014, at 18:06, Sydney Meyer wrote:

> No, the VMs are running on local LVM volumes as the disk backend.
> 
>> On 16 Jan 2014, at 17:59, Roger Pau Monné wrote:
>> 
>>> On 16/01/14 17:41, Sydney Meyer wrote:
>>> Hello everyone,
>>> 
>>> does someone know how to check whether the paravirtualized I/O drivers
>>> from Xen are loaded and working in FreeBSD 10? To my understanding it
>>> is no longer necessary to compile a custom kernel with PVHVM enabled,
>>> right? In /var/log/messages I can see the XN* and XBD* devices, and
>>> the network performance is very good (saturating the gigabit link)
>>> compared to qemu-emulated, but the disk performance is not as good;
>>> in fact, it is even slower than emulated with qemu (0.10.2). I did
>>> some tests with dd and bonnie++, turned caching off on the host, and
>>> tried to sync directly to disk; PV-on-HVM is on average 15-20% slower
>>> than QEMU in throughput. Both VMs are running on the same host, on a
>>> Xen 4.1 hypervisor with QEMU 0.10.2 and a Debian Linux 3.2 kernel as
>>> Dom0.
>> 
>> PV drivers will be used automatically if Xen is detected. You should see
>> something like this in dmesg:
>> 
>> xn0: at device/vif/0 on xenbusb_front0
>> xn0: Ethernet address: 00:16:3e:47:d4:52
>> xenbusb_back0: on xenstore0
>> xn0: backend features: feature-sg feature-gso-tcp4
>> xbd0: 20480MB at device/vbd/51712 on xenbusb_front0
>> xbd0: features: flush, write_barrier
>> xbd0: synchronize cache commands enabled.
>> 
>> Are you using a raw file as a disk?
>> 
>> Roger.
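To double-check from inside the guest that the netfront/blkfront drivers attached the way Roger describes, something like the following should work (a hedged sketch; device names and numbers will differ per VM):

    # Xen PV network/disk frontends announce themselves at boot
    dmesg | grep -E '^(xn|xbd)[0-9]'

    # list the disks the kernel registered (the PV-backed disk should appear here)
    sysctl -n kern.disks

For comparison, on FreeBSD releases before 10.0 the PVHVM drivers were not part of GENERIC and a custom kernel was needed; if I remember correctly, the usual additions to the kernel configuration file were:

    options XENHVM
    device xenpci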