From: Ivan Voras
Date: Sun, 20 Feb 2011 02:38:12 +0100
To: Hugo Silva
Cc: freebsd-xen@freebsd.org
Subject: Re: XenServer?

On 19 February 2011 00:25, Ivan Voras wrote:
> No such luck here; I've just tried an amd64 machine (8-STABLE from
> today) in a new installation of XenServer 5.6 and while the GENERIC
> kernel works stable enough, the XENHVM kernel produces all kinds of
> timer-related problems, accompanied by messages like:
>
> Feb 18 23:20:03 xbsd kernel: calcru: runtime went backwards from
> 28669021884109 usec to 22622950 usec for pid 11 (idle)

I've tried comparing the performance of the GENERIC and XENHVM kernels
on this machine with unixbench, and it points to GENERIC being faster
in everything, though I don't know whether this is an artifact of the
bad timer behaviour (except for one test; more on this later).
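Incidentally, the clock misbehaviour can be watched directly, without
going through unixbench. A trivial loop along these lines (just a
sketch; it assumes nothing beyond clock_gettime(2) with
CLOCK_MONOTONIC, and it busy-spins, so only run it on an otherwise
idle machine) prints whenever the monotonic clock steps backwards,
which is the same symptom calcru complains about:

/*
 * Sketch: report backwards steps of CLOCK_MONOTONIC in a tight loop.
 * On FreeBSD this needs no extra libraries; the file name is only
 * illustrative: cc -o clockcheck clockcheck.c
 */
#include <stdio.h>
#include <time.h>

int
main(void)
{
	struct timespec prev, now;

	clock_gettime(CLOCK_MONOTONIC, &prev);
	for (;;) {
		clock_gettime(CLOCK_MONOTONIC, &now);
		/* The monotonic clock must never decrease. */
		if (now.tv_sec < prev.tv_sec ||
		    (now.tv_sec == prev.tv_sec && now.tv_nsec < prev.tv_nsec))
			printf("clock went backwards: %ld.%09ld -> %ld.%09ld\n",
			    (long)prev.tv_sec, prev.tv_nsec,
			    (long)now.tv_sec, now.tv_nsec);
		prev = now;
	}
}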
Here are the results:

==> unixbench-generic.txt <==
TEST                                      BASELINE      RESULT   INDEX
Dhrystone 2 using register variables      116700.0  13828395.1  1185.0
Double-Precision Whetstone                    55.0      4104.6   746.3
Execl Throughput                              43.0       885.8   206.0
File Copy 1024 bufsize 2000 maxblocks       3960.0    142874.0   360.8
File Copy 256 bufsize 500 maxblocks         1655.0     65855.0   397.9
File Copy 4096 bufsize 8000 maxblocks       5800.0    146858.0   253.2
Pipe Throughput                            12440.0    936158.7   752.5
Pipe-based Context Switching                4000.0     54004.1   135.0
Process Creation                             126.0      1519.2   120.6
Shell Scripts (8 concurrent)                   6.0       343.0   571.7
System Call Overhead                       15000.0    578046.6   385.4

==> unixbench-xenhvm.txt <==
TEST                                      BASELINE      RESULT     INDEX
Dhrystone 2 using register variables      116700.0  13718470.5    1175.5
Double-Precision Whetstone                    55.0    912662.7  165938.7
Execl Throughput                              43.0       750.2     174.5
File Copy 1024 bufsize 2000 maxblocks       3960.0     96273.0     243.1
File Copy 256 bufsize 500 maxblocks         1655.0     79155.0     478.3
File Copy 4096 bufsize 8000 maxblocks       5800.0     91023.0     156.9
Pipe Throughput                            12440.0    872682.9     701.5
Pipe-based Context Switching                4000.0     50348.4     125.9
Process Creation                             126.0      1511.7     120.0
Shell Scripts (8 concurrent)                   6.0       225.9     376.5
System Call Overhead                       15000.0    561000.3     374.0

Only in the whetstone test (and consistently only in that one, across
multiple runs) is the timing very visibly screwed up, which is obvious
when watching the test execute: it takes orders of magnitude longer
than it should (hours) and produces orders-of-magnitude "better"
results than it should. It skews the "final score" so badly that it's
unusable. Whetstone is FPU-intensive, which makes it unique among
these tests.

Just for comparison, here are the results on bare hardware with the
same OS and base hardware (motherboard, CPU, RAM), but different
drives and a different number of CPUs:

TEST                                      BASELINE      RESULT   INDEX
Dhrystone 2 using register variables      116700.0  15419836.3  1321.3
Double-Precision Whetstone                    55.0      3566.9   648.5
Execl Throughput                              43.0      2512.4   584.3
Pipe Throughput                            12440.0   1079209.1   867.5
Pipe-based Context Switching                4000.0     94001.1   235.0
Process Creation                             126.0      4752.0   377.1
System Call Overhead                       15000.0    676244.5   450.8

The whetstone result indicates that even the GENERIC kernel might have
similar timer problems.
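P.S. For reading the tables: the INDEX column is just RESULT /
BASELINE * 10 (this checks out against every row above), so the broken
whetstone timing feeds straight into the score. Under GENERIC that is
4104.6 / 55.0 * 10 = 746.3, a plausible figure, while under XENHVM it
becomes 912662.7 / 55.0 * 10 = 165938.7, which single-handedly drowns
out the other ten results in the combined score.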