From owner-freebsd-emulation@FreeBSD.ORG Fri Nov 23 15:52:09 2012
Date: Fri, 23 Nov 2012 18:52:08 +0300
From: Alex Chistyakov <alexclear@gmail.com>
To: Bernhard Fröhlich
Cc: "freebsd-emulation@freebsd.org" <freebsd-emulation@freebsd.org>
Subject: Re: VirtualBox 4.2.4 on FreeBSD 9.1-PRERELEASE problem: VMs behave
 very different when pinned to different cores

On Fri, Nov 23, 2012 at 6:20 PM, Bernhard Fröhlich wrote:
> On Fri, Nov 23, 2012 at 2:15 PM, Alex Chistyakov wrote:
>> Hello,
>>
>> I am back with another problem. As I discovered previously, setting
>> CPU affinity explicitly helps to get decent performance on guests,
>> but guest performance is very different on core #0 and cores #5 or
>> #7. Basically, when I use 'cpuset -l 0 VBoxHeadless -s "Name" -v on'
>> to start the VM, it is barely usable at all. The best performance is
>> on cores #4 and #5 (I believe they are the same physical core due to
>> HT). #7 and #8 are twice as slow as #5, #0 and #1 are the slowest,
>> and the other cores lie in the middle.
>> If I disable the tickless kernel on a guest running on #4 or #5, it
>> becomes as slow as a guest running on #7, so I suspect this is a
>> timer-related issue.
>> I also discovered that there are quite a lot of system interrupts on
>> the slow guests (%si is about 10-15), but Munin does not render them
>> on its CPU graphs for some reason.
>> All my VMs are on cores #4 and #5 right now, but I want to utilize
>> the other cores too. I am not sure what to do next; this looks like
>> a VirtualBox bug. What can be done to solve this?
>
> I do not want to sound ignorant, but what do you expect? Each VBox
> VM consists of somewhere around 15 threads, and some of them are the
> vCPUs. You bind them all to the same CPU, so they will fight for CPU
> time on that single core, and latency will become as unpredictable as
> performance. And then you add more and more craziness by running it
> on cpu0 and an HT-enabled CPU ...

Your point regarding HTT is perfectly valid, so I just disabled it in
the BIOS. Unfortunately, that did not help.
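(A quick sanity check that HTT is really off, for anyone replaying
this: compare the logical CPU count and the scheduler topology before
and after the BIOS change. This assumes SCHED_ULE -
kern.sched.topology_spec does not exist under SCHED_4BSD - and the
exact counts below are specific to this box.)

  # logical CPUs seen by the kernel; dropped from 12 to 6 here
  sysctl hw.ncpu
  # ULE's view of the topology; with HTT on, sibling pairs appear
  # as <group> entries flagged THREAD
  sysctl kern.sched.topology_spec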
When I run a single VM on CPU #0, I get the following load pattern on
the host:

last pid:  2744;  load averages: 0.93, 0.63, 0.31   up 0+00:05:25  19:37:17
368 processes: 8 running, 344 sleeping, 16 waiting
CPU 0:  14.7% user, 0.0% nice, 85.3% system, 0.0% interrupt,  0.0% idle
CPU 1:   0.0% user, 0.0% nice,  0.0% system, 0.0% interrupt,  100% idle
CPU 2:   0.0% user, 0.0% nice,  0.0% system, 0.0% interrupt,  100% idle
CPU 3:   0.0% user, 0.0% nice,  0.0% system, 0.0% interrupt,  100% idle
CPU 4:   0.0% user, 0.0% nice,  0.0% system, 0.0% interrupt,  100% idle
CPU 5:   0.0% user, 0.0% nice,  0.0% system, 0.0% interrupt,  100% idle
Mem: 410M Active, 21M Inact, 921M Wired, 72K Cache, 60G Free
ARC: 136M Total, 58M MRU, 67M MFU, 272K Anon, 2029K Header, 8958K Other
Swap: 20G Total, 20G Free

And when I run it on CPU #4, the situation is completely different:

last pid:  2787;  load averages: 0.05, 0.37, 0.31   up 0+00:11:45  19:43:37
368 processes: 9 running, 343 sleeping, 16 waiting
CPU 0:   0.0% user, 0.0% nice,  0.0% system, 0.0% interrupt,  100% idle
CPU 1:   0.0% user, 0.0% nice,  0.0% system, 0.0% interrupt,  100% idle
CPU 2:   0.0% user, 0.0% nice,  0.0% system, 0.0% interrupt,  100% idle
CPU 3:   0.0% user, 0.0% nice,  0.0% system, 0.0% interrupt,  100% idle
CPU 4:   1.8% user, 0.0% nice, 11.0% system, 0.0% interrupt, 87.2% idle
CPU 5:   0.0% user, 0.0% nice,  0.0% system, 0.0% interrupt,  100% idle
Mem: 412M Active, 20M Inact, 1337M Wired, 72K Cache, 60G Free
ARC: 319M Total, 136M MRU, 171M MFU, 272K Anon, 2524K Header, 9340K Other
Swap: 20G Total, 20G Free

Regarding pinning the VM to a certain core - yes, I agree with you
that it is better not to pin VMs explicitly, but I was forced to do
this. If I do not pin the VM explicitly, it gets scheduled onto a
"bad" core sooner or later and the whole VM becomes unresponsive. And
I was able to run as many as 6 VMs on HTT cores #4/#5 quite
successfully. Those VMs were staging machines without much load on
them, but I want to put some production resources on this host too -
that is why I want to know how to utilize the other cores safely.

Thank you,

--
SY, Alex
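P.S. For the archives, two things I plan to try next. First, spreading
the VMs out with one dedicated core per VM and core 0 left to the
host - the VM names and core numbers below are placeholders, not my
real layout:

  # hypothetical layout: host keeps core 0, each VM gets its own core
  cpuset -l 1 VBoxHeadless -s "staging-1" -v on &
  cpuset -l 2 VBoxHeadless -s "staging-2" -v on &
  # check which cores a running VM process is actually bound to
  cpuset -g -p <pid>

Second, since the symptoms look timer-related, inspecting the host's
event timer via the kern.eventtimer sysctls; switching to HPET is just
a guess on my part and assumes HPET shows up in the choice list:

  # available event timers and the one currently in use
  sysctl kern.eventtimer.choice
  sysctl kern.eventtimer.timer
  # try HPET instead of the per-core LAPIC timer (writable at runtime)
  sysctl kern.eventtimer.timer=HPET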