From owner-freebsd-emulation@FreeBSD.ORG Tue May  3 02:53:33 2011
From: Rusty Nejdl <rnejdl@ringofsaturn.com>
To: freebsd-emulation@freebsd.org
Date: Mon, 02 May 2011 21:53:31 -0500
Subject: Re: virtualbox I/O 3 times slower than KVM?

On Mon, 2 May 2011 21:39:38 -0500, Adam Vande More wrote:
> On Mon, May 2, 2011 at 4:30 PM, Ted Mittelstaedt wrote:
>
>> That's sync within the VM. Where is the bottleneck taking place? If
>> the bottleneck is hypervisor to host, then the guest-to-VM write may
>> write all its data to a memory buffer in the hypervisor, which then
>> writes it to the filesystem more slowly. In that case, killing the
>> guest without killing the VM manager will allow the buffer to finish
>> emptying, since the hypervisor isn't actually being shut down.
>
> No, the bottleneck is the emulated hardware inside the VM process
> container. This is easy to observe: just start a bound process in the
> VM and watch top on the host side. Also, the hypervisor uses the
> native host I/O driver, so there's no reason for it to be slow. Since
> it's the emulated NIC which is the bottleneck, there is nothing left
> to issue the write. Further empirical evidence for this can be seen
> by watching gstat on a VM running with md- or ZVOL-backed storage. I
> already use ZVOLs for this, so it was pretty easy to confirm that no
> I/O occurs when the VM is paused or shut down.
>
>> Is his app ever going to face the extremely bad scenario, though?
>
> The point is that it should be relatively easy to induce the patterns
> you expect to see in production. If you can't, I would consider that
> a problem. Testing out theories (performance-based or otherwise) on a
> production system is not a good way to keep the continued faith of
> your clients when the production system is mission critical. Maybe
> throwing more hardware at a problem is the first line of defense for
> some companies; unfortunately, I don't work for them. Are they
> hiring? ;) I understand the logic of such an approach and have even
> argued for it occasionally. Unfortunately, payroll is already in the
> budget; extra hardware is not, even if it would be a net savings.
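For anyone who wants to repeat the gstat/ZVOL check Adam describes
above, a rough sketch on the host side; the pool name "tank" and the
VM/ZVOL name "vboxtest" are only examples, adjust to your setup:

    # create a ZVOL to back the guest disk
    zfs create -V 20G tank/vboxtest

    # wrap the raw ZVOL device in a VMDK that VirtualBox can attach
    VBoxManage internalcommands createrawvmdk \
        -filename ./vboxtest.vmdk -rawdisk /dev/zvol/tank/vboxtest

    # watch host-side I/O on the ZVOL while the guest is running,
    # paused, and shut down
    gstat -f 'zvol/tank/vboxtest'

    # and watch the VirtualBox process/threads host-side
    top -SH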
I'm going to ask a stupid question... are you using bridging for your
emulated NIC? That's how I read what you wrote, that you are starved
on the NIC side, and I saw a vast performance increase after switching
to bridging (a sample VBoxManage invocation is below).

Sincerely,
Rusty Nejdl
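P.S. A minimal sketch of switching a guest's first NIC to bridged
mode, assuming a VM named "vboxtest" and a host interface em0 (both
names are just examples):

    VBoxManage modifyvm "vboxtest" --nic1 bridged --bridgeadapter1 em0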