Date:      Mon, 02 May 2011 21:53:31 -0500
From:      Rusty Nejdl <rnejdl@ringofsaturn.com>
To:        <freebsd-emulation@freebsd.org>
Subject:   Re: virtualbox I/O 3 times slower than KVM?
Message-ID:  <assp.010456e99d.1d7759f6aae7910f92914a10232b4ee1@ringofsaturn.com>
In-Reply-To: <BANLkTikf2JU_KKp1tt2j8DkaqYBxMzWerw@mail.gmail.com>
References:  <10651953.1304315663013.JavaMail.root@mswamui-blood.atl.sa.earthlink.net> <BANLkTikyzGZt6YUWVc3KiYt_Of0gEBUp%2Bg@mail.gmail.com> <4DBEFBD8.8050107@mittelstaedt.us> <BANLkTi=dhtSFJm_gZhHTu1ohyE2-kQgy_A@mail.gmail.com> <4DBF227A.1000704@mittelstaedt.us> <BANLkTikf2JU_KKp1tt2j8DkaqYBxMzWerw@mail.gmail.com>

On Mon, 2 May 2011 21:39:38 -0500, Adam Vande More wrote:
> On Mon, May 2, 2011 at 4:30 PM, Ted Mittelstaedt
> <tedm@mittelstaedt.us> wrote:
>
>> that's sync within the VM.  Where is the bottleneck taking place?  If
>> the bottleneck is hypervisor to host, then the guest-to-VM write may
>> write all its data to a memory buffer in the hypervisor, which then
>> writes it to the filesystem more slowly.  In that case, killing the
>> guest without killing the VM manager will allow the buffer to finish
>> emptying, since the hypervisor isn't actually being shut down.
>
>
> No, the bottleneck is the emulated hardware inside the VM process
> container.  This is easy to observe: just start a bound process in the
> VM and watch top host-side.  Also, the hypervisor uses the native host
> I/O driver, so there's no reason for it to be slow.  Since it's the
> emulated NIC which is the bottleneck, there is nothing left to issue
> the write.  Further empirical evidence for this can be seen by
> watching gstat on a VM running with md- or ZVOL-backed storage.  I
> already utilize ZVOLs for this, so it was pretty easy to confirm no
> I/O occurs when the VM is paused or shut down.
>
>> Is his app ever going to face the extremely bad scenario, though?
>
> The point is it should be relatively easy to induce patterns you
> expect to see in production.  If you can't, I would consider that a
> problem.  Testing out theories (performance-based or otherwise) on a
> production system is not a good way to keep the continued faith of
> your clients when the production system is a mission-critical one.
> Maybe throwing more hardware at a problem is the first line of defense
> for some companies; unfortunately, I don't work for them.  Are they
> hiring? ;)  I understand the logic of such an approach and have even
> argued for it occasionally.  Unfortunately, payroll is already in the
> budget; extra hardware is not, even if it would be a net savings.
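
[Editor's note: the host-side checks Adam describes can be sketched
roughly as follows; the zvol name tank/vmdisk is purely illustrative,
not taken from the thread.]

```shell
# Watch host CPU usage while a CPU-bound process runs inside the guest;
# the VM process itself should show the load, confirming the work is
# happening in the emulated hardware, not in host I/O.
top -S

# List ZFS volumes to find the zvol backing the VM's disk
# (e.g. tank/vmdisk, an assumed name).
zfs list -t volume

# Watch GEOM I/O statistics, filtered to zvol devices; with the VM
# paused or shut down, no I/O should appear here.
gstat -f zvol
```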

I'm going to ask a stupid question: are you using bridging for your
emulated NIC?  That's how I read what you wrote, that you're starved
on the NIC side, and I saw a vast performance increase when I switched
to bridging.
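
[Editor's note: if bridging turns out to be the fix, it can be set up
from the host with VBoxManage; a rough sketch, where the VM name
"guest" and the host interface em0 are assumptions:]

```shell
# List host interfaces available for bridging.
VBoxManage list bridgedifs

# Attach the VM's first NIC to the host interface in bridged mode
# (the VM must be powered off first).
VBoxManage modifyvm "guest" --nic1 bridged --bridgeadapter1 em0
```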

Sincerely,
Rusty Nejdl


