Date:      Mon, 2 May 2011 14:43:38 -0500
From:      Adam Vande More <amvandemore@gmail.com>
To:        Ted Mittelstaedt <tedm@mittelstaedt.us>
Cc:        freebsd-emulation@freebsd.org
Subject:   Re: virtualbox I/O 3 times slower than KVM?
Message-ID:  <BANLkTi=dhtSFJm_gZhHTu1ohyE2-kQgy_A@mail.gmail.com>
In-Reply-To: <4DBEFBD8.8050107@mittelstaedt.us>
References:  <10651953.1304315663013.JavaMail.root@mswamui-blood.atl.sa.earthlink.net> <BANLkTikyzGZt6YUWVc3KiYt_Of0gEBUp+g@mail.gmail.com> <4DBEFBD8.8050107@mittelstaedt.us>

On Mon, May 2, 2011 at 1:45 PM, Ted Mittelstaedt <tedm@mittelstaedt.us> wrote:

> On 5/2/2011 5:09 AM, Adam Vande More wrote:
>
>> On Mon, May 2, 2011 at 12:54 AM, John <aqqa11@earthlink.net> wrote:
>>
>>> On both the FreeBSD host and the CentOS host, the copying only takes 1
>>> second, as tested before.  Actually, the classic "dd" test is slightly
>>> faster on the FreeBSD host than on the CentOS host.
>>>
>>> The storage I chose for the virtualbox guest is a SAS controller.  I
>>> found by default it did not enable "Use Host I/O Cache".  I just
>>> enabled that and rebooted the guest.  Now the copying on the guest
>>> takes 3 seconds.  Still, that's clearly slower than 1 second.
>>>
>>> Any other things I can try?  I love FreeBSD and hope we can sort this
>>> out.
>>
>> Your FreeBSD host/guest results seem relatively consistent with what I
>> would expect, since VM block I/O isn't really that great yet; however,
>> the results in your Linux VM seem too good to be true.
>>
>
> We know that Linux likes to run with the condom off on the file system
> (async writes), just because it helps them win all the know-nothing
> benchmark contests in the ragazines out there, and FreeBSD does not,
> because its users want to have an intact filesystem in case the
> system crashes or loses power.  I'm guessing this is the central issue
> here.
>
>
>> Have you tried powering off the Linux VM immediately after the cp
>> exits and md5'ing the two files?  This will verify that your writes
>> completed successfully.
>>
>>
> That isn't going to do anything because the VM will take longer than 3
> seconds to close, and if it's done gracefully then the VM won't close
> until the writes are all complete.


No, this is not correct.  You can kill the VM before it has a chance to
sync (in VBox, the poweroff button does this, and the qemu/kvm stop
command is not a graceful shutdown either).  I haven't actually tested
this, but it would seem to be a large bug if it doesn't work this way,
since there are also graceful shutdown options in both hypervisors and
the documentation states you may lose data with this option.  If nothing
else, the real power cord will do the same thing.
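
Something like this would exercise it (an untested sketch; the VM name
"centos-guest" and the paths are placeholders, adjust to your setup):

  # inside the Linux guest: do the copy, nothing else afterwards
  cp /data/bigfile /data/bigfile.copy

  # on the host, the moment cp returns, yank the virtual power cord
  VBoxManage controlvm "centos-guest" poweroff

  # boot the guest again and compare the original to the copy
  md5sum /data/bigfile /data/bigfile.copy

If the checksums differ, or the copy comes up short, the 1-second figure
was the guest's page cache talking, not the disk.  GNU dd's conv=fsync
gets at the same question from the other side: dd won't exit until the
data is flushed, which takes the async-write advantage out of the
comparison.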


> http://ivoras.sharanet.org/blog/tree/2009-12-02.using-ministat.html
>> http://lists.freebsd.org/pipermail/freebsd-current/2011-March/023435.html
>>
>>
> However, that tool doesn't mimic real-world behavior, either.


That tool is for analyzing benchmarks, not running them.
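
For what it's worth, the usage side is trivial; time the same cp run a
handful of times on each system, save one elapsed-time figure per line
(the file names here are made up), and then:

  ministat vbox-times.txt kvm-times.txt

It prints min/max/median/average/stddev for each data set and tells you
whether the difference is statistically significant at the default 95%
confidence level, which is exactly the question a 3s-vs-1s anecdote
can't answer.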


> The only
> real way to test is to run both systems in production and see what
> happens.
>

Every dev/testing environment I've set up or worked with has had a
method for simulating the extreme scenarios production might face, like
10,000 devices phoning home at once to an aggregation network with an
equally severe load coming from the web frontend.  I thought this was
pretty common practice.

> I would not make a choice of going with one system over another based
> on a single large file write difference of 2 seconds.  We have to
> assume he's got an application that makes hundreds to thousands of large
> file writes where this discrepancy would actually make a difference.
>

From the information given, that's not an assumption I'm comfortable
with.  The OP will have to find his own way on that, whether it's
something like blogbench or bonnie or "real data", with real data being
the best.  Agreed that the discrepancy surely would make a difference if
it's consistent across his normal workload.  However, there are many
cases where that might not be true.
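
If he does go synthetic, the one rule is to run the identical invocation
in both guests, e.g. with bonnie++ (the directory, size, and user below
are placeholders):

  # same command in the VirtualBox guest and the KVM guest
  bonnie++ -d /data/bench -s 4096 -n 128 -u nobody

and then repeat it enough times that ministat can say whether the gap is
real rather than run-to-run noise.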

-- 
Adam Vande More


