Date:      Tue, 11 Feb 2014 00:18:08 -0800
From:      Ted Mittelstaedt <tedm@mittelstaedt.us>
To:        freebsd-emulation@freebsd.org
Subject:   Re: virtualbox tips for performance specific to FreeBSD-10 hosts
Message-ID:  <52F9DCC0.6010904@mittelstaedt.us>
In-Reply-To: <CAE-m3X3vuYDqkvMQOAWbZmieoJ-oJH2Sgih-8c-SBSWR7Qcyvw@mail.gmail.com>
References:  <20140208095107.GA1232@potato.growveg.org> <CAE-m3X3vuYDqkvMQOAWbZmieoJ-oJH2Sgih-8c-SBSWR7Qcyvw@mail.gmail.com>

Arrg!!  After reading this I just had to jump in.

On 2/10/2014 12:39 AM, Bernhard Fröhlich wrote:
> On Sat, Feb 8, 2014 at 10:51 AM, John<freebsd-lists@potato.growveg.org>  wrote:
>> Hello list,
>>
>> As subject, can anyone recommend any tips for getting optimum
>> performance from various guests on a freebsd-10-R host? I've looked at
>> the page about changing polling interval but that only applies to
>> freebsd guests. I'm looking advice specifically for linux guests on this
>> host. Guests are ubuntu and opensuse.
>>
>> Machine is a Xeon E5-2650L @ 1.80GHz with 32 cores, 192GB RAM and 10TB
>> available storage on zfs.
>
> I would not recommend using VirtualBox on such a box. VirtualBox is a desktop
> virtualisation product and those specs are too high for it to make good use
> of them.
>

Rubbish!!!!!!  What is the goal here?  To run a couple of sessions or lots 
and lots?  He said "various guests"; you seem to be assuming he wants to
run a couple hundred on the server.

> One issue you will run into is ZFS ARC - with that amount of memory it will take
> quite some time to fill up, but ZFS ARC and VirtualBox wired memory will start
> fighting each other. So I recommend limiting the ZFS ARC to some sane amount.
> (32GB?)
>

Yes, you should limit ZFS to a reasonable amount of RAM, unless your goal 
is to run a fileserver, of course.  But if you're not running a fileserver, 
then why are you running ZFS to begin with?
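
For what it's worth, capping the ARC is a one-line tunable.  Something
like this in /boot/loader.conf will hold it to Bernhard's suggested 32GB
(the value is in bytes; pick whatever fits your workload):

    # /boot/loader.conf -- cap the ZFS ARC at 32GB (value in bytes)
    vfs.zfs.arc_max="34359738368"

Loader tunables take effect on the next reboot.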

And, what disks and what array are you running?  A bunch of 7200 RPM
SATA desktop disks on motherboard SATA ports, or a hardware array card
with 15,000 RPM drives in it?

> VirtualBox has quite a bit more overhead than all the other server-grade
> virtualisation products out there, and that is especially true for I/O.

Any virtualizer incurs a significant I/O penalty; that is really a
separate discussion.  If that's a problem for you, you shouldn't be
virtualizing.  But if it's not, then RAM and CPU are not an issue.

I ran VirtualBox on a 2GB FreeBSD server for years with no noticeable 
impact on the host and no noticeable impact on the guest.

> With FreeBSD 10 you should already use AHCI on the host, and the Linux
> guests will likely use a SATA controller and AHCI too. That should be
> the minimum and is also what we have already written down in the vbox
> tuning notes:
>
> https://wiki.freebsd.org/VirtualBox/Tuning
>
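For anyone who hasn't done it, switching a VM over to the AHCI/SATA
controller is roughly this (the VM name and disk path here are made up,
adjust to your setup):

    # create a SATA (AHCI) controller and attach the disk to it
    VBoxManage storagectl "ubuntu-guest" --name "SATA" --add sata \
        --controller IntelAhci
    VBoxManage storageattach "ubuntu-guest" --storagectl "SATA" \
        --port 0 --device 0 --type hdd --medium /vms/ubuntu-guest.vdi
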
>> No issues so far with FreeBSD on FreeBSD. Just looking for advice on Linux
>> guests on FreeBSD with VirtualBox. Or perhaps there's something better than
>> VirtualBox? Basically all guests need to be isolated from one another.
>> It makes backing up and restoring systems pain-free.
>
> So far these are the mainstream candidates:
>
> BSD:
> - BHyVe
>
> Linux:
> - KVM
> - Xen
>
> Proprietary:
> - VMware ESXi
> - XenServer
>
> I had a look at server virtualisation products myself some months ago
> and came to the conclusion that there is no pain-free server virtualisation
> product that is focused on ONE machine, with a proper web interface, with a
> reasonably easy installation, and for free.
>

A web interface is unnecessary and IMHO quite undesirable.

> VMware ESXi came close but it fails badly with the web interface (no, I don't
> consider the 8-16GB of RAM required for the web interface appropriate).
>
> RedHat's oVirt also made a good impression, but I broke it within half an hour
> and the web interface never detected the nodes properly. Also, oVirt really wants
> you to have a dedicated storage box, which I wanted to avoid. Running it all on
> one machine has been on their todo list for years. No, I don't want to run 3 boxes
> for two VMs.
>
> OpenStack was just a mess and I gave up because all the different components
> didn't fit into my small head. It really looks like something for building your
> own cloud with 1000+ nodes.
>
> There are quite a few more KVM-based products out there which I didn't try,
> just because I gave up at that point and went with the free ESXi and a Windows
> VM with vSphere Client. The ones I remember that were also appropriate
> candidates for my search were Ubuntu 12.04 LTS with KVM + convirt2, Proxmox,
> or a simple Linux distribution and shell.
>
> So VirtualBox + phpvirtualbox does a few things very, very well, and I love it
> for that on small boxes with light load, but it's not a proper server
> virtualisation product.
>

That is simply not true.  "A proper server virtualization product" - 
this kind of loose definition does not promote clarity.  A "small box" 
today?  Let me tell you, in this city people are DISCARDING 64-bit 8GB 
servers with 2.5GHz 4-core CPUs due to obsolescence.

Five years ago that would NOT have been considered a "small" box.
"Small" is a very relative term.

If, for example, you happen to have a server that's doing some real work 
and you happen to need a couple of sessions virtualized on it, well then
VirtualBox or VMware Player or any of those "desktop" virtualizers are
just fine. In that case, those ARE proper server virtualizers.

> What I would really like to see is a FreeNAS-like appliance for virtualisation
> with a web interface and based on BHyVe.

Uh huh.

> The Linux KVM stuff is not quite
> there yet when it comes to "painless".
>

You can boot a system off an Ubuntu CD and have a desktop up and running 
in 20 minutes - even on a fakeraid array - and you can download VMware 
Player from VMware and install it with something like 2 commands. 
It doesn't get any easier than that - even Hyper-V from MS is not any easier.
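
(On Linux the Player install really is about two commands - the installer
ships as a .bundle you just execute; the exact filename varies by version:

    # download the installer from VMware, then run it
    chmod +x VMware-Player-*.bundle
    sudo sh ./VMware-Player-*.bundle
)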

Or, ESX.  That's Linux under the hood and it's unbelievably painless.

Look, you have to accept a fundamental fact about virtualization:

GUESTS DO NOT RUN AS FAST WHEN TRULY VIRTUALIZED.


When you're building a server, if you virtualize you're fundamentally 
doing it to separate apps.  Encapsulating the app within an entire OS 
makes it a lot easier to move the app around and to keep other
apps from interfering with it.  So you would want a completely isolated
guest.

But the benefit of virtualizing comes at a cost in performance.  That 
is just the way it works.  For most people who virtualize, it's more 
important to have that separation than it is to have performance.

If your goal is to build a fileserver, then in my view you're foolish
to virtualize it.  That is why oVirt calls for a separate fileserver.
And even on big ESXi servers, the trend is to set up an NFS server.
That is what I do when I build ESXi servers.
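
On a FreeBSD storage box that's just a few lines - something like the
following, where the export path and the ESXi host's address are examples
only:

    # /etc/rc.conf -- turn on the NFS server
    nfs_server_enable="YES"
    mountd_enable="YES"
    rpcbind_enable="YES"

    # /etc/exports -- export the VM store to the ESXi host
    /vmstore -alldirs -maproot=root 192.168.1.10

The ESXi host then mounts /vmstore as an NFS datastore.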

Now, with all of that said, I will say this:

If your goal is to run MAINLY Linux guests, and run A LOT OF THEM, and 
NOT run any apps on the server at all, then in my view there are really 
only two games in town:  ESXi and XenServer.

The reason for this is that both of them provide guest additions that
contain hardware-assisted paravirtualized device drivers for the
critical I/O.  VirtualBox only does this for the network driver.
Xen does it for the kernel and disk for many OSes, and they even
now have drivers for Windows (Xen Windows GplPv).  And ESX also does it 
for the network driver and the disk driver.
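
To be fair to VirtualBox, turning on its paravirtualized network driver
is a one-liner, assuming the guest has virtio support (Linux guests do;
the VM name here is an example):

    # use the virtio-net paravirtualized NIC for adapter 1
    VBoxManage modifyvm "ubuntu-guest" --nictype1 virtio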

If you're going to run hundreds of sessions on a single server, then
you need hardware-assisted paravirtualization drivers.  Otherwise, if 
you just need a few guests, and they aren't going to be doing much, then 
it makes no difference what virtualizer you're using on a 64-bit system 
with hardware virtualization instructions in the CPU, since
they will all use the CPU for the virtualization.
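
(An easy way to check for those instructions - VMX for Intel, SVM for
AMD - on either kind of host:

    # FreeBSD: the flag shows up in the boot-time CPU feature list
    grep -iE 'vmx|svm' /var/run/dmesg.boot
    # Linux:
    grep -cE 'vmx|svm' /proc/cpuinfo
)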

Ted


