Date: Tue, 28 Jan 2014 18:30:26 +0100
From: Andrea Brancatelli <abrancatelli@schema31.it>
To: Peter Grehan <grehan@freebsd.org>
Cc: "freebsd-virtualization@freebsd.org" <freebsd-virtualization@freebsd.org>
Subject: Re: BHyVe - ESXi comparison
Message-ID: <CADfWLemRZq233Rd1d5r=r6LGkTMw1aVm9wGMh1g=m5VghQ2gTA@mail.gmail.com>
In-Reply-To: <52E7D666.30503@freebsd.org>
References: <CADfWLe=zOc2CYRXf8ZuG4uZqN%2BMBck4y1JoDcmrX--JqAgDSQw@mail.gmail.com> <52E7D666.30503@freebsd.org>
Hello Peter,

unfortunately we've been a bit sloppy in tracking the time output, because initially this was just an internal test, so we don't have the details. We're setting up a new round of tests that we'll run tomorrow; we'll track user/system/real more precisely, and I will also publish a graph with the three values stacked (see the command sketch appended below).

Hyperthreading should hopefully be enabled on the host; frankly, I didn't check, but I will tomorrow.

KVM and QEMU are a bit out of our scope, so we didn't have plans for that. If I can find some spare time, we'll try.

On Tue, Jan 28, 2014 at 5:10 PM, Peter Grehan <grehan@freebsd.org> wrote:

> Hi Andrea,
>
>> We did a very rough comparison between BHyVe and VMWare ESXi. Maybe
>> you want to give it a read and let me know if I did write a bunch of
>> sh!t :-)
>
> Looks good to me :) Thanks for running the tests.
>
> Would you be able to list the command options you used with bhyve when
> running these tests?
>
>> What I couldn't really understand (but that's something not related
>> to bhyve or VMWare) is how a multiprocessor machine is slower than a
>> single-processor machine in doing the compilation... any idea?
>
> Is hyper-threading enabled on your system? If not, then with a host only
> having 2 CPUs and a 2 vCPU guest, there isn't as much opportunity to
> overlap host I/O threads with vCPU threads.
>
> It would be interesting to see your "time" results when running bhyve to
> show %user/%system etc. - that may give an indication of how much time is
> spent on 'overhead' CPU usage as opposed to pure vCPU usage.
>
>> 20 VMs - 2 CPUs - 2GB RAM
>
> Interesting result, to say the least :)
>
> I'll try and repro this and see if it's something simple. At first guess
> I'd say it's the classic 'lock-holder-preemption' issue that the ESXi
> scheduler has a lot of smarts to avoid.
>
> Another interesting test would be Qemu/KVM VMs on Linux to see if it has
> the same issue.
>
> later,
>
> Peter.

--
Andrea Brancatelli
Schema 31 S.r.l. - Socio Unico
Responsabile IT
ROMA - FIRENZE - PALERMO ITALY
Tel: +39 06.98.358.472
Cell: +39 331.2488468
Fax: +39 055.71.880.466
Società del Gruppo SC31 ITALIA
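P.S. Here is a rough sketch of the kind of invocation and measurement we have in mind for tomorrow's run. The tap device, the disk image path and the "testvm" name are placeholders, and the exact flags may need adjusting for the specific host; this just follows the usual bhyveload(8)/bhyve(8) pattern for a 2 vCPU / 2GB FreeBSD guest:

    # load the FreeBSD guest (memory size must match the bhyve run)
    bhyveload -m 2G -d /vms/testvm/disk.img testvm

    # on the host: wrap the whole guest run in time(1) so the user/sys of the
    # bhyve process can be compared against the wall-clock time of the build
    /usr/bin/time -l bhyve -c 2 -m 2G -A -H -P \
        -s 0:0,hostbridge -s 1:0,lpc \
        -s 2:0,virtio-net,tap0 \
        -s 3:0,virtio-blk,/vms/testvm/disk.img \
        -l com1,stdio testvm

    # inside the guest: time the compilation itself (run from /usr/src)
    /usr/bin/time make -j2 buildworld

    # clean up the VM instance afterwards
    bhyvectl --destroy --vm=testvm

The idea is that comparing user+sys of the bhyve process on the host against the real time reported inside the guest should give an indication of how much CPU goes to overhead rather than to the vCPUs themselves, which is exactly the number you asked about.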