From owner-freebsd-virtualization@freebsd.org Tue Oct 30 01:10:12 2018
From: Dustin Marquess <dmarquess@gmail.com>
Date: Mon, 29 Oct 2018 20:09:59 -0500
Subject: Re: bhyve win-guest benchmark comparing
To: freebsd@omnilan.de
Cc: FreeBSD virtualization
List-Id: "Discussion of various virtualization techniques FreeBSD supports."

It would be interesting to test running it under Xen with FreeBSD as
the dom0.

-Dustin

On Sat, Oct 27, 2018 at 1:04 PM Harry Schmalzbauer wrote:

> Am 22.10.2018 um 13:26 schrieb Harry Schmalzbauer:
> …
> > Test runs:
> > Each hypervisor had only the one benchmark guest running; no other
> > tasks/guests were active besides the system's native standard
> > processes.
> > Since the time between powering up the guest and finishing logon
> > differed notably (~5s vs. ~20s) from one host to the other, I did a
> > quick synthetic I/O test beforehand.
> > I'm using IOmeter, since heise.de published a great test pattern
> > called IOmix – about 18 years ago, I guess.  This access pattern has
> > always reflected system performance for human computer usage with
> > non-calculation-centric applications very well, and it is still my
> > favourite, even though throughput and latency have changed by some
> > orders of magnitude during the last decade.  (I have also defined a
> > job for "fio" which mimics IOmix and shows reasonably comparable
> > results, but I still prefer IOmeter for homogeneous I/O
> > benchmarking.)
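> > Purely as an illustration, a mixed-pattern fio job along those lines
> > could look roughly like this – the read/write ratio, block-size split
> > and target device below are assumptions, not the actual IOmix
> > definition:
> >
> >   [global]
> >   # async engine inside the Windows guest; posixaio/libaio elsewhere
> >   ioengine=windowsaio
> >   # bypass the guest page cache
> >   direct=1
> >   # run for a fixed interval instead of a fixed amount of data
> >   time_based=1
> >   runtime=120
> >   # hypothetical raw test disk as seen by the guest
> >   filename=\\.\PhysicalDrive1
> >
> >   [iomix-like]
> >   # mixed random reads/writes, roughly 80/20
> >   rw=randrw
> >   rwmixread=80
> >   # mix of small and larger block sizes
> >   bssplit=4k/60:16k/20:64k/15:256k/5
> >   iodepth=8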
> > The result is about a factor of 7 :-(
> > ~3800 iops & 69 MB/s (guest CPU usage: 42% IOmeter + 12% irq)
> > vs.
> > ~29000 iops & 530 MB/s (guest CPU usage: 11% IOmeter + 19% irq)
> >
> > [With a debug kernel and debug malloc the numbers are 3000 iops &
> > 56 MB/s; virtio-blk instead of ahci-hd results in 5660 iops &
> > 104 MB/s with a non-debug kernel – much better, but with even higher
> > CPU load and still a factor of 4 slower.]
> >
> > What I don't understand is why the IOmeter process differs that much
> > in CPU utilization.  It's the same binary on the same (guest) OS
> > with the same OS driver and the same underlying hardware – "just"
> > the AHCI emulation and the VMM differ...
> >
> > Unfortunately, the picture for virtio-net vs. vmxnet3 is similarly
> > sad.  Copying a single 5 GB file from a CIFS share to the DB-ssd
> > results in 100% guest CPU usage, of which 40% are irqs, and the
> > throughput maxes out at ~40 MB/s.
> > When copying the same file from the same source with the same guest
> > on the same host, but with the host booted into ESXi, there's 20%
> > guest CPU usage while transferring 111 MB/s – the GbE uplink limit.
> >
> > These synthetic benchmarks explain very well the perceptible
> > difference when using a guest on either of the two hypervisors, but
> …
>
> To add an additional and rather surprising result, at least for me:
>
> VirtualBox provides
> 'VBoxManage internalcommands createrawvmdk -filename
> "testbench_da0.vmdk" -rawdisk /dev/da0'
>
> so I could use exactly the same test setup as for ESXi and bhyve.
> FreeBSD VirtualBox (running on the same host installation as bhyve)
> performed quite well, although it doesn't survive the IOmix benchmark
> run when "testbench_da0.vmdk" (the "raw" SSD RAID-0 array) is hooked
> up to the emulated SATA controller.
> Connected to the emulated SAS controller (LSI1068), however, it runs
> without problems and delivers 9600 iops @ 185 MB/s with 1% IOmeter +
> 7% irq CPU utilization (yes, 1% vs. 42% IOmeter load).
> Still far away from what ESXi provides, but almost double the
> performance of virtio-blk under bhyve, and, most importantly, with
> much less load (host and guest show exactly the same low values, as
> opposed to the very high loads shown on host and guest with
> bhyve/virtio-blk).
> The HDTune random access benchmark also shows the factor of 2,
> consistently across all block sizes.
>
> VirtualBox's virtio-net setup gives ~100 MB/s with peaks at 111 MB/s
> and ~40% CPU load.
> The guest uses the same virtio-net driver as with bhyve, while the
> backend of VirtualBox's virtio-net is vboxnetflt (utilizing netgraph
> and vboxnetadp.ko) instead of tap(4).
> So not only is the I/O efficiency remarkably better (lower throughput,
> but also much lower CPU utilization), the network performance is as
> well.
> Even low-bandwidth RDP sessions via GbE LAN suffer from micro-hangs
> under bhyve and virtio-net, and 40 MB/s transfers cause 100% CPU load
> on bhyve – both runs had exactly the same Windows virtio-net driver in
> use (RedHat 141).
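> For reference, such a test guest is typically wired up on the bhyve
> host roughly as follows – the NIC/tap/bridge names, slot numbers and
> CPU/memory sizes are placeholders, not necessarily the exact setup
> used for these runs:
>
>   # host side: create a tap backend and bridge it to the GbE uplink
>   ifconfig tap0 create
>   ifconfig bridge0 create
>   ifconfig bridge0 addm em0 addm tap0 up
>
>   # UEFI Windows guest, raw SSD array attached via ahci-hd
>   # (swap the disk slot for "-s 3,virtio-blk,/dev/da0" to compare
>   #  virtio-blk, as discussed above)
>   bhyve -c 4 -m 8G -H -w \
>     -s 0,hostbridge \
>     -s 3,ahci-hd,/dev/da0 \
>     -s 5,virtio-net,tap0 \
>     -s 29,fbuf,tcp=0.0.0.0:5900,wait \
>     -s 30,xhci,tablet \
>     -s 31,lpc -l com1,stdio \
>     -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
>     testbench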
> Conclusion: VirtualBox vs. ESXi shows an efficiency factor of roughly
> 0.5, while bhyve vs. ESXi shows an overall efficiency factor of
> roughly 0.25.
> I tried to provide a test environment with the shortest hardware paths
> possible.  At least the benchmarks were 100% reproducible with the
> same binaries.
>
> So I'm really interested if
> …
> > Are these (emulation(-only?) related, I guess) performance issues
> > well known?  I mean, does somebody know what needs to be done in
> > which area in order to catch up with the other results, so that it's
> > just a matter of time/resources?
> > Or are these results surprising, so that extensive analysis must be
> > done before anybody can tell how to fix the I/O limitations?
> >
> > Is the root cause of the problematically low virtio-net throughput
> > perhaps the same as for the disk I/O limits?  Both really hurt in my
> > use case, and the host is not idling in proportion – it even shows
> > higher load with lower results.  So even if the lower
> > user-experience performance were considered tolerable, the
> > guest-per-host density would only be half as high.
> > Thanks,
> > -harry
>
> _______________________________________________
> freebsd-virtualization@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-virtualization
> To unsubscribe, send any mail to
> "freebsd-virtualization-unsubscribe@freebsd.org"