Date:      Thu, 8 Jun 2017 05:35:26 -0700
From:      Anish <akgupt3@gmail.com>
To:        Harry Schmalzbauer <freebsd@omnilan.de>
Cc:        "freebsd-virtualization@freebsd.org" <freebsd-virtualization@freebsd.org>
Subject:   Re: PCIe passthrough really that expensive?
Message-ID:  <CALnRwMRst1d_O_ix-_JaS=tH8=dPtNNkDo9WyzRH1_nBi1N6zA@mail.gmail.com>
In-Reply-To: <59383F5C.8020801@omnilan.de>
References:  <59383F5C.8020801@omnilan.de>

Hi Harry,
>I thought I'd save these expensive VM_Exits by using the passthru path.
>Completely wrong, is it?

It depends on which processor you are using. For example, APICv,
introduced with Ivy Bridge, enables a hardware-assisted local APIC
rather than a software-emulated one; bhyve supports it on Intel
processors.

Intel Broadwell introduced posted interrupts, which allow interrupts
from passthrough devices to be delivered to the guest directly,
bypassing the hypervisor [2]. Interrupts from emulated devices still go
through the hypervisor.

Which processor are you using for this performance benchmarking? You
can verify these capabilities via the hw.vmm.vmx sysctl tree.
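
For reference, a quick check on the host looks something like this (the
exact OID names below are from memory and may differ slightly between
FreeBSD versions):

  # dump the whole VT-x capability tree
  sysctl hw.vmm.vmx

  # the two capabilities relevant here: APICv and posted interrupts
  sysctl hw.vmm.vmx.cap.virtual_interrupt_delivery
  sysctl hw.vmm.vmx.cap.posted_interrupts

If those report 1, the hardware support is present and bhyve can make
use of it.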

Can you run a simple experiment and assign the pptdev interrupts to a
core that is not running the guest/vCPU? This will reduce the number of
VM exits on the vCPU, which we know are expensive.
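
Roughly like this (a sketch only; the IRQ number, CPU numbers and VM
name below are placeholders for your setup):

  # find the MSI/MSI-X vector(s) of the passed-through NIC
  # (they should show up on the host, attributed to the ppt device)
  vmstat -i

  # bind that interrupt to a host core the guest does not run on,
  # e.g. IRQ 264 to CPU 3
  cpuset -l 3 -x 264

  # pin the guest vCPU(s) elsewhere when starting bhyve,
  # e.g. vCPU 0 to host CPU 1
  bhyve -p 0:1 ... vmname

  # compare the exit counters before and after
  bhyvectl --vm=vmname --get-stats | grep -i exit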

Regards,
Anish




On Wed, Jun 7, 2017 at 11:01 AM, Harry Schmalzbauer <freebsd@omnilan.de>
wrote:

>  Hello,
>
> some might have noticed my numerous posts recently, mainly in
> freebsd-net@, but all around the same story – replacing ESXi. So I hope
> nobody minds if I ask for help again to alleviate some of my knowledge
> deficiencies about PCIePassThrough.
> As a last resort for special VMs, I have always used dedicated NICs via
> PCIePassThrough.
> But with bhyve (besides other undiscovered strange side effects) I don't
> understand the results utilizing bhyve-passthru.
>
> Simple test: Copy iso image from NFSv4 mount via 1GbE (to null).
>
> Host, using if_em (Hartwell): 4-8k irqs/s (8k @ MTU 1500), system idle
> ~99-100%.
> Passing the same Hartwell device to the guest, which runs the identical
> FreeBSD version as the host, I see 2x8k irqs/s, MTU independent, and
> only 80% idle, while almost all cycles are spent in Sys (vmm).
> Running the same guest with if_bridge(4)-vtnet(4) or vale(4)-vtnet(4)
> delivers identical results: about 80% attainable throughput, only 80%
> idle cycles.
>
> So interrupts triggered by PCI devices, which are controlled via
> bhyve-passthru, are as expensive as interrupts triggered by emulated
> devices?
> I thought I'd save these expensive VM_Exits by using the passthru path.
> Completely wrong, is it?
>
> I have never done authoritative ESXi measurements, but I remember that
> there was a significant saving when using VMDirectPath. Big enough that
> I never felt the need to measure. Is there any implementation
> difference? Some kind of intermediate interrupt moderation, maybe?
>
> Thanks for any hints/links,
>
> -harry
> _______________________________________________
> freebsd-virtualization@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-virtualization
> To unsubscribe, send any mail to "freebsd-virtualization-
> unsubscribe@freebsd.org"


