From: Harry Schmalzbauer <freebsd@omnilan.de>
Organization: OmniLAN
Date: Wed, 07 Jun 2017 20:01:00 +0200
To: freebsd-virtualization@freebsd.org
Message-ID: <59383F5C.8020801@omnilan.de>
Subject: PCIe passthrough really that expensive?
Hello,

some might have noticed my numerous posts recently, mainly in freebsd-net@, but all around the same story – replacing ESXi. So I hope nobody minds if I ask for help again to alleviate some of my knowledge deficiencies about PCIe passthrough.

As a last resort for special VMs, I have always used dedicated NICs via PCIe passthrough. But with bhyve (besides other undiscovered strange side effects) I don't understand the results I get when utilizing bhyve-passthru.

Simple test: copy an ISO image from an NFSv4 mount via 1GbE (to null).

Host, using if_em (Hartwell): 4-8 kirqs/s (8 @ MTU 1500), system ~99-100% idle.

Passing this same Hartwell device to the guest, which runs the identical FreeBSD version as the host, I see 2x8 kirqs/s, independent of MTU, and only 80% idle, while almost all cycles are spent in Sys (vmm).

Running the same guest with if_bridge(4)-vtnet(4) or vale(4)-vtnet(4) delivers identical results: about 80% attainable throughput, only 80% idle cycles.

So interrupts triggered by PCI devices which are controlled via bhyve-passthru are as expensive as interrupts triggered by emulated devices? I thought I'd save these expensive VM exits by using the passthru path. Completely wrong, is it?

I have never done authoritative ESXi measurements, but I remember that there was a significant saving when using VMDirectPath – big enough that I never felt the need to measure.
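For anyone who wants to reproduce numbers like the above, a rough sketch of the kind of commands involved on FreeBSD, assuming a bhyve guest named `testvm` (the name and the `em0` interface are examples, adjust to your setup):

```shell
# Per-device interrupt rates; run on the host and again inside the guest
# while the copy is in progress (hartwell NICs attach as em(4)):
vmstat -i | grep -i em0

# Idle vs. Sys split during the transfer (batch mode, 2s interval):
top -SIb -d 2 | grep -E 'CPU|idle'

# VM-exit counters for the guest, to see where the vmm(4) time goes:
bhyvectl --vm=testvm --get-stats | grep -i exit
```

Comparing the exit counters between a passthru run and a vtnet run should show whether the passed-through device really avoids device-emulation exits, or whether interrupt delivery itself is forcing exits either way.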
Is there any implementation difference? Some kind of intermediate interrupt moderation, maybe?

Thanks for any hints/links,

-harry