Date:      Sat, 04 May 2024 18:06:59 +0000
From:      bugzilla-noreply@freebsd.org
To:        virtualization@FreeBSD.org
Subject:   [Bug 278058] Simultaneous use of Bhyve AND vnet on network PCI devices causes network failure
Message-ID:  <bug-278058-27103-2fBbBhvPTk@https.bugs.freebsd.org/bugzilla/>
In-Reply-To: <bug-278058-27103@https.bugs.freebsd.org/bugzilla/>
References:  <bug-278058-27103@https.bugs.freebsd.org/bugzilla/>


https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=278058

--- Comment #1 from Mark McBride <mark@markmcb.com> ---
I listened to the bhyve production user call [1] and noticed people still
speculating that bhyve+vnet SR-IOV issues might be PCI card specific. To
clarify, I can reproduce this issue 100% of the time on Chelsio, Intel, and
Mellanox NICs. I have reproduced it on two Supermicro motherboards (X12STH-F,
X11SSM-F) with 3 different Intel CPUs (Xeon E-2388G, Xeon E-2324G, Xeon E3-1275
v6). 

So if hardware is a consideration, the only common thread for me is Supermicro
motherboards and Intel Xeon CPUs.

It's worth noting again that if bhyve alone is used for passthrough, I have no
issues at all. It's only when the on-metal FreeBSD host passes VFs to bhyve AND
vnet jails that I observe issues.

I currently have 8 network VFs and 1 iGPU passed to 4 bhyve instances with no
problems. My workaround is to use vnet+epair for any jail networking in the
host (instead of vnet+VF).
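
For anyone wanting to try the same workaround, a minimal sketch of the
vnet+epair approach might look like the following. Interface, bridge, and
jail names here are illustrative, not taken from my actual setup:

```sh
# Host side: create an epair and bridge its "a" end to the physical NIC
# (em0 is a placeholder for whatever interface the host uses).
ifconfig epair0 create
ifconfig bridge0 create
ifconfig bridge0 addm em0 addm epair0a up
ifconfig epair0a up

# /etc/jail.conf fragment: give the jail the "b" end of the epair
# instead of an SR-IOV VF.
# myjail {
#     vnet;
#     vnet.interface = "epair0b";
#     exec.start = "/bin/sh /etc/rc";
# }
```

The jail then configures epair0b like any ordinary interface; no VF is
consumed by the jail, so bhyve keeps exclusive use of the VFs.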

1. Bhyve Production User Call 2024-04-25
https://www.youtube.com/watch?v=gGzpLLJTHS0

-- 
You are receiving this mail because:
You are the assignee for the bug.
