From: bugzilla-noreply@freebsd.org
To: virtualization@FreeBSD.org
Subject: [Bug 229824] Fatal trap 1 when resuming from S3 with a VirtualBox VM running
Date: Wed, 08 Aug 2018 18:10:32 +0000
List-Id: "Discussion of various virtualization techniques FreeBSD supports."
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=229824

John Baldwin changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |jhb@FreeBSD.org

--- Comment #1 from John Baldwin ---
FreeBSD doesn't have a way to let external hypervisors like vbox work across
suspend and resume. I did add a hook for bhyve in
https://svnweb.freebsd.org/base?view=revision&revision=259782. We would need
something similar. The same issue matters for permitting multiple hypervisors
to be active at the same time (e.g. you can't run both bhyve and vbox at the
same time currently). I had been thinking of adding a kind of hypervisor
framework to let hypervisors allocate the VMX region and then permit
associating it with a given process, so that you could do the right
vmxon/vmxoff during context switches. Having that would also allow us to more
cleanly handle suspend/resume for arbitrary hypervisors.

One thing you might be able to do for now is change the vbox driver to set the
same vmm_resume_p pointer that bhyve's vmm.ko sets during MOD_LOAD to a
function that reinvokes vmxon with the right address on each CPU during
resume. Probably both bhyve and vbox should also fail to load in MOD_LOAD if
that pointer is already non-NULL, which would enforce that only one could be
used at a time.

-- 
You are receiving this mail because:
You are the assignee for the bug.
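
[Editor's sketch] A minimal illustration of the suggestion in comment #1,
assuming the amd64 MD code exports the vmm_resume_p hook added in r259782
(this is the hook bhyve's vmm.ko sets; the module name, hv_resume(), and the
commented-out per-CPU VMXON call below are hypothetical placeholders, not
actual vbox driver code):

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/errno.h>
	#include <sys/kernel.h>
	#include <sys/module.h>

	/* Hook called on each CPU during resume; see r259782. */
	extern void (*vmm_resume_p)(void);

	static void
	hv_resume(void)
	{
		/*
		 * Re-enter VMX operation on this CPU before any guest
		 * state is touched.  The VMXON region address is
		 * per-CPU bookkeeping the driver would already have;
		 * the call is elided here because it depends on that
		 * driver-internal state, e.g.:
		 *
		 *	vmxon(hv_vmxon_region(curcpu));
		 */
	}

	static int
	hv_modevent(module_t mod, int type, void *data)
	{
		switch (type) {
		case MOD_LOAD:
			/*
			 * Fail to load if another hypervisor already
			 * claimed the hook; this is the "fail if
			 * non-NULL" check that enforces one active
			 * hypervisor at a time.
			 */
			if (vmm_resume_p != NULL)
				return (EBUSY);
			vmm_resume_p = hv_resume;
			return (0);
		case MOD_UNLOAD:
			vmm_resume_p = NULL;
			return (0);
		default:
			return (EOPNOTSUPP);
		}
	}

	static moduledata_t hv_mod = { "hv_example", hv_modevent, NULL };
	DECLARE_MODULE(hv_example, hv_mod, SI_SUB_DRIVERS, SI_ORDER_ANY);

The EBUSY check at MOD_LOAD is the mutual-exclusion idea from the last
paragraph of the comment: whichever hypervisor loads first owns the hook,
and a second one refuses to load.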