From: Roger Pau Monné <royger@FreeBSD.org>
Date: Wed, 31 Oct 2018 08:34:47 +0000 (UTC)
To: ports-committers@freebsd.org, svn-ports-all@freebsd.org, svn-ports-head@freebsd.org
Subject: svn commit: r483559 - in head/emulators/xen-kernel411: . files
Author: royger (src committer)
Date: Wed Oct 31 08:34:47 2018
New Revision: 483559
URL: https://svnweb.freebsd.org/changeset/ports/483559

Log:
  xen: add XSA-278 fix

Added:
  head/emulators/xen-kernel411/files/xsa278-4.11.patch   (contents, props changed)
Modified:
  head/emulators/xen-kernel411/Makefile

Modified: head/emulators/xen-kernel411/Makefile
==============================================================================
--- head/emulators/xen-kernel411/Makefile	Wed Oct 31 07:56:43 2018	(r483558)
+++ head/emulators/xen-kernel411/Makefile	Wed Oct 31 08:34:47 2018	(r483559)
@@ -2,7 +2,7 @@
 
 PORTNAME=	xen
 PORTVERSION=	4.11.0
-PORTREVISION=	1
+PORTREVISION=	2
 CATEGORIES=	emulators
 MASTER_SITES=	http://downloads.xenproject.org/release/xen/${PORTVERSION}/
 PKGNAMESUFFIX=	-kernel411
@@ -90,6 +90,8 @@ EXTRA_PATCHES+=	${FILESDIR}/0001-xen-Port-the-array_in
 		${FILESDIR}/0039-x86-spec-ctrl-Introduce-an-option-to-control-L1D_FLU.patch:-p1 \
 		${FILESDIR}/0040-x86-Make-spec-ctrl-no-a-global-disable-of-all-mitiga.patch:-p1 \
 		${FILESDIR}/0042-x86-write-to-correct-variable-in-parse_pv_l1tf.patch:-p1
+# XSA-278: x86: Nested VT-x usable even when disabled
+EXTRA_PATCHES+=	${FILESDIR}/xsa278-4.11.patch:-p1
 
 .include <bsd.port.mk>

Added: head/emulators/xen-kernel411/files/xsa278-4.11.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/emulators/xen-kernel411/files/xsa278-4.11.patch	Wed Oct 31 08:34:47 2018	(r483559)
@@ -0,0 +1,326 @@
+From: Andrew Cooper
+Subject: x86/vvmx: Disallow the use of VT-x instructions when nested virt is disabled
+
+c/s ac6a4500b "vvmx: set vmxon_region_pa of vcpu out of VMX operation to an
+invalid address" was a real bugfix as described, but has a very subtle bug
+which results in all VT-x instructions being usable by a guest.
+
+The toolstack constructs a guest by issuing:
+
+  XEN_DOMCTL_createdomain
+  XEN_DOMCTL_max_vcpus
+
+and optionally later, HVMOP_set_param to enable nested virt.
+
+As a result, the call to nvmx_vcpu_initialise() in hvm_vcpu_initialise()
+(which is what makes the above patch look correct during review) is actually
+dead code.  In practice, nvmx_vcpu_initialise() first gets called when nested
+virt is enabled, which is typically never.
+
+As a result, the zeroed memory of struct vcpu causes nvmx_vcpu_in_vmx() to
+return true before nested virt is enabled for the guest.
+
+Fixing the order of initialisation is a work in progress for other reasons,
+but not viable for security backports.
+
+A compounding factor is that the vmexit handlers for all instructions, other
+than VMXON, pass 0 into vmx_inst_check_privilege()'s vmxop_check parameter,
+which skips the CR4.VMXE check.  (This is one of many reasons why nested virt
+isn't a supported feature yet.)
+
+However, the overall result is that when nested virt is not enabled by the
+toolstack (i.e. the default configuration for all production guests), the VT-x
+instructions (other than VMXON) are actually usable, and Xen very quickly
+falls over the fact that the nvmx structure is uninitialised.
+
+In order to fail safe in the supported case, re-implement all the VT-x
+instruction handling using a single function with a common prologue, covering
+all the checks which should cause #UD or #GP faults.  This deliberately
+doesn't use any state from the nvmx structure, in case there are other lurking
+issues.
+
+This is XSA-278
+
+Reported-by: Sergey Dyasli
+Signed-off-by: Andrew Cooper
+Reviewed-by: Sergey Dyasli
+
+diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
+index a6415f0..a4d2829 100644
+--- a/xen/arch/x86/hvm/vmx/vmx.c
++++ b/xen/arch/x86/hvm/vmx/vmx.c
+@@ -3982,57 +3982,17 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
+         break;
+
+     case EXIT_REASON_VMXOFF:
+-        if ( nvmx_handle_vmxoff(regs) == X86EMUL_OKAY )
+-            update_guest_eip();
+-        break;
+-
+     case EXIT_REASON_VMXON:
+-        if ( nvmx_handle_vmxon(regs) == X86EMUL_OKAY )
+-            update_guest_eip();
+-        break;
+-
+     case EXIT_REASON_VMCLEAR:
+-        if ( nvmx_handle_vmclear(regs) == X86EMUL_OKAY )
+-            update_guest_eip();
+-        break;
+-
+     case EXIT_REASON_VMPTRLD:
+-        if ( nvmx_handle_vmptrld(regs) == X86EMUL_OKAY )
+-            update_guest_eip();
+-        break;
+-
+     case EXIT_REASON_VMPTRST:
+-        if ( nvmx_handle_vmptrst(regs) == X86EMUL_OKAY )
+-            update_guest_eip();
+-        break;
+-
+     case EXIT_REASON_VMREAD:
+-        if ( nvmx_handle_vmread(regs) == X86EMUL_OKAY )
+-            update_guest_eip();
+-        break;
+-
+     case EXIT_REASON_VMWRITE:
+-        if ( nvmx_handle_vmwrite(regs) == X86EMUL_OKAY )
+-            update_guest_eip();
+-        break;
+-
+     case EXIT_REASON_VMLAUNCH:
+-        if ( nvmx_handle_vmlaunch(regs) == X86EMUL_OKAY )
+-            update_guest_eip();
+-        break;
+-
+     case EXIT_REASON_VMRESUME:
+-        if ( nvmx_handle_vmresume(regs) == X86EMUL_OKAY )
+-            update_guest_eip();
+-        break;
+-
+     case EXIT_REASON_INVEPT:
+-        if ( nvmx_handle_invept(regs) == X86EMUL_OKAY )
+-            update_guest_eip();
+-        break;
+-
+     case EXIT_REASON_INVVPID:
+-        if ( nvmx_handle_invvpid(regs) == X86EMUL_OKAY )
++        if ( nvmx_handle_vmx_insn(regs, exit_reason) == X86EMUL_OKAY )
+             update_guest_eip();
+         break;
+
+diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
+index e97db33..88cb58c 100644
+--- a/xen/arch/x86/hvm/vmx/vvmx.c
++++ b/xen/arch/x86/hvm/vmx/vvmx.c
+@@ -1470,7 +1470,7 @@ void nvmx_switch_guest(void)
+  * VMX instructions handling
+  */
+
+-int nvmx_handle_vmxon(struct cpu_user_regs *regs)
++static int nvmx_handle_vmxon(struct cpu_user_regs *regs)
+ {
+     struct vcpu *v=current;
+     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+@@ -1522,7 +1522,7 @@ int nvmx_handle_vmxon(struct cpu_user_regs *regs)
+     return X86EMUL_OKAY;
+ }
+
+-int nvmx_handle_vmxoff(struct cpu_user_regs *regs)
++static int nvmx_handle_vmxoff(struct cpu_user_regs *regs)
+ {
+     struct vcpu *v=current;
+     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+@@ -1611,7 +1611,7 @@ static int nvmx_vmresume(struct vcpu *v, struct cpu_user_regs *regs)
+     return X86EMUL_OKAY;
+ }
+
+-int nvmx_handle_vmresume(struct cpu_user_regs *regs)
++static int nvmx_handle_vmresume(struct cpu_user_regs *regs)
+ {
+     bool_t launched;
+     struct vcpu *v = current;
+@@ -1645,7 +1645,7 @@ int nvmx_handle_vmresume(struct cpu_user_regs *regs)
+     return nvmx_vmresume(v,regs);
+ }
+
+-int nvmx_handle_vmlaunch(struct cpu_user_regs *regs)
++static int nvmx_handle_vmlaunch(struct cpu_user_regs *regs)
+ {
+     bool_t launched;
+     struct vcpu *v = current;
+@@ -1688,7 +1688,7 @@ int nvmx_handle_vmlaunch(struct cpu_user_regs *regs)
+     return rc;
+ }
+
+-int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
++static int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
+ {
+     struct vcpu *v = current;
+     struct vmx_inst_decoded decode;
+@@ -1759,7 +1759,7 @@ int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
+     return X86EMUL_OKAY;
+ }
+
+-int nvmx_handle_vmptrst(struct cpu_user_regs *regs)
++static int nvmx_handle_vmptrst(struct cpu_user_regs *regs)
+ {
+     struct vcpu *v = current;
+     struct vmx_inst_decoded decode;
+@@ -1784,7 +1784,7 @@ int nvmx_handle_vmptrst(struct cpu_user_regs *regs)
+     return X86EMUL_OKAY;
+ }
+
+-int nvmx_handle_vmclear(struct cpu_user_regs *regs)
++static int nvmx_handle_vmclear(struct cpu_user_regs *regs)
+ {
+     struct vcpu *v = current;
+     struct vmx_inst_decoded decode;
+@@ -1836,7 +1836,7 @@ int nvmx_handle_vmclear(struct cpu_user_regs *regs)
+     return X86EMUL_OKAY;
+ }
+
+-int nvmx_handle_vmread(struct cpu_user_regs *regs)
++static int nvmx_handle_vmread(struct cpu_user_regs *regs)
+ {
+     struct vcpu *v = current;
+     struct vmx_inst_decoded decode;
+@@ -1878,7 +1878,7 @@ int nvmx_handle_vmread(struct cpu_user_regs *regs)
+     return X86EMUL_OKAY;
+ }
+
+-int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
++static int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
+ {
+     struct vcpu *v = current;
+     struct vmx_inst_decoded decode;
+@@ -1926,7 +1926,7 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
+     return X86EMUL_OKAY;
+ }
+
+-int nvmx_handle_invept(struct cpu_user_regs *regs)
++static int nvmx_handle_invept(struct cpu_user_regs *regs)
+ {
+     struct vmx_inst_decoded decode;
+     unsigned long eptp;
+@@ -1954,7 +1954,7 @@ int nvmx_handle_invept(struct cpu_user_regs *regs)
+     return X86EMUL_OKAY;
+ }
+
+-int nvmx_handle_invvpid(struct cpu_user_regs *regs)
++static int nvmx_handle_invvpid(struct cpu_user_regs *regs)
+ {
+     struct vmx_inst_decoded decode;
+     unsigned long vpid;
+@@ -1980,6 +1980,81 @@ int nvmx_handle_invvpid(struct cpu_user_regs *regs)
+     return X86EMUL_OKAY;
+ }
+
++int nvmx_handle_vmx_insn(struct cpu_user_regs *regs, unsigned int exit_reason)
++{
++    struct vcpu *curr = current;
++    int ret;
++
++    if ( !(curr->arch.hvm_vcpu.guest_cr[4] & X86_CR4_VMXE) ||
++         !nestedhvm_enabled(curr->domain) ||
++         (vmx_guest_x86_mode(curr) < (hvm_long_mode_active(curr) ? 8 : 2)) )
++    {
++        hvm_inject_hw_exception(TRAP_invalid_op, X86_EVENT_NO_EC);
++        return X86EMUL_EXCEPTION;
++    }
++
++    if ( vmx_get_cpl() > 0 )
++    {
++        hvm_inject_hw_exception(TRAP_gp_fault, 0);
++        return X86EMUL_EXCEPTION;
++    }
++
++    switch ( exit_reason )
++    {
++    case EXIT_REASON_VMXOFF:
++        ret = nvmx_handle_vmxoff(regs);
++        break;
++
++    case EXIT_REASON_VMXON:
++        ret = nvmx_handle_vmxon(regs);
++        break;
++
++    case EXIT_REASON_VMCLEAR:
++        ret = nvmx_handle_vmclear(regs);
++        break;
++
++    case EXIT_REASON_VMPTRLD:
++        ret = nvmx_handle_vmptrld(regs);
++        break;
++
++    case EXIT_REASON_VMPTRST:
++        ret = nvmx_handle_vmptrst(regs);
++        break;
++
++    case EXIT_REASON_VMREAD:
++        ret = nvmx_handle_vmread(regs);
++        break;
++
++    case EXIT_REASON_VMWRITE:
++        ret = nvmx_handle_vmwrite(regs);
++        break;
++
++    case EXIT_REASON_VMLAUNCH:
++        ret = nvmx_handle_vmlaunch(regs);
++        break;
++
++    case EXIT_REASON_VMRESUME:
++        ret = nvmx_handle_vmresume(regs);
++        break;
++
++    case EXIT_REASON_INVEPT:
++        ret = nvmx_handle_invept(regs);
++        break;
++
++    case EXIT_REASON_INVVPID:
++        ret = nvmx_handle_invvpid(regs);
++        break;
++
++    default:
++        ASSERT_UNREACHABLE();
++        domain_crash(curr->domain);
++        ret = X86EMUL_UNHANDLEABLE;
++        break;
++    }
++
++    return ret;
++}
++
+ #define __emul_value(enable1, default1) \
+     ((enable1 | default1) << 32 | (default1))
+
+diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
+index 9ea35eb..fc4a8d1 100644
+--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
++++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
+@@ -94,9 +94,6 @@ void nvmx_domain_relinquish_resources(struct domain *d);
+
+ bool_t nvmx_ept_enabled(struct vcpu *v);
+
+-int nvmx_handle_vmxon(struct cpu_user_regs *regs);
+-int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
+-
+ #define EPT_TRANSLATE_SUCCEED       0
+ #define EPT_TRANSLATE_VIOLATION     1
+ #define EPT_TRANSLATE_MISCONFIG     2
+@@ -191,15 +188,7 @@ enum vmx_insn_errno set_vvmcs_real_safe(const struct vcpu *, u32 encoding,
+ uint64_t get_shadow_eptp(struct vcpu *v);
+
+ void nvmx_destroy_vmcs(struct vcpu *v);
+-int nvmx_handle_vmptrld(struct cpu_user_regs *regs);
+-int nvmx_handle_vmptrst(struct cpu_user_regs *regs);
+-int nvmx_handle_vmclear(struct cpu_user_regs *regs);
+-int nvmx_handle_vmread(struct cpu_user_regs *regs);
+-int nvmx_handle_vmwrite(struct cpu_user_regs *regs);
+-int nvmx_handle_vmresume(struct cpu_user_regs *regs);
+-int nvmx_handle_vmlaunch(struct cpu_user_regs *regs);
+-int nvmx_handle_invept(struct cpu_user_regs *regs);
+-int nvmx_handle_invvpid(struct cpu_user_regs *regs);
++int nvmx_handle_vmx_insn(struct cpu_user_regs *regs, unsigned int exit_reason);
+ int nvmx_msr_read_intercept(unsigned int msr,
+                             u64 *msr_content);
+