From: Neel Natu <neel@FreeBSD.org>
Date: Wed, 1 Jul 2015 19:46:58 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
        svn-src-stable@freebsd.org, svn-src-stable-10@freebsd.org
Subject: svn commit: r285015 - stable/10/sys/amd64/vmm/amd
Message-Id: <201507011946.t61Jkwdp021335@repo.freebsd.org>
X-SVN-Group: stable-10
List-Id: SVN commit messages for only the 10-stable src tree

Author: neel
Date: Wed Jul  1 19:46:57 2015
New Revision: 285015
URL: https://svnweb.freebsd.org/changeset/base/285015

Log:
  MFC r284712:
  Restore the host's GS.base before returning from 'svm_launch()' so the
  Dtrace FBT provider works with vmm.ko on AMD.

Modified:
  stable/10/sys/amd64/vmm/amd/svm.c
  stable/10/sys/amd64/vmm/amd/svm.h
  stable/10/sys/amd64/vmm/amd/svm_genassym.c
  stable/10/sys/amd64/vmm/amd/svm_support.S
Directory Properties:
  stable/10/   (props changed)

Modified: stable/10/sys/amd64/vmm/amd/svm.c
==============================================================================
--- stable/10/sys/amd64/vmm/amd/svm.c	Wed Jul  1 17:27:44 2015	(r285014)
+++ stable/10/sys/amd64/vmm/amd/svm.c	Wed Jul  1 19:46:57 2015	(r285015)
@@ -1916,7 +1916,6 @@ svm_vmrun(void *arg, int vcpu, register_
 	struct vlapic *vlapic;
 	struct vm *vm;
 	uint64_t vmcb_pa;
-	u_int thiscpu;
 	int handled;
 
 	svm_sc = arg;
@@ -1928,19 +1927,10 @@ svm_vmrun(void *arg, int vcpu, register_
 	vmexit = vm_exitinfo(vm, vcpu);
 	vlapic = vm_lapic(vm, vcpu);
 
-	/*
-	 * Stash 'curcpu' on the stack as 'thiscpu'.
-	 *
-	 * The per-cpu data area is not accessible until MSR_GSBASE is restored
-	 * after the #VMEXIT. Since VMRUN is executed inside a critical section
-	 * 'curcpu' and 'thiscpu' are guaranteed to identical.
-	 */
-	thiscpu = curcpu;
-
 	gctx = svm_get_guest_regctx(svm_sc, vcpu);
 	vmcb_pa = svm_sc->vcpu[vcpu].vmcb_pa;
 
-	if (vcpustate->lastcpu != thiscpu) {
+	if (vcpustate->lastcpu != curcpu) {
 		/*
 		 * Force new ASID allocation by invalidating the generation.
 		 */
@@ -1961,7 +1951,7 @@ svm_vmrun(void *arg, int vcpu, register_
 		 * This works for now but any new side-effects of vcpu
 		 * migration should take this case into account.
 		 */
-		vcpustate->lastcpu = thiscpu;
+		vcpustate->lastcpu = curcpu;
 		vmm_stat_incr(vm, vcpu, VCPU_MIGRATIONS, 1);
 	}
 
@@ -2007,14 +1997,14 @@ svm_vmrun(void *arg, int vcpu, register_
 
 		svm_inj_interrupts(svm_sc, vcpu, vlapic);
 
-		/* Activate the nested pmap on 'thiscpu' */
-		CPU_SET_ATOMIC_ACQ(thiscpu, &pmap->pm_active);
+		/* Activate the nested pmap on 'curcpu' */
+		CPU_SET_ATOMIC_ACQ(curcpu, &pmap->pm_active);
 
 		/*
 		 * Check the pmap generation and the ASID generation to
 		 * ensure that the vcpu does not use stale TLB mappings.
 		 */
-		check_asid(svm_sc, vcpu, pmap, thiscpu);
+		check_asid(svm_sc, vcpu, pmap, curcpu);
 
 		ctrl->vmcb_clean = vmcb_clean & ~vcpustate->dirty;
 		vcpustate->dirty = 0;
@@ -2022,23 +2012,9 @@ svm_vmrun(void *arg, int vcpu, register_
 
 		/* Launch Virtual Machine. */
 		VCPU_CTR1(vm, vcpu, "Resume execution at %#lx", state->rip);
-		svm_launch(vmcb_pa, gctx);
-
-		CPU_CLR_ATOMIC(thiscpu, &pmap->pm_active);
+		svm_launch(vmcb_pa, gctx, &__pcpu[curcpu]);
 
-		/*
-		 * Restore MSR_GSBASE to point to the pcpu data area.
-		 *
-		 * Note that accesses done via PCPU_GET/PCPU_SET will work
-		 * only after MSR_GSBASE is restored.
-		 *
-		 * Also note that we don't bother restoring MSR_KGSBASE
-		 * since it is not used in the kernel and will be restored
-		 * when the VMRUN ioctl returns to userspace.
-		 */
-		wrmsr(MSR_GSBASE, (uint64_t)&__pcpu[thiscpu]);
-		KASSERT(curcpu == thiscpu, ("thiscpu/curcpu (%u/%u) mismatch",
-		    thiscpu, curcpu));
+		CPU_CLR_ATOMIC(curcpu, &pmap->pm_active);
 
 		/*
 		 * The host GDTR and IDTR is saved by VMRUN and restored

Modified: stable/10/sys/amd64/vmm/amd/svm.h
==============================================================================
--- stable/10/sys/amd64/vmm/amd/svm.h	Wed Jul  1 17:27:44 2015	(r285014)
+++ stable/10/sys/amd64/vmm/amd/svm.h	Wed Jul  1 19:46:57 2015	(r285015)
@@ -29,6 +29,8 @@
 #ifndef _SVM_H_
 #define _SVM_H_
 
+struct pcpu;
+
 /*
  * Guest register state that is saved outside the VMCB.
  */
@@ -49,6 +51,6 @@ struct svm_regctx {
 	register_t	sctx_r15;
 };
 
-void svm_launch(uint64_t pa, struct svm_regctx *);
+void svm_launch(uint64_t pa, struct svm_regctx *gctx, struct pcpu *pcpu);
 
 #endif /* _SVM_H_ */

Modified: stable/10/sys/amd64/vmm/amd/svm_genassym.c
==============================================================================
--- stable/10/sys/amd64/vmm/amd/svm_genassym.c	Wed Jul  1 17:27:44 2015	(r285014)
+++ stable/10/sys/amd64/vmm/amd/svm_genassym.c	Wed Jul  1 19:46:57 2015	(r285015)
@@ -29,6 +29,7 @@ __FBSDID("$FreeBSD$");
 
 #include 
 #include 
+#include 
 
 #include "svm.h"
 
@@ -46,3 +47,4 @@ ASSYM(SCTX_R12, offsetof(struct svm_regc
 ASSYM(SCTX_R13, offsetof(struct svm_regctx, sctx_r13));
 ASSYM(SCTX_R14, offsetof(struct svm_regctx, sctx_r14));
 ASSYM(SCTX_R15, offsetof(struct svm_regctx, sctx_r15));
+ASSYM(MSR_GSBASE, MSR_GSBASE);

Modified: stable/10/sys/amd64/vmm/amd/svm_support.S
==============================================================================
--- stable/10/sys/amd64/vmm/amd/svm_support.S	Wed Jul  1 17:27:44 2015	(r285014)
+++ stable/10/sys/amd64/vmm/amd/svm_support.S	Wed Jul  1 19:46:57 2015	(r285015)
@@ -42,13 +42,17 @@
 #define VMSAVE	.byte 0x0f, 0x01, 0xdb
 
 /*
- * svm_launch(uint64_t vmcb, struct svm_regctx *gctx)
+ * svm_launch(uint64_t vmcb, struct svm_regctx *gctx, struct pcpu *pcpu)
  * %rdi: physical address of VMCB
  * %rsi: pointer to guest context
+ * %rdx: pointer to the pcpu data
  */
 ENTRY(svm_launch)
 	VENTER
 
+	/* save pointer to the pcpu data */
+	push %rdx
+
 	/*
 	 * Host register state saved across a VMRUN.
 	 *
@@ -116,6 +120,13 @@ ENTRY(svm_launch)
 	pop %r12
 	pop %rbx
 
+	/* Restore %GS.base to point to the host's pcpu data */
+	pop %rdx
+	mov %edx, %eax
+	shr $32, %rdx
+	mov $MSR_GSBASE, %ecx
+	wrmsr
+
 	VLEAVE
 	ret
END(svm_launch)
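
A note for readers following the svm_support.S hunk above: WRMSR takes the MSR number in %ecx and the 64-bit value split across %edx (high half) and %eax (low half), which is why the new epilogue pops the saved pcpu pointer into %rdx, copies the low 32 bits into %eax, and shifts the high 32 bits down before loading MSR_GSBASE and executing wrmsr. The user-space C sketch below is not part of the commit; it only demonstrates that register split. The MSR number is the architectural IA32_GS_BASE value (what FreeBSD's specialreg.h calls MSR_GSBASE), the helper show_wrmsr_operands() is an illustrative name, and the pcpu address is a made-up placeholder standing in for &__pcpu[curcpu].

    /*
     * Standalone sketch: how a 64-bit GS.base value maps onto the
     * ECX/EDX/EAX operands that the wrmsr instruction consumes.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define MSR_GSBASE 0xc0000101u          /* architectural IA32_GS_BASE */

    static void
    show_wrmsr_operands(uint32_t msr, uint64_t val)
    {
            uint32_t eax = (uint32_t)val;           /* low 32 bits  -> %eax */
            uint32_t edx = (uint32_t)(val >> 32);   /* high 32 bits -> %edx
                                                       (the 'shr $32, %rdx') */

            printf("wrmsr: ecx=%#x edx=%#x eax=%#x\n", msr, edx, eax);
    }

    int
    main(void)
    {
            /* placeholder host pcpu address (hypothetical value) */
            uint64_t pcpu_addr = 0xffffffff81e00000ULL;

            show_wrmsr_operands(MSR_GSBASE, pcpu_addr);
            return (0);
    }

Compiled and run, this prints "wrmsr: ecx=0xc0000101 edx=0xffffffff eax=0x81e00000", the same register state the assembly sequence sets up before wrmsr so that PCPU_GET/PCPU_SET, and with them DTrace FBT probes firing on svm_launch's caller, see a valid GS.base immediately after the #VMEXIT.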