Date: Tue, 20 Aug 2024 09:02:18 GMT
Message-Id: <202408200902.47K92I38081128@gitrepo.freebsd.org>
To: src-committers@FreeBSD.org, dev-commits-src-all@FreeBSD.org, dev-commits-src-main@FreeBSD.org
From: Andrew Turner
Subject: git: 387f878aa7af - main - arm64/vmm: Teach vmm_arm.c about VHE
List-Id: Commit messages for the main branch of the src repository
List-Archive: https://lists.freebsd.org/archives/dev-commits-src-main
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
X-Git-Committer: andrew
X-Git-Repository: src
X-Git-Refname: refs/heads/main
X-Git-Reftype: branch
X-Git-Commit: 387f878aa7afdc48cdd304a9c2f5e6806639f6f0
Auto-Submitted: auto-generated

The branch main has been updated by andrew:

URL: https://cgit.FreeBSD.org/src/commit/?id=387f878aa7afdc48cdd304a9c2f5e6806639f6f0

commit 387f878aa7afdc48cdd304a9c2f5e6806639f6f0
Author:     Andrew Turner
AuthorDate: 2024-08-19 12:43:46 +0000
Commit:     Andrew Turner
CommitDate: 2024-08-20 08:49:15 +0000

    arm64/vmm: Teach vmm_arm.c about VHE

    Most of the code is identical; however, some parts, e.g. managing EL2
    memory or setting EL2 registers, are unneeded under VHE, as the kernel
    already runs in EL2 and can manage these directly.

    Sponsored by:   Arm Ltd
    Differential Revision:  https://reviews.freebsd.org/D46076
---
 sys/arm64/vmm/vmm_arm64.c | 224 +++++++++++++++++++++++++---------------------
 sys/arm64/vmm/vmm_reset.c |   8 +-
 2 files changed, 127 insertions(+), 105 deletions(-)

diff --git a/sys/arm64/vmm/vmm_arm64.c b/sys/arm64/vmm/vmm_arm64.c
index 1b73ed019fad..3079353668e3 100644
--- a/sys/arm64/vmm/vmm_arm64.c
+++ b/sys/arm64/vmm/vmm_arm64.c
@@ -128,20 +128,6 @@ arm_setup_vectors(void *arg)
         el2_regs = arg;
         arm64_set_active_vcpu(NULL);
-        daif = intr_disable();
-
-        /*
-         * Install the temporary vectors which will be responsible for
-         * initializing the VMM when we next trap into EL2.
-         *
-         * x0: the exception vector table responsible for hypervisor
-         * initialization on the next call.
-         */
-        vmm_call_hyp(vtophys(&vmm_hyp_code));
-
-        /* Create and map the hypervisor stack */
-        stack_top = stack_hyp_va[PCPU_GET(cpuid)] + VMM_STACK_SIZE;
-
         /*
          * Configure the system control register for EL2:
          *
@@ -159,9 +145,27 @@ arm_setup_vectors(void *arg)
         sctlr_el2 |= SCTLR_EL2_WXN;
         sctlr_el2 &= ~SCTLR_EL2_EE;
 
-        /* Special call to initialize EL2 */
-        vmm_call_hyp(vmmpmap_to_ttbr0(), stack_top, el2_regs->tcr_el2,
-            sctlr_el2, el2_regs->vtcr_el2);
+        daif = intr_disable();
+
+        if (in_vhe()) {
+                WRITE_SPECIALREG(vtcr_el2, el2_regs->vtcr_el2);
+        } else {
+                /*
+                 * Install the temporary vectors which will be responsible for
+                 * initializing the VMM when we next trap into EL2.
+                 *
+                 * x0: the exception vector table responsible for hypervisor
+                 * initialization on the next call.
+                 */
+                vmm_call_hyp(vtophys(&vmm_hyp_code));
+
+                /* Create and map the hypervisor stack */
+                stack_top = stack_hyp_va[PCPU_GET(cpuid)] + VMM_STACK_SIZE;
+
+                /* Special call to initialize EL2 */
+                vmm_call_hyp(vmmpmap_to_ttbr0(), stack_top, el2_regs->tcr_el2,
+                    sctlr_el2, el2_regs->vtcr_el2);
+        }
 
         intr_restore(daif);
 }
@@ -280,10 +284,12 @@ vmmops_modinit(int ipinum)
         }
         pa_range_bits = pa_range_field >> ID_AA64MMFR0_PARange_SHIFT;
 
-        /* Initialise the EL2 MMU */
-        if (!vmmpmap_init()) {
-                printf("vmm: Failed to init the EL2 MMU\n");
-                return (ENOMEM);
+        if (!in_vhe()) {
+                /* Initialise the EL2 MMU */
+                if (!vmmpmap_init()) {
+                        printf("vmm: Failed to init the EL2 MMU\n");
+                        return (ENOMEM);
+                }
         }
 
         /* Set up the stage 2 pmap callbacks */
@@ -292,55 +298,58 @@ vmmops_modinit(int ipinum)
         pmap_stage2_invalidate_range = vmm_s2_tlbi_range;
         pmap_stage2_invalidate_all = vmm_s2_tlbi_all;
 
-        /*
-         * Create an allocator for the virtual address space used by EL2.
-         * EL2 code is identity-mapped; the allocator is used to find space for
-         * VM structures.
-         */
-        el2_mem_alloc = vmem_create("VMM EL2", 0, 0, PAGE_SIZE, 0, M_WAITOK);
-
-        /* Create the mappings for the hypervisor translation table. */
-        hyp_code_len = round_page(&vmm_hyp_code_end - &vmm_hyp_code);
-
-        /* We need an physical identity mapping for when we activate the MMU */
-        hyp_code_base = vmm_base = vtophys(&vmm_hyp_code);
-        rv = vmmpmap_enter(vmm_base, hyp_code_len, vmm_base,
-            VM_PROT_READ | VM_PROT_EXECUTE);
-        MPASS(rv);
-
-        next_hyp_va = roundup2(vmm_base + hyp_code_len, L2_SIZE);
-
-        /* Create a per-CPU hypervisor stack */
-        CPU_FOREACH(cpu) {
-                stack[cpu] = malloc(VMM_STACK_SIZE, M_HYP, M_WAITOK | M_ZERO);
-                stack_hyp_va[cpu] = next_hyp_va;
-
-                for (i = 0; i < VMM_STACK_PAGES; i++) {
-                        rv = vmmpmap_enter(stack_hyp_va[cpu] + ptoa(i),
-                            PAGE_SIZE, vtophys(stack[cpu] + ptoa(i)),
-                            VM_PROT_READ | VM_PROT_WRITE);
-                        MPASS(rv);
+        if (!in_vhe()) {
+                /*
+                 * Create an allocator for the virtual address space used by
+                 * EL2. EL2 code is identity-mapped; the allocator is used to
+                 * find space for VM structures.
+                 */
+                el2_mem_alloc = vmem_create("VMM EL2", 0, 0, PAGE_SIZE, 0,
+                    M_WAITOK);
+
+                /* Create the mappings for the hypervisor translation table. */
+                hyp_code_len = round_page(&vmm_hyp_code_end - &vmm_hyp_code);
+
+                /* We need an physical identity mapping for when we activate the MMU */
+                hyp_code_base = vmm_base = vtophys(&vmm_hyp_code);
+                rv = vmmpmap_enter(vmm_base, hyp_code_len, vmm_base,
+                    VM_PROT_READ | VM_PROT_EXECUTE);
+                MPASS(rv);
+
+                next_hyp_va = roundup2(vmm_base + hyp_code_len, L2_SIZE);
+
+                /* Create a per-CPU hypervisor stack */
+                CPU_FOREACH(cpu) {
+                        stack[cpu] = malloc(VMM_STACK_SIZE, M_HYP, M_WAITOK | M_ZERO);
+                        stack_hyp_va[cpu] = next_hyp_va;
+
+                        for (i = 0; i < VMM_STACK_PAGES; i++) {
+                                rv = vmmpmap_enter(stack_hyp_va[cpu] + ptoa(i),
+                                    PAGE_SIZE, vtophys(stack[cpu] + ptoa(i)),
+                                    VM_PROT_READ | VM_PROT_WRITE);
+                                MPASS(rv);
+                        }
+                        next_hyp_va += L2_SIZE;
+                }
-                next_hyp_va += L2_SIZE;
-        }
 
-        el2_regs.tcr_el2 = TCR_EL2_RES1;
-        el2_regs.tcr_el2 |= min(pa_range_bits << TCR_EL2_PS_SHIFT,
-            TCR_EL2_PS_52BITS);
-        el2_regs.tcr_el2 |= TCR_EL2_T0SZ(64 - EL2_VIRT_BITS);
-        el2_regs.tcr_el2 |= TCR_EL2_IRGN0_WBWA | TCR_EL2_ORGN0_WBWA;
+                el2_regs.tcr_el2 = TCR_EL2_RES1;
+                el2_regs.tcr_el2 |= min(pa_range_bits << TCR_EL2_PS_SHIFT,
+                    TCR_EL2_PS_52BITS);
+                el2_regs.tcr_el2 |= TCR_EL2_T0SZ(64 - EL2_VIRT_BITS);
+                el2_regs.tcr_el2 |= TCR_EL2_IRGN0_WBWA | TCR_EL2_ORGN0_WBWA;
 #if PAGE_SIZE == PAGE_SIZE_4K
-        el2_regs.tcr_el2 |= TCR_EL2_TG0_4K;
+                el2_regs.tcr_el2 |= TCR_EL2_TG0_4K;
 #elif PAGE_SIZE == PAGE_SIZE_16K
-        el2_regs.tcr_el2 |= TCR_EL2_TG0_16K;
+                el2_regs.tcr_el2 |= TCR_EL2_TG0_16K;
 #else
 #error Unsupported page size
 #endif
 #ifdef SMP
-        el2_regs.tcr_el2 |= TCR_EL2_SH0_IS;
+                el2_regs.tcr_el2 |= TCR_EL2_SH0_IS;
 #endif
+        }
 
-        switch (el2_regs.tcr_el2 & TCR_EL2_PS_MASK) {
+        switch (pa_range_bits << TCR_EL2_PS_SHIFT) {
         case TCR_EL2_PS_32BITS:
                 vmm_max_ipa_bits = 32;
                 break;
@@ -396,36 +405,37 @@ vmmops_modinit(int ipinum)
 
         smp_rendezvous(NULL, arm_setup_vectors, NULL, &el2_regs);
 
-        /* Add memory to the vmem allocator (checking there is space) */
-        if (vmm_base > (L2_SIZE + PAGE_SIZE)) {
-                /*
-                 * Ensure there is an L2 block before the vmm code to check
-                 * for buffer overflows on earlier data. Include the PAGE_SIZE
-                 * of the minimum we can allocate.
-                 */
-                vmm_base -= L2_SIZE + PAGE_SIZE;
-                vmm_base = rounddown2(vmm_base, L2_SIZE);
+        if (!in_vhe()) {
+                /* Add memory to the vmem allocator (checking there is space) */
+                if (vmm_base > (L2_SIZE + PAGE_SIZE)) {
+                        /*
+                         * Ensure there is an L2 block before the vmm code to check
+                         * for buffer overflows on earlier data. Include the PAGE_SIZE
+                         * of the minimum we can allocate.
+                         */
+                        vmm_base -= L2_SIZE + PAGE_SIZE;
+                        vmm_base = rounddown2(vmm_base, L2_SIZE);
+
+                        /*
+                         * Check there is memory before the vmm code to add.
+                         *
+                         * Reserve the L2 block at address 0 so NULL dereference will
+                         * raise an exception.
+                         */
+                        if (vmm_base > L2_SIZE)
+                                vmem_add(el2_mem_alloc, L2_SIZE, vmm_base - L2_SIZE,
+                                    M_WAITOK);
+                }
 
                 /*
-                 * Check there is memory before the vmm code to add.
-                 *
-                 * Reserve the L2 block at address 0 so NULL dereference will
-                 * raise an exception.
+                 * Add the memory after the stacks. There is most of an L2 block
+                 * between the last stack and the first allocation so this should
+                 * be safe without adding more padding.
                  */
-                if (vmm_base > L2_SIZE)
-                        vmem_add(el2_mem_alloc, L2_SIZE, vmm_base - L2_SIZE,
-                            M_WAITOK);
+                if (next_hyp_va < HYP_VM_MAX_ADDRESS - PAGE_SIZE)
+                        vmem_add(el2_mem_alloc, next_hyp_va,
+                            HYP_VM_MAX_ADDRESS - next_hyp_va, M_WAITOK);
         }
-
-        /*
-         * Add the memory after the stacks. There is most of an L2 block
-         * between the last stack and the first allocation so this should
-         * be safe without adding more padding.
-         */
-        if (next_hyp_va < HYP_VM_MAX_ADDRESS - PAGE_SIZE)
-                vmem_add(el2_mem_alloc, next_hyp_va,
-                    HYP_VM_MAX_ADDRESS - next_hyp_va, M_WAITOK);
-
         cnthctl_el2 = vmm_read_reg(HYP_REG_CNTHCTL);
 
         vgic_init();
@@ -439,21 +449,25 @@ vmmops_modcleanup(void)
 {
         int cpu;
 
-        smp_rendezvous(NULL, arm_teardown_vectors, NULL, NULL);
+        if (!in_vhe()) {
+                smp_rendezvous(NULL, arm_teardown_vectors, NULL, NULL);
 
-        CPU_FOREACH(cpu) {
-                vmmpmap_remove(stack_hyp_va[cpu], VMM_STACK_PAGES * PAGE_SIZE,
-                    false);
-        }
+                CPU_FOREACH(cpu) {
+                        vmmpmap_remove(stack_hyp_va[cpu],
+                            VMM_STACK_PAGES * PAGE_SIZE, false);
+                }
 
-        vmmpmap_remove(hyp_code_base, hyp_code_len, false);
+                vmmpmap_remove(hyp_code_base, hyp_code_len, false);
+        }
 
         vtimer_cleanup();
 
-        vmmpmap_fini();
+        if (!in_vhe()) {
+                vmmpmap_fini();
 
-        CPU_FOREACH(cpu)
-                free(stack[cpu], M_HYP);
+                CPU_FOREACH(cpu)
+                        free(stack[cpu], M_HYP);
+        }
 
         pmap_clean_stage2_tlbi = NULL;
         pmap_stage2_invalidate_range = NULL;
@@ -505,8 +519,9 @@ vmmops_init(struct vm *vm, pmap_t pmap)
         vtimer_vminit(hyp);
         vgic_vminit(hyp);
 
-        hyp->el2_addr = el2_map_enter((vm_offset_t)hyp, size,
-            VM_PROT_READ | VM_PROT_WRITE);
+        if (!in_vhe())
+                hyp->el2_addr = el2_map_enter((vm_offset_t)hyp, size,
+                    VM_PROT_READ | VM_PROT_WRITE);
 
         return (hyp);
 }
@@ -534,8 +549,9 @@ vmmops_vcpu_init(void *vmi, struct vcpu *vcpu1, int vcpuid)
         vtimer_cpuinit(hypctx);
         vgic_cpuinit(hypctx);
 
-        hypctx->el2_addr = el2_map_enter((vm_offset_t)hypctx, size,
-            VM_PROT_READ | VM_PROT_WRITE);
+        if (!in_vhe())
+                hypctx->el2_addr = el2_map_enter((vm_offset_t)hypctx, size,
+                    VM_PROT_READ | VM_PROT_WRITE);
 
         return (hypctx);
 }
@@ -1124,9 +1140,7 @@ vmmops_run(void *vcpui, register_t pc, pmap_t pmap, struct vm_eventinfo *evinfo)
                 vtimer_sync_hwstate(hypctx);
 
                 /*
-                 * Deactivate the stage2 pmap. vmm_pmap_clean_stage2_tlbi
-                 * depends on this meaning we activate the VM before entering
-                 * the vm again
+                 * Deactivate the stage2 pmap.
                  */
                 PCPU_SET(curvmpmap, NULL);
                 intr_restore(daif);
@@ -1179,7 +1193,8 @@ vmmops_vcpu_cleanup(void *vcpui)
         vtimer_cpucleanup(hypctx);
         vgic_cpucleanup(hypctx);
 
-        vmmpmap_remove(hypctx->el2_addr, el2_hypctx_size(), true);
+        if (!in_vhe())
+                vmmpmap_remove(hypctx->el2_addr, el2_hypctx_size(), true);
 
         free(hypctx, M_HYP);
 }
@@ -1194,7 +1209,8 @@ vmmops_cleanup(void *vmi)
 
         smp_rendezvous(NULL, arm_pcpu_vmcleanup, NULL, hyp);
 
-        vmmpmap_remove(hyp->el2_addr, el2_hyp_size(hyp->vm), true);
+        if (!in_vhe())
+                vmmpmap_remove(hyp->el2_addr, el2_hyp_size(hyp->vm), true);
 
         free(hyp, M_HYP);
 }
diff --git a/sys/arm64/vmm/vmm_reset.c b/sys/arm64/vmm/vmm_reset.c
index a929a60c9474..3195bc10dedd 100644
--- a/sys/arm64/vmm/vmm_reset.c
+++ b/sys/arm64/vmm/vmm_reset.c
@@ -136,6 +136,9 @@ reset_vm_el2_regs(void *vcpu)
          */
         el2ctx->hcr_el2 = HCR_RW | HCR_TID3 | HCR_TWI | HCR_BSU_IS | HCR_FB |
             HCR_AMO | HCR_IMO | HCR_FMO | HCR_SWIO | HCR_VM;
+        if (in_vhe()) {
+                el2ctx->hcr_el2 |= HCR_E2H;
+        }
 
         /* TODO: Trap all extensions we don't support */
         el2ctx->mdcr_el2 = 0;
@@ -166,7 +169,10 @@ reset_vm_el2_regs(void *vcpu)
          * Don't trap accesses to CPACR_EL1, trace, SVE, Advanced SIMD
          * and floating point functionality to EL2.
          */
-        el2ctx->cptr_el2 = CPTR_RES1;
+        if (in_vhe())
+                el2ctx->cptr_el2 = CPACR_FPEN_TRAP_NONE;
+        else
+                el2ctx->cptr_el2 = CPTR_RES1;
         /*
          * Disable interrupts in the guest. The guest OS will re-enable
          * them.