Date: Wed, 6 May 2020 15:01:07 +0000 (UTC)
From: Mark Johnston <markj@FreeBSD.org>
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r360690 - head/sys/arm64/arm64
Message-ID: <202005061501.046F17UK058616@repo.freebsd.org>
Author: markj
Date: Wed May  6 15:01:06 2020
New Revision: 360690
URL: https://svnweb.freebsd.org/changeset/base/360690

Log:
  Simplify arm64's pmap_bootstrap() a bit.

  locore constructs an L2 page mapping the kernel and preloaded data
  starting at KERNBASE (the same as VM_MIN_KERNEL_ADDRESS on arm64).
  initarm() and pmap_bootstrap() use the preloaded metadata to determine
  where boot-time allocations can start.

  pmap_bootstrap() currently iterates over the L2 page to find the last
  valid entry, but doesn't do anything with the result.  Remove the loop
  and zap some now-unused local variables.

  MFC after:	2 weeks
  Sponsored by:	Juniper Networks, Klara Inc.
  Differential Revision:	https://reviews.freebsd.org/D24559

Modified:
  head/sys/arm64/arm64/pmap.c

Modified: head/sys/arm64/arm64/pmap.c
==============================================================================
--- head/sys/arm64/arm64/pmap.c	Wed May  6 11:40:32 2020	(r360689)
+++ head/sys/arm64/arm64/pmap.c	Wed May  6 15:01:06 2020	(r360690)
@@ -831,9 +831,7 @@ void
 pmap_bootstrap(vm_offset_t l0pt, vm_offset_t l1pt, vm_paddr_t kernstart,
     vm_size_t kernlen)
 {
-	u_int l1_slot, l2_slot;
-	pt_entry_t *l2;
-	vm_offset_t va, freemempos;
+	vm_offset_t freemempos;
 	vm_offset_t dpcpu, msgbufpv;
 	vm_paddr_t start_pa, pa, min_pa;
 	uint64_t kern_delta;
@@ -867,7 +865,7 @@ pmap_bootstrap(vm_offset_t l0pt, vm_offset_t l1pt, vm_
 	 * Find the minimum physical address. physmap is sorted,
 	 * but may contain empty ranges.
 	 */
-	for (i = 0; i < (physmap_idx * 2); i += 2) {
+	for (i = 0; i < physmap_idx * 2; i += 2) {
 		if (physmap[i] == physmap[i + 1])
 			continue;
 		if (physmap[i] <= min_pa)
@@ -880,38 +878,14 @@ pmap_bootstrap(vm_offset_t l0pt, vm_offset_t l1pt, vm_
 	/* Create a direct map region early so we can use it for pa -> va */
 	freemempos = pmap_bootstrap_dmap(l1pt, min_pa, freemempos);
 
-	va = KERNBASE;
 	start_pa = pa = KERNBASE - kern_delta;
 
 	/*
-	 * Read the page table to find out what is already mapped.
-	 * This assumes we have mapped a block of memory from KERNBASE
-	 * using a single L1 entry.
+	 * Create the l2 tables up to VM_MAX_KERNEL_ADDRESS.  We assume that the
+	 * loader allocated the first and only l2 page table page used to map
+	 * the kernel, preloaded files and module metadata.
 	 */
-	l2 = pmap_early_page_idx(l1pt, KERNBASE, &l1_slot, &l2_slot);
-
-	/* Sanity check the index, KERNBASE should be the first VA */
-	KASSERT(l2_slot == 0, ("The L2 index is non-zero"));
-
-	/* Find how many pages we have mapped */
-	for (; l2_slot < Ln_ENTRIES; l2_slot++) {
-		if ((l2[l2_slot] & ATTR_DESCR_MASK) == 0)
-			break;
-
-		/* Check locore used L2 blocks */
-		KASSERT((l2[l2_slot] & ATTR_DESCR_MASK) == L2_BLOCK,
-		    ("Invalid bootstrap L2 table"));
-		KASSERT((l2[l2_slot] & ~ATTR_MASK) == pa,
-		    ("Incorrect PA in L2 table"));
-
-		va += L2_SIZE;
-		pa += L2_SIZE;
-	}
-
-	va = roundup2(va, L1_SIZE);
-
-	/* Create the l2 tables up to VM_MAX_KERNEL_ADDRESS */
-	freemempos = pmap_bootstrap_l2(l1pt, va, freemempos);
+	freemempos = pmap_bootstrap_l2(l1pt, KERNBASE + L1_SIZE, freemempos);
 	/* And the l3 tables for the early devmap */
 	freemempos = pmap_bootstrap_l3(l1pt,
 	    VM_MAX_KERNEL_ADDRESS - (PMAP_MAPDEV_EARLY_SIZE), freemempos);
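
For reference, below is a minimal standalone sketch (userland C, not kernel
code) of the arithmetic that lets the removed scan be replaced by a constant:
if locore mapped the kernel with L2 blocks inside a single L1 entry starting
at an L1-aligned KERNBASE, then rounding the end of that mapping up to an L1
boundary always yields KERNBASE + L1_SIZE, which is what pmap_bootstrap_l2()
is now passed directly.  The constant values below (L2_SIZE, Ln_ENTRIES,
L1_SIZE, KERNBASE) are assumed arm64 4K-granule values for illustration, not
taken from the tree.

/*
 * Standalone illustration only; the constants are assumptions, not
 * included from the kernel headers.
 */
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define	L2_SIZE		(UINT64_C(2) * 1024 * 1024)	/* one L2 block: 2 MiB */
#define	Ln_ENTRIES	512				/* L2 entries per table page */
#define	L1_SIZE		(L2_SIZE * Ln_ENTRIES)		/* one L1 entry: 1 GiB */
#define	KERNBASE	UINT64_C(0xffff000000000000)	/* assumed, L1-aligned */

/* Round x up to a multiple of the power-of-two align. */
static uint64_t
roundup2_u64(uint64_t x, uint64_t align)
{
	return ((x + align - 1) & ~(align - 1));
}

int
main(void)
{
	uint64_t va;
	int nblocks;

	/*
	 * However many L2 blocks locore mapped (at least one, at most a
	 * full L2 page), rounding the end of the mapping up to the next
	 * L1 boundary always lands on KERNBASE + L1_SIZE, so the removed
	 * loop's result was effectively a constant.
	 */
	for (nblocks = 1; nblocks <= Ln_ENTRIES; nblocks++) {
		va = KERNBASE + (uint64_t)nblocks * L2_SIZE;
		assert(roundup2_u64(va, L1_SIZE) == KERNBASE + L1_SIZE);
	}
	printf("pmap_bootstrap_l2() start: %#" PRIx64 "\n",
	    KERNBASE + L1_SIZE);
	return (0);
}

The equivalence relies on at least one L2 block being mapped, which matches
the commit's stated assumption that the loader mapped the kernel, preloaded
files and module metadata through that single L2 page table page.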