Date: Thu, 17 Jun 2010 21:08:40 +0530
From: "Jayachandran C." <c.jayachandran@gmail.com>
To: Randall Stewart <rrs@lakerest.net>, Juli Mallett <jmallett@freebsd.org>, "M. Warner Losh" <imp@bsdimp.com>, freebsd-mips@freebsd.org
Subject: Merging 64 bit changes to -HEAD - part 2
Message-ID: <AANLkTimmIBz1sfRx8I8E0OhNQz8wEtBihr1YZ26QfYL0@mail.gmail.com>
[-- Attachment #1 --]

On Tue, Jun 15, 2010 at 7:06 PM, Jayachandran C. <c.jayachandran@gmail.com> wrote:
> I have volunteered to merge Juli's 64-bit work into HEAD, and
> hopefully get it to work on XLR too. The tree
> (http://svn.freebsd.org/base/user/jmallett/octeon) has quite a bit of
> changes, so I would like to do this over multiple changesets and
> without breaking the current o32 code.

Here's part 2, containing two patches:

pmap-PTE-to-PG.patch: This is a renaming patch with minor cleanup. The PTE_* flags are renamed to PG_, and the related changes are made to the other files. I have tried to keep this patch limited to just the renaming and the changes related to it; I will make another patch for the rest of the minor changes in pmap.c.

My comment on this patch: the name PG_C_CNC for value 3 in pte.h may be confusing, at least on XLR. We don't have a cached non-coherent mode; the cached memory is coherent (except the L1 I-cache), so I would prefer the names PG_CACHED and PG_UNCACHED.

pmap-lgmem-lock-remove.patch: Remove the lock in local_sysmaps, and the sched_pin()/unpin() calls in the PMAP_LMEM_ macros.

The 64-bit support changes would be next - comments on the patches, especially the first one, are welcome.

Thanks,
JC.

[-- Attachment #2 --]

Index: sys/mips/include/vm.h =================================================================== --- sys/mips/include/vm.h (revision 209243) +++ sys/mips/include/vm.h (working copy) @@ -32,8 +32,8 @@ #include <machine/pte.h> /* Memory attributes.
*/ -#define VM_MEMATTR_UNCACHED ((vm_memattr_t)PTE_UNCACHED) -#define VM_MEMATTR_CACHEABLE_NONCOHERENT ((vm_memattr_t)PTE_CACHE) +#define VM_MEMATTR_UNCACHED ((vm_memattr_t)PG_C_UC) +#define VM_MEMATTR_CACHEABLE_NONCOHERENT ((vm_memattr_t)PG_C_CNC) #define VM_MEMATTR_DEFAULT VM_MEMATTR_CACHEABLE_NONCOHERENT Index: sys/mips/include/pte.h =================================================================== --- sys/mips/include/pte.h (revision 209243) +++ sys/mips/include/pte.h (working copy) @@ -63,7 +63,7 @@ #define TLBLO_PFN_TO_PA(pfn) ((vm_paddr_t)((pfn) >> TLBLO_PFN_SHIFT) << TLB_PAGE_SHIFT) #define TLBLO_PTE_TO_PFN(pte) ((pte) & TLBLO_PFN_MASK) #define TLBLO_PTE_TO_PA(pte) (TLBLO_PFN_TO_PA(TLBLO_PTE_TO_PFN((pte)))) - + /* * VPN for EntryHi register. Upper two bits select user, supervisor, * or kernel. Bits 61 to 40 copy bit 63. VPN2 is bits 39 and down to @@ -76,54 +76,49 @@ #define TLBHI_ENTRY(va, asid) (((va) & ~PAGE_MASK) | ((asid) & TLBHI_ASID_MASK)) #ifndef _LOCORE -typedef unsigned int pt_entry_t; -typedef pt_entry_t *pd_entry_t; +typedef uint32_t pt_entry_t; +typedef pt_entry_t *pd_entry_t; #endif + #define PDESIZE sizeof(pd_entry_t) /* for assembly files */ #define PTESIZE sizeof(pt_entry_t) /* for assembly files */ -#define PT_ENTRY_NULL ((pt_entry_t *) 0) +/* + * TLB flags managed in hardware: + * C: Cache attribute. + * D: Dirty bit. This means a page is writable. It is not + * set at first, and a write is trapped, and the dirty + * bit is set. See also PG_RO. + * V: Valid bit. Obvious, isn't it? + * G: Global bit. This means that this mapping is present + * in EVERY address space, and to ignore the ASID when + * it is matched. + */ +#define PG_C(attr) ((attr & 0x07) << 3) +#define PG_C_UC (PG_C(0x02)) +#define PG_C_CNC (PG_C(0x03)) +#define PG_D 0x04 +#define PG_V 0x02 +#define PG_G 0x01 -#define PTE_WIRED 0x80000000 /* SW */ -#define PTE_W PTE_WIRED -#define PTE_RO 0x40000000 /* SW */ +/* + * VM flags managed in software: + * RO: Read only. 
Never set PG_D on this page, and don't + * listen to requests to write to it. + * W: Wired. ??? + */ +#define PG_RO (0x01 << TLBLO_SWBITS_SHIFT) +#define PG_W (0x02 << TLBLO_SWBITS_SHIFT) -#define PTE_G 0x00000001 /* HW */ -#define PTE_V 0x00000002 -/*#define PTE_NV 0x00000000 Not Used */ -#define PTE_M 0x00000004 -#define PTE_RW PTE_M -#define PTE_ODDPG 0x00001000 -/*#define PG_ATTR 0x0000003f Not Used */ -#define PTE_UNCACHED 0x00000010 -#ifdef CPU_SB1 -#define PTE_CACHE 0x00000028 /* cacheable coherent */ -#else -#define PTE_CACHE 0x00000018 -#endif -/*#define PG_CACHEMODE 0x00000038 Not Used*/ -#define PTE_ROPAGE (PTE_V | PTE_RO | PTE_CACHE) /* Write protected */ -#define PTE_RWPAGE (PTE_V | PTE_M | PTE_CACHE) /* Not wr-prot not clean */ -#define PTE_CWPAGE (PTE_V | PTE_CACHE) /* Not wr-prot but clean */ -#define PTE_IOPAGE (PTE_G | PTE_V | PTE_M | PTE_UNCACHED) -#define PTE_FRAME 0x3fffffc0 -#define PTE_HVPN 0xffffe000 /* Hardware page no mask */ -#define PTE_ASID 0x000000ff /* Address space ID */ +/* + * PTE management functions for bits defined above. + * + * XXX Can make these atomics, but some users of them are using PTEs in local + * registers and such and don't need the overhead. 
+ */ +#define pte_clear(pte, bit) ((*pte) &= ~(bit)) +#define pte_set(pte, bit) ((*pte) |= (bit)) +#define pte_test(pte, bit) (((*pte) & (bit)) == (bit)) - -/* User virtual to pte offset in page table */ -#define vad_to_pte_offset(adr) (((adr) >> PAGE_SHIFT) & (NPTEPG -1)) - -#define mips_pg_v(entry) ((entry) & PTE_V) -#define mips_pg_wired(entry) ((entry) & PTE_WIRED) -#define mips_pg_m_bit() (PTE_M) -#define mips_pg_rw_bit() (PTE_M) -#define mips_pg_ro_bit() (PTE_RO) -#define mips_pg_ropage_bit() (PTE_ROPAGE) -#define mips_pg_rwpage_bit() (PTE_RWPAGE) -#define mips_pg_cwpage_bit() (PTE_CWPAGE) -#define mips_pg_global_bit() (PTE_G) -#define mips_pg_wired_bit() (PTE_WIRED) - #endif /* !_MACHINE_PTE_H_ */ Index: sys/mips/mips/vm_machdep.c =================================================================== --- sys/mips/mips/vm_machdep.c (revision 209243) +++ sys/mips/mips/vm_machdep.c (working copy) @@ -219,7 +219,7 @@ */ for (i = 0; i < KSTACK_PAGES; i++) { pte = pmap_pte(kernel_pmap, td->td_kstack + i * PAGE_SIZE); - td->td_md.md_upte[i] = *pte & ~(PTE_RO|PTE_WIRED); + td->td_md.md_upte[i] = *pte & ~TLBLO_SWBITS_MASK; } } @@ -241,7 +241,7 @@ for (i = 0; i < KSTACK_PAGES; i++) { pte = pmap_pte(kernel_pmap, td->td_kstack + i * PAGE_SIZE); - td->td_md.md_upte[i] = *pte & ~(PTE_RO|PTE_WIRED); + td->td_md.md_upte[i] = *pte & ~TLBLO_SWBITS_MASK; } } Index: sys/mips/mips/exception.S =================================================================== --- sys/mips/mips/exception.S (revision 209243) +++ sys/mips/mips/exception.S (working copy) @@ -815,7 +815,7 @@ lw k0, 0(k1) # k0=this PTE /* Validate page table entry. 
*/ - andi k0, PTE_V + andi k0, PG_V beqz k0, 3f nop Index: sys/mips/mips/pmap.c =================================================================== --- sys/mips/mips/pmap.c (revision 209243) +++ sys/mips/mips/pmap.c (working copy) @@ -68,7 +68,6 @@ #include <sys/cdefs.h> __FBSDID("$FreeBSD$"); -#include "opt_ddb.h" #include "opt_msgbuf.h" #include <sys/param.h> #include <sys/systm.h> @@ -120,22 +119,13 @@ /* * Get PDEs and PTEs for user/kernel address space */ -#define pmap_pde(m, v) (&((m)->pm_segtab[(vm_offset_t)(v) >> SEGSHIFT])) +#define pmap_pde(m, v) (&((m)->pm_segtab[(vm_offset_t)(v) >> SEGSHIFT])) #define segtab_pde(m, v) (m[(vm_offset_t)(v) >> SEGSHIFT]) -#define pmap_pte_w(pte) ((*(int *)pte & PTE_W) != 0) -#define pmap_pde_v(pte) ((*(int *)pte) != 0) -#define pmap_pte_m(pte) ((*(int *)pte & PTE_M) != 0) -#define pmap_pte_v(pte) ((*(int *)pte & PTE_V) != 0) - -#define pmap_pte_set_w(pte, v) ((v)?(*(int *)pte |= PTE_W):(*(int *)pte &= ~PTE_W)) -#define pmap_pte_set_prot(pte, v) ((*(int *)pte &= ~PG_PROT), (*(int *)pte |= (v))) - #define MIPS_SEGSIZE (1L << SEGSHIFT) #define mips_segtrunc(va) ((va) & ~(MIPS_SEGSIZE-1)) -#define pmap_TLB_invalidate_all() MIPS_TBIAP() -#define pmap_va_asid(pmap, va) ((va) | ((pmap)->pm_asid[PCPU_GET(cpuid)].asid << VMTLB_PID_SHIFT)) #define is_kernel_pmap(x) ((x) == kernel_pmap) +#define vad_to_pte_offset(adr) (((adr) >> PAGE_SHIFT) & (NPTEPG -1)) struct pmap kernel_pmap_store; pd_entry_t *kernel_segmap; @@ -172,9 +162,10 @@ static int pmap_remove_pte(struct pmap *pmap, pt_entry_t *ptq, vm_offset_t va); static void pmap_remove_page(struct pmap *pmap, vm_offset_t va); static void pmap_remove_entry(struct pmap *pmap, vm_page_t m, vm_offset_t va); -static boolean_t pmap_testbit(vm_page_t m, int bit); static boolean_t pmap_try_insert_pv_entry(pmap_t pmap, vm_page_t mpte, vm_offset_t va, vm_page_t m); +static __inline void +pmap_invalidate_page(pmap_t pmap, vm_offset_t va); static vm_page_t pmap_allocpte(pmap_t pmap, vm_offset_t 
va, int flags); @@ -221,10 +212,10 @@ sched_pin(); \ va = sysm->base; \ npte = TLBLO_PA_TO_PFN(phys) | \ - PTE_RW | PTE_V | PTE_G | PTE_W | PTE_CACHE; \ + PG_D | PG_V | PG_G | PG_W | PG_C_CNC; \ pte = pmap_pte(kernel_pmap, va); \ *pte = npte; \ - sysm->valid1 = 1; + sysm->valid1 = 1 #define PMAP_LMEM_MAP2(va1, phys1, va2, phys2) \ int cpu; \ @@ -239,28 +230,28 @@ va1 = sysm->base; \ va2 = sysm->base + PAGE_SIZE; \ npte = TLBLO_PA_TO_PFN(phys1) | \ - PTE_RW | PTE_V | PTE_G | PTE_W | PTE_CACHE; \ + PG_D | PG_V | PG_G | PG_W | PG_C_CNC; \ pte = pmap_pte(kernel_pmap, va1); \ *pte = npte; \ npte = TLBLO_PA_TO_PFN(phys2) | \ - PTE_RW | PTE_V | PTE_G | PTE_W | PTE_CACHE; \ + PG_D | PG_V | PG_G | PG_W | PG_C_CNC; \ pte = pmap_pte(kernel_pmap, va2); \ *pte = npte; \ sysm->valid1 = 1; \ - sysm->valid2 = 1; + sysm->valid2 = 1 #define PMAP_LMEM_UNMAP() \ pte = pmap_pte(kernel_pmap, sysm->base); \ - *pte = PTE_G; \ + *pte = PG_G; \ tlb_invalidate_address(kernel_pmap, sysm->base); \ sysm->valid1 = 0; \ pte = pmap_pte(kernel_pmap, sysm->base + PAGE_SIZE); \ - *pte = PTE_G; \ + *pte = PG_G; \ tlb_invalidate_address(kernel_pmap, sysm->base + PAGE_SIZE); \ sysm->valid2 = 0; \ sched_unpin(); \ intr_restore(intr); \ - PMAP_LGMEM_UNLOCK(sysm); + PMAP_LGMEM_UNLOCK(sysm) pd_entry_t pmap_segmap(pmap_t pmap, vm_offset_t va) @@ -475,7 +466,7 @@ * in the tlb. 
*/ for (i = 0, pte = pgtab; i < (nkpt * NPTEPG); i++, pte++) - *pte = PTE_G; + *pte = PG_G; /* * The segment table contains the KVA of the pages in the second @@ -551,7 +542,7 @@ static int pmap_nw_modified(pt_entry_t pte) { - if ((pte & (PTE_M | PTE_RO)) == (PTE_M | PTE_RO)) + if ((pte & (PG_D | PG_RO)) == (PG_D | PG_RO)) return (1); else return (0); @@ -702,8 +693,8 @@ PMAP_LOCK(pmap); retry: pte = *pmap_pte(pmap, va); - if (pte != 0 && pmap_pte_v(&pte) && - ((pte & PTE_RW) || (prot & VM_PROT_WRITE) == 0)) { + if (pte != 0 && pte_test(&pte, PG_V) && + (pte_test(&pte, PG_D) || (prot & VM_PROT_WRITE) == 0)) { if (vm_page_pa_tryrelock(pmap, TLBLO_PTE_TO_PA(pte), &pa)) goto retry; @@ -725,18 +716,18 @@ /* PMAP_INLINE */ void pmap_kenter(vm_offset_t va, vm_paddr_t pa) { - register pt_entry_t *pte; - pt_entry_t npte, opte; + pt_entry_t *pte; + pt_entry_t opte, npte; #ifdef PMAP_DEBUG - printf("pmap_kenter: va: 0x%08x -> pa: 0x%08x\n", va, pa); + printf("pmap_kenter: va: %p -> pa: %p\n", (void *)va, (void *)pa); #endif - npte = TLBLO_PA_TO_PFN(pa) | PTE_RW | PTE_V | PTE_G | PTE_W; + npte = TLBLO_PA_TO_PFN(pa) | PG_D | PG_V | PG_G | PG_W; if (is_cacheable_mem(pa)) - npte |= PTE_CACHE; + npte |= PG_C_CNC; else - npte |= PTE_UNCACHED; + npte |= PG_C_UC; pte = pmap_pte(kernel_pmap, va); opte = *pte; @@ -751,7 +742,7 @@ /* PMAP_INLINE */ void pmap_kremove(vm_offset_t va) { - register pt_entry_t *pte; + pt_entry_t *pte; /* * Write back all caches from the page being destroyed @@ -759,7 +750,7 @@ mips_dcache_wbinv_range_index(va, PAGE_SIZE); pte = pmap_pte(kernel_pmap, va); - *pte = PTE_G; + *pte = PG_G; pmap_invalidate_page(kernel_pmap, va); } @@ -1232,7 +1223,7 @@ * produce a global bit to store in the tlb. 
*/ for (i = 0; i < NPTEPG; i++, pte++) - *pte = PTE_G; + *pte = PG_G; kernel_vm_end = (kernel_vm_end + PAGE_SIZE * NPTEPG) & ~(PAGE_SIZE * NPTEPG - 1); @@ -1312,12 +1303,12 @@ KASSERT(pte != NULL, ("pte")); oldpte = loadandclear((u_int *)pte); if (is_kernel_pmap(pmap)) - *pte = PTE_G; - KASSERT((oldpte & PTE_W) == 0, + *pte = PG_G; + KASSERT(!pte_test(&oldpte, PG_W), ("wired pte for unwired page")); if (m->md.pv_flags & PV_TABLE_REF) vm_page_flag_set(m, PG_REFERENCED); - if (oldpte & PTE_M) + if (pte_test(&oldpte, PG_D)) vm_page_dirty(m); pmap_invalidate_page(pmap, va); TAILQ_REMOVE(&pmap->pm_pvlist, pv, pv_plist); @@ -1455,9 +1446,9 @@ oldpte = loadandclear((u_int *)ptq); if (is_kernel_pmap(pmap)) - *ptq = PTE_G; + *ptq = PG_G; - if (oldpte & PTE_W) + if (pte_test(&oldpte, PG_W)) pmap->pm_stats.wired_count -= 1; pmap->pm_stats.resident_count -= 1; @@ -1465,7 +1456,7 @@ if (page_is_managed(pa)) { m = PHYS_TO_VM_PAGE(pa); - if (oldpte & PTE_M) { + if (pte_test(&oldpte, PG_D)) { #if defined(PMAP_DIAGNOSTIC) if (pmap_nw_modified(oldpte)) { printf( @@ -1490,7 +1481,7 @@ static void pmap_remove_page(struct pmap *pmap, vm_offset_t va) { - register pt_entry_t *ptq; + pt_entry_t *ptq; mtx_assert(&vm_page_queue_mtx, MA_OWNED); PMAP_LOCK_ASSERT(pmap, MA_OWNED); @@ -1499,7 +1490,7 @@ /* * if there is no pte for this address, just skip it!!! */ - if (!ptq || !pmap_pte_v(ptq)) { + if (!ptq || !pte_test(ptq, PG_V)) { return; } @@ -1575,8 +1566,8 @@ void pmap_remove_all(vm_page_t m) { - register pv_entry_t pv; - register pt_entry_t *pte, tpte; + pv_entry_t pv; + pt_entry_t *pte, tpte; KASSERT((m->flags & PG_FICTITIOUS) == 0, ("pmap_remove_all: page %p is fictitious", m)); @@ -1601,15 +1592,15 @@ tpte = loadandclear((u_int *)pte); if (is_kernel_pmap(pv->pv_pmap)) - *pte = PTE_G; + *pte = PG_G; - if (tpte & PTE_W) + if (pte_test(&tpte, PG_W)) pv->pv_pmap->pm_stats.wired_count--; /* * Update the vm_page_t clean and reference bits. 
*/ - if (tpte & PTE_M) { + if (pte_test(&tpte, PG_D)) { #if defined(PMAP_DIAGNOSTIC) if (pmap_nw_modified(tpte)) { printf( @@ -1671,7 +1662,7 @@ * If pte is invalid, skip this page */ pte = pmap_pte(pmap, sva); - if (!pmap_pte_v(pte)) { + if (!pte_test(pte, PG_V)) { sva += PAGE_SIZE; continue; } @@ -1679,12 +1670,13 @@ obits = pbits = *pte; pa = TLBLO_PTE_TO_PA(pbits); - if (page_is_managed(pa) && (pbits & PTE_M) != 0) { + if (page_is_managed(pa) && pte_test(&pbits, PG_D)) { m = PHYS_TO_VM_PAGE(pa); vm_page_dirty(m); m->md.pv_flags &= ~PV_TABLE_MOD; } - pbits = (pbits & ~PTE_M) | PTE_RO; + pte_clear(&pbits, PG_D); + pte_set(&pbits, PG_RO); if (pbits != *pte) { if (!atomic_cmpset_int((u_int *)pte, obits, pbits)) @@ -1714,7 +1706,7 @@ vm_prot_t prot, boolean_t wired) { vm_offset_t pa, opa; - register pt_entry_t *pte; + pt_entry_t *pte; pt_entry_t origpte, newpte; pv_entry_t pv; vm_page_t mpte, om; @@ -1758,16 +1750,16 @@ /* * Mapping has not changed, must be protection or wiring change. */ - if ((origpte & PTE_V) && (opa == pa)) { + if (pte_test(&origpte, PG_V) && opa == pa) { /* * Wiring change, just update stats. We don't worry about * wiring PT pages as they remain resident as long as there * are valid mappings in them. Hence, if a user page is * wired, the PT page will be also. */ - if (wired && ((origpte & PTE_W) == 0)) + if (wired && !pte_test(&origpte, PG_W)) pmap->pm_stats.wired_count++; - else if (!wired && (origpte & PTE_W)) + else if (!wired && pte_test(&origpte, PG_W)) pmap->pm_stats.wired_count--; #if defined(PMAP_DIAGNOSTIC) @@ -1797,7 +1789,7 @@ * handle validating new mapping. 
*/ if (opa) { - if (origpte & PTE_W) + if (pte_test(&origpte, PG_W)) pmap->pm_stats.wired_count--; if (page_is_managed(opa)) { @@ -1845,31 +1837,30 @@ rw = init_pte_prot(va, m, prot); #ifdef PMAP_DEBUG - printf("pmap_enter: va: 0x%08x -> pa: 0x%08x\n", va, pa); + printf("pmap_enter: va: %p -> pa: %p\n", (void *)va, (void *)pa); #endif /* * Now validate mapping with desired protection/wiring. */ - newpte = TLBLO_PA_TO_PFN(pa) | rw | PTE_V; + newpte = TLBLO_PA_TO_PFN(pa) | rw | PG_V; if (is_cacheable_mem(pa)) - newpte |= PTE_CACHE; + newpte |= PG_C_CNC; else - newpte |= PTE_UNCACHED; + newpte |= PG_C_UC; if (wired) - newpte |= PTE_W; + newpte |= PG_W; - if (is_kernel_pmap(pmap)) { - newpte |= PTE_G; - } + if (is_kernel_pmap(pmap)) + newpte |= PG_G; /* * if the mapping or permission bits are different, we need to * update the pte. */ if (origpte != newpte) { - if (origpte & PTE_V) { + if (pte_test(&origpte, PG_V)) { *pte = newpte; if (page_is_managed(opa) && (opa != pa)) { if (om->md.pv_flags & PV_TABLE_REF) @@ -1877,8 +1868,8 @@ om->md.pv_flags &= ~(PV_TABLE_REF | PV_TABLE_MOD); } - if (origpte & PTE_M) { - KASSERT((origpte & PTE_RW), + if (pte_test(&origpte, PG_D)) { + KASSERT(!pte_test(&origpte, PG_RO), ("pmap_enter: modified page not writable:" " va: %p, pte: 0x%x", (void *)va, origpte)); if (page_is_managed(opa)) @@ -1986,7 +1977,7 @@ } pte = pmap_pte(pmap, va); - if (pmap_pte_v(pte)) { + if (pte_test(pte, PG_V)) { if (mpte != NULL) { mpte->wire_count--; mpte = NULL; @@ -2016,17 +2007,17 @@ /* * Now validate mapping with RO protection */ - *pte = TLBLO_PA_TO_PFN(pa) | PTE_V; + *pte = TLBLO_PA_TO_PFN(pa) | PG_V; if (is_cacheable_mem(pa)) - *pte |= PTE_CACHE; + *pte |= PG_C_CNC; else - *pte |= PTE_UNCACHED; + *pte |= PG_C_UC; if (is_kernel_pmap(pmap)) - *pte |= PTE_G; + *pte |= PG_G; else { - *pte |= PTE_RO; + *pte |= PG_RO; /* * Sync I & D caches. Do this only if the the target pmap * belongs to the current process. 
Otherwise, an @@ -2069,7 +2060,7 @@ cpu = PCPU_GET(cpuid); sysm = &sysmap_lmem[cpu]; /* Since this is for the debugger, no locks or any other fun */ - npte = TLBLO_PA_TO_PFN(pa) | PTE_RW | PTE_V | PTE_G | PTE_W | PTE_CACHE; + npte = TLBLO_PA_TO_PFN(pa) | PG_D | PG_V | PG_G | PG_W | PG_C_CNC; pte = pmap_pte(kernel_pmap, sysm->base); *pte = npte; sysm->valid1 = 1; @@ -2098,7 +2089,7 @@ intr = intr_disable(); pte = pmap_pte(kernel_pmap, sysm->base); - *pte = PTE_G; + *pte = PG_G; pmap_invalidate_page(kernel_pmap, sysm->base); intr_restore(intr); sysm->valid1 = 0; @@ -2168,7 +2159,7 @@ void pmap_change_wiring(pmap_t pmap, vm_offset_t va, boolean_t wired) { - register pt_entry_t *pte; + pt_entry_t *pte; if (pmap == NULL) return; @@ -2176,16 +2167,19 @@ PMAP_LOCK(pmap); pte = pmap_pte(pmap, va); - if (wired && !pmap_pte_w(pte)) + if (wired && !pte_test(pte, PG_W)) pmap->pm_stats.wired_count++; - else if (!wired && pmap_pte_w(pte)) + else if (!wired && pte_test(pte, PG_W)) pmap->pm_stats.wired_count--; /* * Wiring is not a hardware characteristic so there is no need to * invalidate TLB. */ - pmap_pte_set_w(pte, wired); + if (wired) + pte_set(pte, PG_W); + else + pte_clear(pte, PG_W); PMAP_UNLOCK(pmap); } @@ -2371,18 +2365,18 @@ for (pv = TAILQ_FIRST(&pmap->pm_pvlist); pv; pv = npv) { pte = pmap_pte(pv->pv_pmap, pv->pv_va); - if (!pmap_pte_v(pte)) + if (!pte_test(pte, PG_V)) panic("pmap_remove_pages: page on pm_pvlist has no pte\n"); tpte = *pte; /* * We cannot remove wired pages from a process' mapping at this time */ - if (tpte & PTE_W) { + if (pte_test(&tpte, PG_W)) { npv = TAILQ_NEXT(pv, pv_plist); continue; } - *pte = is_kernel_pmap(pmap) ? PTE_G : 0; + *pte = is_kernel_pmap(pmap) ? PG_G : 0; m = PHYS_TO_VM_PAGE(TLBLO_PTE_TO_PA(tpte)); KASSERT(m != NULL, @@ -2393,7 +2387,7 @@ /* * Update the vm_page_t clean and reference bits. 
*/ - if (tpte & PTE_M) { + if (pte_test(&tpte, PG_D)) { vm_page_dirty(m); } npv = TAILQ_NEXT(pv, pv_plist); @@ -2441,7 +2435,7 @@ #endif PMAP_LOCK(pv->pv_pmap); pte = pmap_pte(pv->pv_pmap, pv->pv_va); - rv = (*pte & bit) != 0; + rv = pte_test(pte, bit); PMAP_UNLOCK(pv->pv_pmap); if (rv) break; @@ -2450,13 +2444,13 @@ } /* - * this routine is used to modify bits in ptes + * this routine is used to clear dirty bits in ptes */ static __inline void pmap_changebit(vm_page_t m, int bit, boolean_t setem) { - register pv_entry_t pv; - register pt_entry_t *pte; + pv_entry_t pv; + pt_entry_t *pte; if (m->flags & PG_FICTITIOUS) return; @@ -2484,12 +2478,11 @@ vm_offset_t pbits = *(vm_offset_t *)pte; if (pbits & bit) { - if (bit == PTE_RW) { - if (pbits & PTE_M) { + if (bit == PG_D) { + if (pbits & PG_D) { vm_page_dirty(m); } - *(int *)pte = (pbits & ~(PTE_M | PTE_RW)) | - PTE_RO; + *(int *)pte = (pbits & ~PG_D) | PG_RO; } else { *(int *)pte = pbits & ~bit; } @@ -2498,7 +2491,7 @@ } PMAP_UNLOCK(pv->pv_pmap); } - if (!setem && bit == PTE_RW) + if (!setem && bit == PG_D) vm_page_flag_clear(m, PG_WRITEABLE); } @@ -2555,8 +2548,7 @@ for (pv = TAILQ_FIRST(&m->md.pv_list); pv; pv = npv) { npv = TAILQ_NEXT(pv, pv_plist); pte = pmap_pte(pv->pv_pmap, pv->pv_va); - - if ((pte == NULL) || !mips_pg_v(*pte)) + if (pte == NULL || !pte_test(pte, PG_V)) panic("page on pm_pvlist has no pte\n"); va = pv->pv_va; @@ -2604,7 +2596,7 @@ /* * If the page is not VPO_BUSY, then PG_WRITEABLE cannot be * concurrently set while the object is locked. Thus, if PG_WRITEABLE - * is clear, no PTEs can have PTE_M set. + * is clear, no PTEs can have PG_D set. 
*/ VM_OBJECT_LOCK_ASSERT(m->object, MA_OWNED); if ((m->oflags & VPO_BUSY) == 0 && @@ -2614,7 +2606,7 @@ if (m->md.pv_flags & PV_TABLE_MOD) rv = TRUE; else - rv = pmap_testbit(m, PTE_M); + rv = pmap_testbit(m, PG_D); vm_page_unlock_queues(); return (rv); } @@ -2657,7 +2649,7 @@ ("pmap_clear_modify: page %p is busy", m)); /* - * If the page is not PG_WRITEABLE, then no PTEs can have PTE_M set. + * If the page is not PG_WRITEABLE, then no PTEs can have PG_D set. * If the object containing the page is locked and the page is not * VPO_BUSY, then PG_WRITEABLE cannot be concurrently set. */ @@ -2665,7 +2657,7 @@ return; vm_page_lock_queues(); if (m->md.pv_flags & PV_TABLE_MOD) { - pmap_changebit(m, PTE_M, FALSE); + pmap_changebit(m, PG_D, FALSE); m->md.pv_flags &= ~PV_TABLE_MOD; } vm_page_unlock_queues(); @@ -2784,12 +2776,12 @@ retry: ptep = pmap_pte(pmap, addr); pte = (ptep != NULL) ? *ptep : 0; - if (!mips_pg_v(pte)) { + if (!pte_test(&pte, PG_V)) { val = 0; goto out; } val = MINCORE_INCORE; - if ((pte & PTE_M) != 0) + if (pte_test(&pte, PG_D)) val |= MINCORE_MODIFIED | MINCORE_MODIFIED_OTHER; pa = TLBLO_PTE_TO_PA(pte); managed = page_is_managed(pa); @@ -2915,13 +2907,13 @@ unsigned base = i << SEGSHIFT; pde = &pmap->pm_segtab[i]; - if (pde && pmap_pde_v(pde)) { + if (pde && *pde != 0) { for (j = 0; j < 1024; j++) { vm_offset_t va = base + (j << PAGE_SHIFT); pte = pmap_pte(pmap, va); - if (pte && pmap_pte_v(pte)) { + if (pte && pte_test(pte, PG_V)) { vm_offset_t pa; vm_page_t m; @@ -3058,16 +3050,16 @@ int rw; if (!(prot & VM_PROT_WRITE)) - rw = PTE_ROPAGE; + rw = PG_V | PG_RO | PG_C_CNC; /* ROPAGE */ else if ((m->flags & (PG_FICTITIOUS | PG_UNMANAGED)) == 0) { if ((m->md.pv_flags & PV_TABLE_MOD) != 0) - rw = PTE_RWPAGE; + rw = PG_V | PG_D | PG_C_CNC; /* RWPAGE */ else - rw = PTE_CWPAGE; + rw = PG_V | PG_C_CNC; /* CWPAGE */ vm_page_flag_set(m, PG_WRITEABLE); } else /* Needn't emulate a modified bit for unmanaged pages. 
*/ - rw = PTE_RWPAGE; + rw = PG_V | PG_D | PG_C_CNC; /* RWPAGE */ return (rw); } Index: sys/mips/mips/machdep.c =================================================================== --- sys/mips/mips/machdep.c (revision 209243) +++ sys/mips/mips/machdep.c (working copy) @@ -421,7 +421,7 @@ * We use a wired tlb index to do this one-time mapping. */ pa = vtophys(pcpu); - pte = PTE_RW | PTE_V | PTE_G | PTE_CACHE; + pte = PG_D | PG_V | PG_G | PG_C_CNC; tlb_insert_wired(PCPU_TLB_ENTRY, (vm_offset_t)pcpup, TLBLO_PA_TO_PFN(pa) | pte, TLBLO_PA_TO_PFN(pa + PAGE_SIZE) | pte); Index: sys/mips/mips/trap.c =================================================================== --- sys/mips/mips/trap.c (revision 209243) +++ sys/mips/mips/trap.c (working copy) @@ -327,7 +327,7 @@ #ifdef SMP printf("cpuid = %d\n", PCPU_GET(cpuid)); #endif - MachTLBGetPID(pid); + pid = mips_rd_entryhi() & TLBHI_ASID_MASK; printf("badaddr = %#jx, pc = %#jx, ra = %#jx, sp = %#jx, sr = %jx, pid = %d, ASID = %u\n", (intmax_t)trapframe->badvaddr, (intmax_t)trapframe->pc, (intmax_t)trapframe->ra, (intmax_t)trapframe->sp, (intmax_t)trapframe->sr, @@ -378,23 +378,23 @@ panic("trap: ktlbmod: can't find PTE"); #ifdef SMP /* It is possible that some other CPU changed m-bit */ - if (!mips_pg_v(*pte) || (*pte & mips_pg_m_bit())) { + if (!pte_test(pte, PG_V) || pte_test(pte, PG_D)) { pmap_update_page(kernel_pmap, trapframe->badvaddr, *pte); PMAP_UNLOCK(kernel_pmap); return (trapframe->pc); } #else - if (!mips_pg_v(*pte) || (*pte & mips_pg_m_bit())) + if (!pte_test(pte, PG_V) || pte_test(pte, PG_D)) panic("trap: ktlbmod: invalid pte"); #endif - if (*pte & mips_pg_ro_bit()) { + if (pte_test(pte, PG_RO)) { /* write to read only page in the kernel */ ftype = VM_PROT_WRITE; PMAP_UNLOCK(kernel_pmap); goto kernel_fault; } - *pte |= mips_pg_m_bit(); + pte_set(pte, PG_D); pmap_update_page(kernel_pmap, trapframe->badvaddr, *pte); pa = TLBLO_PTE_TO_PA(*pte); if (!page_is_managed(pa)) @@ -417,23 +417,23 @@ panic("trap: utlbmod: 
can't find PTE"); #ifdef SMP /* It is possible that some other CPU changed m-bit */ - if (!mips_pg_v(*pte) || (*pte & mips_pg_m_bit())) { + if (!pte_test(pte, PG_V) || pte_test(pte, PG_D)) { pmap_update_page(pmap, trapframe->badvaddr, *pte); PMAP_UNLOCK(pmap); goto out; } #else - if (!mips_pg_v(*pte) || (*pte & mips_pg_m_bit())) + if (!pte_test(pte, PG_V) || pte_test(pte, PG_D)) panic("trap: utlbmod: invalid pte"); #endif - if (*pte & mips_pg_ro_bit()) { + if (pte_test(pte, PG_RO)) { /* write to read only page */ ftype = VM_PROT_WRITE; PMAP_UNLOCK(pmap); goto dofault; } - *pte |= mips_pg_m_bit(); + pte_set(pte, PG_D); pmap_update_page(pmap, trapframe->badvaddr, *pte); pa = TLBLO_PTE_TO_PA(*pte); if (!page_is_managed(pa)) [-- Attachment #3 --] Index: sys/mips/mips/pmap.c =================================================================== --- sys/mips/mips/pmap.c (revision 209243) +++ sys/mips/mips/pmap.c (working copy) @@ -195,7 +195,6 @@ static uma_zone_t ptpgzone; struct local_sysmaps { - struct mtx lock; vm_offset_t base; uint16_t valid1, valid2; }; @@ -214,11 +213,9 @@ struct local_sysmaps *sysm; \ pt_entry_t *pte, npte; \ \ + intr = intr_disable(); \ cpu = PCPU_GET(cpuid); \ sysm = &sysmap_lmem[cpu]; \ - PMAP_LGMEM_LOCK(sysm); \ - intr = intr_disable(); \ - sched_pin(); \ va = sysm->base; \ npte = TLBLO_PA_TO_PFN(phys) | \ PTE_RW | PTE_V | PTE_G | PTE_W | PTE_CACHE; \ @@ -231,11 +228,9 @@ struct local_sysmaps *sysm; \ pt_entry_t *pte, npte; \ \ + intr = intr_disable(); \ cpu = PCPU_GET(cpuid); \ sysm = &sysmap_lmem[cpu]; \ - PMAP_LGMEM_LOCK(sysm); \ - intr = intr_disable(); \ - sched_pin(); \ va1 = sysm->base; \ va2 = sysm->base + PAGE_SIZE; \ npte = TLBLO_PA_TO_PFN(phys1) | \ @@ -258,9 +253,7 @@ *pte = PTE_G; \ tlb_invalidate_address(kernel_pmap, sysm->base + PAGE_SIZE); \ sysm->valid2 = 0; \ - sched_unpin(); \ intr_restore(intr); \ - PMAP_LGMEM_UNLOCK(sysm) pd_entry_t pmap_segmap(pmap_t pmap, vm_offset_t va) @@ -436,7 +429,6 @@ sysmap_lmem[i].base = 
virtual_avail; virtual_avail += PAGE_SIZE * 2; sysmap_lmem[i].valid1 = sysmap_lmem[i].valid2 = 0; - PMAP_LGMEM_LOCK_INIT(&sysmap_lmem[i]); } } Index: sys/mips/include/pmap.h =================================================================== --- sys/mips/include/pmap.h (revision 209243) +++ sys/mips/include/pmap.h (working copy) @@ -116,12 +116,6 @@ #define PMAP_TRYLOCK(pmap) mtx_trylock(&(pmap)->pm_mtx) #define PMAP_UNLOCK(pmap) mtx_unlock(&(pmap)->pm_mtx) -#define PMAP_LGMEM_LOCK_INIT(sysmap) mtx_init(&(sysmap)->lock, "pmap-lgmem", \ - "per-cpu-map", (MTX_DEF| MTX_DUPOK)) -#define PMAP_LGMEM_LOCK(sysmap) mtx_lock(&(sysmap)->lock) -#define PMAP_LGMEM_UNLOCK(sysmap) mtx_unlock(&(sysmap)->lock) -#define PMAP_LGMEM_DESTROY(sysmap) mtx_destroy(&(sysmap)->lock) - /* * For each vm_page_t, there is a list of all currently valid virtual * mappings of that page. An entry is a pv_entry_t, the list is pv_table.
