Date: Tue, 16 Jun 2020 21:32:05 -0500
From: Justin Hibbits <chmeeedalf@gmail.com>
To: Mark Millard <marklmi@yahoo.com>
Cc: FreeBSD PowerPC ML <freebsd-ppc@freebsd.org>,
    Brandon Bergren <bdragon@FreeBSD.org>
Subject: Re: svn commit: r360233 - in head: contrib/jemalloc . . . : This
    partially breaks a 2-socket 32-bit powerpc (old PowerMac G4) based on
    head -r360311
Message-ID: <20200616213205.05f365dd@titan.knownspace>
In-Reply-To: <F27CB198-2169-4FB2-AA67-F8244C7D39C5@yahoo.com>
References: <C24EE1A1-FAED-42C2-8204-CA7B1D20A369@yahoo.com>
    <18E62746-80DB-4195-977D-4FF32D0129EE@yahoo.com>
    <F5953A6B-56CE-4D1C-8C18-58D44B639881@yahoo.com>
    <D0C483E5-3F6A-4816-A6BA-3D2C82C24F8E@yahoo.com>
    <C440956F-139E-4EF7-A68E-FE35D9934BD3@yahoo.com>
    <9562EEE4-62EF-4164-91C0-948CC0432984@yahoo.com>
    <9B68839B-AEC8-43EE-B3B6-B696A4A57DAE@yahoo.com>
    <359C9C7D-4106-42B5-AAB5-08EF995B8100@yahoo.com>
    <20200513105632.06db9e21@titan.knownspace>
    <B1225914-43BC-44EF-A73E-D06B890229C6@yahoo.com>
    <20200611155545.55526f7c@ralga.knownspace>
    <5542B85D-1C3A-41D8-98CE-3C02E990C3EB@yahoo.com>
    <20200611164216.47f82775@ralga.knownspace>
    <DEA9A860-5DEE-49EE-97F1-DBDB39D5C0A3@yahoo.com>
    <DCB0BC72-1666-49F3-A838-B2A0D653A0C2@yahoo.com>
    <20200611212532.59f677be@ralga.knownspace>
    <1EDCA498-0B67-4374-B7CA-1ECDA8EE32AD@yahoo.com>
    <3605089E-7B5D-4FBA-B0D1-14B789BDF09B@yahoo.com>
    <CE56E7B6-7189-41BD-9384-6E492FEA85F3@yahoo.com>
    <F27CB198-2169-4FB2-AA67-F8244C7D39C5@yahoo.com>
(Removing hackers and current, too many cross-lists already, and those
interested in reading this are probably already on ppc@)

Mark,

Can you try this updated patch?  Again, I've only compiled it, I haven't
tested it, so it may also explode.  However, it more closely mimics
exactly what moea64 does.

- Justin

[Attachment: moea_protect.diff]

diff --git a/sys/powerpc/aim/mmu_oea.c b/sys/powerpc/aim/mmu_oea.c
index c5b0b048a41..7d9181fe526 100644
--- a/sys/powerpc/aim/mmu_oea.c
+++ b/sys/powerpc/aim/mmu_oea.c
@@ -1767,6 +1767,62 @@ moea_pinit0(pmap_t pm)
 	bzero(&pm->pm_stats, sizeof(pm->pm_stats));
 }
 
+static void
+moea_pvo_protect(pmap_t pm, struct pvo_entry *pvo, vm_prot_t prot)
+{
+	struct pte *pt;
+	struct pte old_pte;
+	vm_page_t m;
+	int32_t refchg;
+
+	/*
+	 * Grab the PTE pointer before we diddle with the cached PTE
+	 * copy.
+	 */
+	pt = moea_pvo_to_pte(pvo, -1);
+
+	/* Cache old PTE for protection checks. */
+	old_pte = pvo->pvo_pte.pte;
+	/*
+	 * Change the protection of the page.
+	 */
+	pvo->pvo_pte.pte.pte_lo &= ~PTE_PP;
+	if ((prot & VM_PROT_WRITE) != VM_PROT_NONE)
+		pvo->pvo_pte.pte.pte_lo |= PTE_BW;
+	else
+		pvo->pvo_pte.pte.pte_lo |= PTE_BR;
+
+	/*
+	 * If the PVO is in the page table, update that pte as well.
+	 */
+	if (pt == NULL) {
+		refchg = (old_pte.pte_lo & PTE_BW) ? PTE_CHG : 0;
+	} else {
+		moea_pte_change(pt, &pvo->pvo_pte.pte, pvo->pvo_vaddr);
+		mtx_unlock(&moea_table_mutex);
+		refchg = (pt->pte_lo & (PTE_REF | PTE_CHG));
+	}
+
+	m = PHYS_TO_VM_PAGE(old_pte.pte_lo & PTE_RPGN);
+	if (pm != kernel_pmap && m != NULL &&
+	    (m->a.flags & PGA_EXECUTABLE) == 0 &&
+	    (pvo->pvo_pte.pa & (PTE_I | PTE_G)) == 0 &&
+	    (pm->pm_sr[PVO_VADDR(pvo) >> ADDR_SR_SHFT] & SR_N) == 0) {
+		if ((m->oflags & VPO_UNMANAGED) == 0)
+			vm_page_aflag_set(m, PGA_EXECUTABLE);
+		moea_syncicache(pvo->pvo_pte.pa & PTE_RPGN,
+		    PAGE_SIZE);
+	}
+	if (m != NULL && (pvo->pvo_vaddr & PVO_MANAGED) &&
+	    (old_pte.pte_lo & PTE_BW)) {
+		refchg = atomic_readandclear_32(&m->md.mdpg_attrs);
+		if (refchg & PTE_CHG)
+			vm_page_dirty(m);
+		if (refchg & PTE_REF)
+			vm_page_aflag_set(m, PGA_REFERENCED);
+	}
+}
+
 /*
  * Set the physical protection on the specified range of this map as requested.
  */
@@ -1775,7 +1831,6 @@ moea_protect(pmap_t pm, vm_offset_t sva, vm_offset_t eva,
     vm_prot_t prot)
 {
 	struct pvo_entry *pvo, *tpvo, key;
-	struct pte *pt;
 
 	KASSERT(pm == &curproc->p_vmspace->vm_pmap || pm == kernel_pmap,
 	    ("moea_protect: non current pmap"));
@@ -1791,25 +1846,7 @@ moea_protect(pmap_t pm, vm_offset_t sva, vm_offset_t eva,
 	for (pvo = RB_NFIND(pvo_tree, &pm->pmap_pvo, &key);
 	    pvo != NULL && PVO_VADDR(pvo) < eva; pvo = tpvo) {
 		tpvo = RB_NEXT(pvo_tree, &pm->pmap_pvo, pvo);
-
-		/*
-		 * Grab the PTE pointer before we diddle with the cached PTE
-		 * copy.
-		 */
-		pt = moea_pvo_to_pte(pvo, -1);
-		/*
-		 * Change the protection of the page.
-		 */
-		pvo->pvo_pte.pte.pte_lo &= ~PTE_PP;
-		pvo->pvo_pte.pte.pte_lo |= PTE_BR;
-
-		/*
-		 * If the PVO is in the page table, update that pte as well.
-		 */
-		if (pt != NULL) {
-			moea_pte_change(pt, &pvo->pvo_pte.pte, pvo->pvo_vaddr);
-			mtx_unlock(&moea_table_mutex);
-		}
+		moea_pvo_protect(pm, pvo, prot);
 	}
 
 	rw_wunlock(&pvh_global_lock);
 	PMAP_UNLOCK(pm);