Skip site navigation (1)Skip section navigation (2)
Date:      Thu, 11 Jun 2020 15:55:45 -0500
From:      Justin Hibbits <chmeeedalf@gmail.com>
To:        Mark Millard <marklmi@yahoo.com>
Cc:        "vangyzen@freebsd.org" <vangyzen@FreeBSD.org>, svn-src-head@freebsd.org, FreeBSD Current <freebsd-current@freebsd.org>, FreeBSD Hackers <freebsd-hackers@freebsd.org>, FreeBSD PowerPC ML <freebsd-ppc@freebsd.org>, Brandon Bergren <bdragon@FreeBSD.org>
Subject:   Re: svn commit: r360233 - in head: contrib/jemalloc . . . : This partially breaks a 2-socket 32-bit powerpc (old PowerMac G4) based on head -r360311
Message-ID:  <20200611155545.55526f7c@ralga.knownspace>
In-Reply-To: <B1225914-43BC-44EF-A73E-D06B890229C6@yahoo.com>
References:  <C24EE1A1-FAED-42C2-8204-CA7B1D20A369@yahoo.com> <8479DD58-44F6-446A-9CA5-D01F0F7C1B38@yahoo.com> <17ACDA02-D7EF-4F26-874A-BB3E935CD072@yahoo.com> <695E6836-F860-4557-B7DE-CC1EDB347F18@yahoo.com> <DCABCD83-27B0-4F2D-9410-69102294A98E@yahoo.com> <121B9B09-141B-4DC3-918B-1E7CFB99E779@yahoo.com> <8AAB0462-3FA8-490C-8D8D-7C15B1C9E2DE@yahoo.com> <18E62746-80DB-4195-977D-4FF32D0129EE@yahoo.com> <F5953A6B-56CE-4D1C-8C18-58D44B639881@yahoo.com> <D0C483E5-3F6A-4816-A6BA-3D2C82C24F8E@yahoo.com> <C440956F-139E-4EF7-A68E-FE35D9934BD3@yahoo.com> <9562EEE4-62EF-4164-91C0-948CC0432984@yahoo.com> <9B68839B-AEC8-43EE-B3B6-B696A4A57DAE@yahoo.com> <359C9C7D-4106-42B5-AAB5-08EF995B8100@yahoo.com> <20200513105632.06db9e21@titan.knownspace> <B1225914-43BC-44EF-A73E-D06B890229C6@yahoo.com>

next in thread | previous in thread | raw e-mail | index | archive | help
--MP_/K=rH3L.wMYzt5iz2sACaWQM
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

On Wed, 10 Jun 2020 18:56:57 -0700
Mark Millard <marklmi@yahoo.com> wrote:

> On 2020-May-13, at 08:56, Justin Hibbits <chmeeedalf@gmail.com> wrote:
> 
> > Hi Mark,  
> 
> Hello Justin.

Hi Mark,


> 
> > On Wed, 13 May 2020 01:43:23 -0700
> > Mark Millard <marklmi@yahoo.com> wrote:
> >   
> >> [I'm adding a reference to an old arm64/aarch64 bug that had
> >> pages turning to zero, in case this 32-bit powerpc issue is
> >> somewhat analogous.]
> >>   
> >>> . . .  
> > ...  
> >> . . .
> >> 
> >> (Note: dsl-only.net closed down, so the E-mail
> >> address reference is no longer valid.)
> >> 
> >> Author: kib
> >> Date: Mon Apr 10 15:32:26 2017
> >> New Revision: 316679
> >> URL: 
> >> https://svnweb.freebsd.org/changeset/base/316679
> >> 
> >> 
> >> Log:
> >> Do not lose dirty bits for removing PROT_WRITE on arm64.
> >> 
> >> Arm64 pmap interprets accessed writable ptes as modified, since
> >> ARMv8.0 does not track Dirty Bit Modifier in hardware. If writable
> >> bit is removed, page must be marked as dirty for MI VM.
> >> 
> >> This change is most important for COW, where fork caused losing
> >> content of the dirty pages which were not yet scanned by
> >> pagedaemon.
> >> 
> >> Reviewed by:	alc, andrew
> >> Reported and tested by:	Mark Millard <markmi at
> >> dsl-only.net> PR:	217138, 217239
> >> Sponsored by:	The FreeBSD Foundation
> >> MFC after:	2 weeks
> >> 
> >> Modified:
> >> head/sys/arm64/arm64/pmap.c
> >> 
> >> Modified: head/sys/arm64/arm64/pmap.c
> >> ==============================================================================
> >> --- head/sys/arm64/arm64/pmap.c	Mon Apr 10 12:35:58
> >> 2017	(r316678) +++ head/sys/arm64/arm64/pmap.c	Mon
> >> Apr 10 15:32:26 2017	(r316679) @@ -2481,6 +2481,11 @@
> >> pmap_protect(pmap_t pmap, vm_offset_t sv sva += L3_SIZE) {
> >> 			l3 = pmap_load(l3p);
> >> 			if (pmap_l3_valid(l3)) {
> >> +				if ((l3 & ATTR_SW_MANAGED) &&
> >> +				    pmap_page_dirty(l3)) {
> >> +
> >> vm_page_dirty(PHYS_TO_VM_PAGE(l3 &
> >> +					    ~ATTR_MASK));
> >> +				}
> >> 				pmap_set(l3p, ATTR_AP(ATTR_AP_RO));
> >> 				PTE_SYNC(l3p);
> >> 				/* XXX: Use pmap_invalidate_range
> >> */
> >> 
> >> . . .
> >>   
> > 
> > Thanks for this reference.  I took a quick look at the 3 pmap
> > implementations we have (haven't check the new radix pmap yet), and
> > it looks like only mmu_oea.c (32-bit AIM pmap, for G3 and G4) is
> > missing vm_page_dirty() calls in its pmap_protect() implementation,
> > analogous to the change you posted right above. Given this, I think
> > it's safe to say that this missing piece is necessary.  We'll work
> > on a fix for this; looking at moea64_protect(), there may be
> > additional work needed to support this as well, so it may take a
> > few days.  
> 
> Ping? Any clue when the above might happen?
> 
> I've been avoiding the old PowerMacs and leaving
> them at head -r360311 , pending an update that
> would avoid the kernel zeroing pages that it
> should not zero. But I've seen that you were busy
> with more modern contexts this last about a month.
> 
> And, clearly, my own context has left pending
> (for much longer) other more involved activities
> (compared to just periodically updating to
> more recent FreeBSD vintages).
> 
> ===
> Mark Millard
> marklmi at yahoo.com
> ( dsl-only.net went
> away in early 2018-Mar)
> 

Sorry for the delay, I got sidetracked with a bunch of other
development.  I did install a newer FreeBSD on my dual G4 and couldn't
see the problem.  That said, the attached patch effectively copies
what's done in OEA6464 into OEA pmap.  Can you test it?

- Justin

--MP_/K=rH3L.wMYzt5iz2sACaWQM
Content-Type: text/x-patch
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename=moea_protect.diff

diff --git a/sys/powerpc/aim/mmu_oea.c b/sys/powerpc/aim/mmu_oea.c
index c5b0b048a41..2f1422b36c4 100644
--- a/sys/powerpc/aim/mmu_oea.c
+++ b/sys/powerpc/aim/mmu_oea.c
@@ -1776,6 +1776,9 @@ moea_protect(pmap_t pm, vm_offset_t sva, vm_offset_t eva,
 {
 	struct	pvo_entry *pvo, *tpvo, key;
 	struct	pte *pt;
+	struct	pte old_pte;
+	vm_page_t	m;
+	int32_t	refchg;
 
 	KASSERT(pm == &curproc->p_vmspace->vm_pmap || pm == kernel_pmap,
 	    ("moea_protect: non current pmap"));
@@ -1803,12 +1806,31 @@ moea_protect(pmap_t pm, vm_offset_t sva, vm_offset_t eva,
 		pvo->pvo_pte.pte.pte_lo &= ~PTE_PP;
 		pvo->pvo_pte.pte.pte_lo |= PTE_BR;
 
+		old_pte = *pt;
+
 		/*
 		 * If the PVO is in the page table, update that pte as well.
 		 */
 		if (pt != NULL) {
 			moea_pte_change(pt, &pvo->pvo_pte.pte, pvo->pvo_vaddr);
+			if (pm != kernel_pmap && m != NULL &&
+			    (m->a.flags & PGA_EXECUTABLE) == 0 &&
+			    (pvo->pvo_pte.pa & (PTE_I | PTE_G)) == 0) {
+				if ((m->oflags & VPO_UNMANAGED) == 0)
+					vm_page_aflag_set(m, PGA_EXECUTABLE);
+				moea_syncicache(pvo->pvo_pte.pa & PTE_RPGN,
+				    PAGE_SIZE);
+			}
 			mtx_unlock(&moea_table_mutex);
+			if ((pvo->pvo_vaddr & PVO_MANAGED) &&
+			    (pvo->pvo_pte.prot & VM_PROT_WRITE)) {
+				m = PHYS_TO_VM_PAGE(old_pte.pte_lo & PTE_RPGN);
+				refchg = atomic_readandclear_32(&m->md.mdpg_attrs);
+				if (refchg & PTE_CHG)
+					vm_page_dirty(m);
+				if (refchg & PTE_REF)
+					vm_page_aflag_set(m, PGA_REFERENCED);
+			}
 		}
 	}
 	rw_wunlock(&pvh_global_lock);

--MP_/K=rH3L.wMYzt5iz2sACaWQM--



Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?20200611155545.55526f7c>