Date: Thu, 31 Jul 2014 13:34:57 -0500
From: Alan Cox <alc@rice.edu>
To: Andreas Tobler <andreast@FreeBSD.org>, Alan Cox <alc@FreeBSD.org>, src-committers@FreeBSD.org, svn-src-all@FreeBSD.org, svn-src-head@FreeBSD.org
Subject: Re: svn commit: r269134 - head/sys/vm
Message-ID: <53DA8C51.6050103@rice.edu>
In-Reply-To: <53DA05A8.3020400@FreeBSD.org>
References: <201407261810.s6QIAIIj049439@svn.freebsd.org> <53D9406A.3060101@FreeBSD.org> <53D94BAF.8050102@rice.edu> <53D94D73.3050808@rice.edu> <53D95265.9010406@FreeBSD.org> <53D960E9.50905@rice.edu> <53D9630A.5050500@FreeBSD.org> <53D9C7A0.4010100@rice.edu> <53DA05A8.3020400@FreeBSD.org>
On 07/31/2014 04:00, Andreas Tobler wrote:
> On 31.07.14 06:35, Alan Cox wrote:
>> On 07/30/2014 16:26, Andreas Tobler wrote:
>>> On 30.07.14 23:17, Alan Cox wrote:
>>>> On 07/30/2014 15:15, Andreas Tobler wrote:
>>>>> On 30.07.14 21:54, Alan Cox wrote:
>>>>>> On 07/30/2014 14:46, Alan Cox wrote:
>>>>>>> On 07/30/2014 13:58, Andreas Tobler wrote:
>>>>>>>> Hi Alan,
>>>>>>>>
>>>>>>>> On 26.07.14 20:10, Alan Cox wrote:
>>>>>>>>> Author: alc
>>>>>>>>> Date: Sat Jul 26 18:10:18 2014
>>>>>>>>> New Revision: 269134
>>>>>>>>> URL: http://svnweb.freebsd.org/changeset/base/269134
>>>>>>>>>
>>>>>>>>> Log:
>>>>>>>>>   When unwiring a region of an address space, do not assume that the
>>>>>>>>>   underlying physical pages are mapped by the pmap.  If, for example,
>>>>>>>>>   the application has performed an mprotect(..., PROT_NONE) on any
>>>>>>>>>   part of the wired region, then those pages will no longer be mapped
>>>>>>>>>   by the pmap.  So, using the pmap to look up the wired pages in order
>>>>>>>>>   to unwire them doesn't always work, and when it doesn't work wired
>>>>>>>>>   pages are leaked.
>>>>>>>>>
>>>>>>>>>   To avoid the leak, introduce and use a new function vm_object_unwire()
>>>>>>>>>   that locates the wired pages by traversing the object and its backing
>>>>>>>>>   objects.
>>>>>>>>>
>>>>>>>>>   At the same time, switch from using pmap_change_wiring() to the
>>>>>>>>>   recently introduced function pmap_unwire() for unwiring the region's
>>>>>>>>>   mappings.  pmap_unwire() is faster, because it operates on a range
>>>>>>>>>   of virtual addresses rather than a single virtual page at a time.
>>>>>>>>>   Moreover, by operating on a range, it is superpage friendly.  It
>>>>>>>>>   doesn't waste time performing unnecessary demotions.
>>>>>>>>>
>>>>>>>>>   Reported by:	markj
>>>>>>>>>   Reviewed by:	kib
>>>>>>>>>   Tested by:	pho, jmg (arm)
>>>>>>>>>   Sponsored by:	EMC / Isilon Storage Division
>>>>>>>>
>>>>>>>> This commit brings my 32- and 64-bit PowerMacs into a panic.
>>>>>>>> Unfortunately I'm not able to give you a backtrace in the form of a
>>>>>>>> textdump nor of a core dump.
>>>>>>>>
>>>>>>>> The only thing I have is this picture:
>>>>>>>>
>>>>>>>> http://people.freebsd.org/~andreast/r269134_panic.jpg
>>>>>>>>
>>>>>>>> Exactly this revision gives a panic and breaks the textdump/coredump
>>>>>>>> facility.
>>>>>>>>
>>>>>>>> How can I help debugging?
>>>>>>>>
>>>>>>> It appears to me that moea64_pvo_enter() had a pre-existing bug that
>>>>>>> got tickled by this change.  Specifically, moea64_pvo_enter() doesn't
>>>>>>> set the PVO_WIRED flag when an unwired mapping already exists.  It
>>>>>>> just returns with the mapping still in an unwired state.
>>>>>>> Consequently, when pmap_unwire() finally runs, it doesn't find a
>>>>>>> wired mapping.
>>>>>>>
>>>>>>> Try this:
>>>>>>>
>>>>>>> Index: powerpc/aim/mmu_oea64.c
>>>>>>> ===================================================================
>>>>>>> --- powerpc/aim/mmu_oea64.c	(revision 269127)
>>>>>>> +++ powerpc/aim/mmu_oea64.c	(working copy)
>>>>>>> @@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
>>>>>>>  		if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
>>>>>>>  			if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) == pa &&
>>>>>>>  			    (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC | LPTE_PP))
>>>>>>> -			    == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
>>>>>>> +			    == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
>>>>>>> +			    ((pvo->pvo_vaddr ^ flags) & PVO_WIRED)) {
>>>>>>>  				if (!(pvo->pvo_pte.lpte.pte_hi & LPTE_VALID)) {
>>>>>>>  					/* Re-insert if spilled */
>>>>>>>  					i = MOEA64_PTE_INSERT(mmu, ptegidx,
>>>>>>
>>>>>> The new conditional test needs to be inverted.  Try this instead:
>>>>>>
>>>>>> Index: powerpc/aim/mmu_oea64.c
>>>>>> ===================================================================
>>>>>> --- powerpc/aim/mmu_oea64.c	(revision 269127)
>>>>>> +++ powerpc/aim/mmu_oea64.c	(working copy)
>>>>>> @@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
>>>>>>  		if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
>>>>>>  			if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) == pa &&
>>>>>>  			    (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC | LPTE_PP))
>>>>>> -			    == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
>>>>>> +			    == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
>>>>>> +			    ((pvo->pvo_vaddr ^ flags) & PVO_WIRED) == 0) {
>>>>>>  				if (!(pvo->pvo_pte.lpte.pte_hi & LPTE_VALID)) {
>>>>>>  					/* Re-insert if spilled */
>>>>>>  					i = MOEA64_PTE_INSERT(mmu, ptegidx,
>>>>>
>>>>> The panic stays, but the message is different:
>>>>>
>>>>> panic: moea64_pvo_to_pte: pvo 0x10147ea0 has invalid pte 0xb341180 in
>>>>> moea64_pteg_table but valid in pvo.
>>>>>
>>>> My attempted fix is doing something else wrong.  Do you have a stack
>>>> trace?
>>>
>>> Thanks to the iPhone:
>>>
>>> http://people.freebsd.org/~andreast/r269134-1_panic.jpg
>>
>> Ok, this patch should fix both the original panic and the new one.  They
>> are two distinct problems.
>
> Yep, thank you!
>
> Additionally I tried to adapt the 32-bit path and successfully booted
> with the patch below; OK?
>
> Again, thanks a lot!
> Andreas
>
> Index: powerpc/aim/mmu_oea.c
> ===================================================================
> --- powerpc/aim/mmu_oea.c	(revision 269326)
> +++ powerpc/aim/mmu_oea.c	(working copy)
> @@ -1950,7 +1950,8 @@
>  		if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
>  			if ((pvo->pvo_pte.pte.pte_lo & PTE_RPGN) == pa &&
>  			    (pvo->pvo_pte.pte.pte_lo & PTE_PP) ==
> -			    (pte_lo & PTE_PP)) {
> +			    (pte_lo & PTE_PP) &&
> +			    ((pvo->pvo_vaddr ^ flags) & PVO_WIRED) == 0) {
>  				mtx_unlock(&moea_table_mutex);
>  				return (0);
>  			}

Here is a better fix for the problem in moea64_pvo_enter().  The
original fix destroys and recreates the PTE in order to wire it.  This
new fix simply updates the PTE.  In the case of moea_pvo_enter(), there
is also no need to destroy and recreate the PTE.

moeaX_pvo_enter_fix.patch:

Index: powerpc/aim/mmu_oea.c
===================================================================
--- powerpc/aim/mmu_oea.c	(revision 269127)
+++ powerpc/aim/mmu_oea.c	(working copy)
@@ -1951,7 +1951,21 @@ moea_pvo_enter(pmap_t pm, uma_zone_t zone, struct
 		if ((pvo->pvo_pte.pte.pte_lo & PTE_RPGN) == pa &&
 		    (pvo->pvo_pte.pte.pte_lo & PTE_PP) ==
 		    (pte_lo & PTE_PP)) {
+			/*
+			 * The PTE is not changing.  Instead, this may
+			 * be a request to change the mapping's wired
+			 * attribute.
+			 */
 			mtx_unlock(&moea_table_mutex);
+			if ((flags & PVO_WIRED) != 0 &&
+			    (pvo->pvo_vaddr & PVO_WIRED) == 0) {
+				pvo->pvo_vaddr |= PVO_WIRED;
+				pm->pm_stats.wired_count++;
+			} else if ((flags & PVO_WIRED) == 0 &&
+			    (pvo->pvo_vaddr & PVO_WIRED) != 0) {
+				pvo->pvo_vaddr &= ~PVO_WIRED;
+				pm->pm_stats.wired_count--;
+			}
 			return (0);
 		}
 		moea_pvo_remove(pvo, -1);
Index: powerpc/aim/mmu_oea64.c
===================================================================
--- powerpc/aim/mmu_oea64.c	(revision 269339)
+++ powerpc/aim/mmu_oea64.c	(working copy)
@@ -2234,6 +2234,7 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
     uint64_t pte_lo, int flags)
 {
 	struct pvo_entry *pvo;
+	uintptr_t pt;
 	uint64_t vsid;
 	int first;
 	u_int ptegidx;
@@ -2276,7 +2277,28 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
 			if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) == pa &&
 			    (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC | LPTE_PP))
 			    == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
+				/*
+				 * The physical page and protection are not
+				 * changing.  Instead, this may be a request
+				 * to change the mapping's wired attribute.
+				 */
+				pt = -1;
+				if ((flags & PVO_WIRED) != 0 &&
+				    (pvo->pvo_vaddr & PVO_WIRED) == 0) {
+					pt = MOEA64_PVO_TO_PTE(mmu, pvo);
+					pvo->pvo_vaddr |= PVO_WIRED;
+					pvo->pvo_pte.lpte.pte_hi |= LPTE_WIRED;
+					pm->pm_stats.wired_count++;
+				} else if ((flags & PVO_WIRED) == 0 &&
+				    (pvo->pvo_vaddr & PVO_WIRED) != 0) {
+					pt = MOEA64_PVO_TO_PTE(mmu, pvo);
+					pvo->pvo_vaddr &= ~PVO_WIRED;
+					pvo->pvo_pte.lpte.pte_hi &= ~LPTE_WIRED;
+					pm->pm_stats.wired_count--;
+				}
 				if (!(pvo->pvo_pte.lpte.pte_hi & LPTE_VALID)) {
+					KASSERT(pt == -1,
+					    ("moea64_pvo_enter: valid pt"));
 					/* Re-insert if spilled */
 					i = MOEA64_PTE_INSERT(mmu, ptegidx,
 					    &pvo->pvo_pte.lpte);
@@ -2283,6 +2305,14 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
 					if (i >= 0)
 						PVO_PTEGIDX_SET(pvo, i);
 					moea64_pte_overflow--;
+				} else if (pt != -1) {
+					/*
+					 * The PTE's wired attribute is not a
+					 * hardware feature, so there is no
+					 * need to invalidate any TLB entries.
+					 */
+					MOEA64_PTE_CHANGE(mmu, pt,
+					    &pvo->pvo_pte.lpte, pvo->pvo_vpn);
 				}
 				return (0);
 			}