From owner-freebsd-ppc@FreeBSD.ORG Sat Jul  7 17:39:24 2012
Message-ID: <4FF87446.2090903@rice.edu>
Date: Sat, 07 Jul 2012 12:39:18 -0500
From: Alan Cox <alc@rice.edu>
User-Agent: Mozilla/5.0 (X11; FreeBSD i386; rv:8.0) Gecko/20111113 Thunderbird/8.0
MIME-Version: 1.0
To: Justin Hibbits
Cc: alc@freebsd.org, freebsd-ppc@freebsd.org
References: <20120707102004.3e874201@narn.knownspace>
In-Reply-To: <20120707102004.3e874201@narn.knownspace>
Content-Type: multipart/mixed; boundary="------------070807020805030306060404"
Subject: Re: Panic with latest pmap lock changes.
X-BeenThere: freebsd-ppc@freebsd.org
List-Id: Porting FreeBSD to the PowerPC

This is a multi-part message in MIME format.
--------------070807020805030306060404
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

On 07/07/2012 09:20, Justin Hibbits wrote:
> Looks like I spoke too soon about the pmap lock changes working on my
> G4.  After about 24 hours of uptime, it panicked with the following:
>
> _rw_wlock_hard: recursing but non-recursive rw pmap pv global
> @ /home/chmeee/freebsd/src/sys/powerpc/aim/mmu_oea.c:2301
>
> I think the attached patch should fix it (Untested, except for
> compiling).
>

Ugh.  Sorry.  The attached patch eliminates the lock recursion.

While I was doing that, I noticed that the pmap_ts_referenced()
implementations on powerpc have the wrong return type.  Oddly, the
comments in mmu_if.h have the return type correct, but the code two or
three lines later has it wrong.
Alan

--------------070807020805030306060404
Content-Type: text/plain; name="powerpc_pmap.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="powerpc_pmap.patch"

Index: powerpc/powerpc/mmu_if.m
===================================================================
--- powerpc/powerpc/mmu_if.m	(revision 237346)
+++ powerpc/powerpc/mmu_if.m	(working copy)
@@ -387,7 +387,7 @@ METHOD boolean_t is_referenced {
  *
  * @retval int		count of referenced bits
  */
-METHOD boolean_t ts_referenced {
+METHOD int ts_referenced {
 	mmu_t		_mmu;
 	vm_page_t	_pg;
 };
Index: powerpc/booke/pmap.c
===================================================================
--- powerpc/booke/pmap.c	(revision 237346)
+++ powerpc/booke/pmap.c	(working copy)
@@ -286,7 +286,7 @@ static void		mmu_booke_init(mmu_t);
 static boolean_t	mmu_booke_is_modified(mmu_t, vm_page_t);
 static boolean_t	mmu_booke_is_prefaultable(mmu_t, pmap_t, vm_offset_t);
 static boolean_t	mmu_booke_is_referenced(mmu_t, vm_page_t);
-static boolean_t	mmu_booke_ts_referenced(mmu_t, vm_page_t);
+static int		mmu_booke_ts_referenced(mmu_t, vm_page_t);
 static vm_offset_t	mmu_booke_map(mmu_t, vm_offset_t *, vm_paddr_t,
 			    vm_paddr_t, int);
 static int		mmu_booke_mincore(mmu_t, pmap_t, vm_offset_t,
Index: powerpc/aim/mmu_oea.c
===================================================================
--- powerpc/aim/mmu_oea.c	(revision 238159)
+++ powerpc/aim/mmu_oea.c	(working copy)
@@ -300,7 +300,7 @@ void moea_init(mmu_t);
 boolean_t moea_is_modified(mmu_t, vm_page_t);
 boolean_t moea_is_prefaultable(mmu_t, pmap_t, vm_offset_t);
 boolean_t moea_is_referenced(mmu_t, vm_page_t);
-boolean_t moea_ts_referenced(mmu_t, vm_page_t);
+int moea_ts_referenced(mmu_t, vm_page_t);
 vm_offset_t moea_map(mmu_t, vm_offset_t *, vm_paddr_t, vm_paddr_t, int);
 boolean_t moea_page_exists_quick(mmu_t, pmap_t, vm_page_t);
 int moea_page_wired_mappings(mmu_t, vm_page_t);
@@ -1269,15 +1269,20 @@ moea_init(mmu_t mmu)
 boolean_t
 moea_is_referenced(mmu_t mmu, vm_page_t m)
 {
+	boolean_t rv;
 
 	KASSERT((m->oflags & VPO_UNMANAGED) == 0,
 	    ("moea_is_referenced: page %p is not managed", m));
-	return (moea_query_bit(m, PTE_REF));
+	rw_wlock(&pvh_global_lock);
+	rv = moea_query_bit(m, PTE_REF);
+	rw_wunlock(&pvh_global_lock);
+	return (rv);
 }
 
 boolean_t
 moea_is_modified(mmu_t mmu, vm_page_t m)
 {
+	boolean_t rv;
 
 	KASSERT((m->oflags & VPO_UNMANAGED) == 0,
 	    ("moea_is_modified: page %p is not managed", m));
@@ -1291,7 +1296,10 @@ moea_is_modified(mmu_t mmu, vm_page_t m)
 	if ((m->oflags & VPO_BUSY) == 0 &&
 	    (m->aflags & PGA_WRITEABLE) == 0)
 		return (FALSE);
-	return (moea_query_bit(m, PTE_CHG));
+	rw_wlock(&pvh_global_lock);
+	rv = moea_query_bit(m, PTE_CHG);
+	rw_wunlock(&pvh_global_lock);
+	return (rv);
 }
 
 boolean_t
@@ -1313,7 +1321,9 @@ moea_clear_reference(mmu_t mmu, vm_page_t m)
 
 	KASSERT((m->oflags & VPO_UNMANAGED) == 0,
 	    ("moea_clear_reference: page %p is not managed", m));
+	rw_wlock(&pvh_global_lock);
 	moea_clear_bit(m, PTE_REF);
+	rw_wunlock(&pvh_global_lock);
 }
 
 void
@@ -1333,7 +1343,9 @@ moea_clear_modify(mmu_t mmu, vm_page_t m)
 	 */
 	if ((m->aflags & PGA_WRITEABLE) == 0)
 		return;
+	rw_wlock(&pvh_global_lock);
 	moea_clear_bit(m, PTE_CHG);
+	rw_wunlock(&pvh_global_lock);
 }
 
 /*
@@ -1400,13 +1412,17 @@ moea_remove_write(mmu_t mmu, vm_page_t m)
  * should be tested and standardized at some point in the future for
  * optimal aging of shared pages.
  */
-boolean_t
+int
 moea_ts_referenced(mmu_t mmu, vm_page_t m)
 {
+	int count;
 
 	KASSERT((m->oflags & VPO_UNMANAGED) == 0,
 	    ("moea_ts_referenced: page %p is not managed", m));
-	return (moea_clear_bit(m, PTE_REF));
+	rw_wlock(&pvh_global_lock);
+	count = moea_clear_bit(m, PTE_REF);
+	rw_wunlock(&pvh_global_lock);
+	return (count);
 }
 
 /*
@@ -1816,7 +1832,7 @@ moea_remove_all(mmu_t mmu, vm_page_t m)
 		moea_pvo_remove(pvo, -1);
 		PMAP_UNLOCK(pmap);
 	}
-	if ((m->aflags & PGA_WRITEABLE) && moea_is_modified(mmu, m)) {
+	if ((m->aflags & PGA_WRITEABLE) && moea_query_bit(m, PTE_CHG)) {
 		moea_attr_clear(m, PTE_CHG);
 		vm_page_dirty(m);
 	}
@@ -2293,10 +2309,10 @@ moea_query_bit(vm_page_t m, int ptebit)
 	struct pvo_entry *pvo;
 	struct pte *pt;
 
+	rw_assert(&pvh_global_lock, RA_WLOCKED);
 	if (moea_attr_fetch(m) & ptebit)
 		return (TRUE);
 
-	rw_wlock(&pvh_global_lock);
 	LIST_FOREACH(pvo, vm_page_to_pvoh(m), pvo_vlink) {
 
 		/*
@@ -2305,7 +2321,6 @@ moea_query_bit(vm_page_t m, int ptebit)
 		 */
 		if (pvo->pvo_pte.pte.pte_lo & ptebit) {
 			moea_attr_save(m, ptebit);
-			rw_wunlock(&pvh_global_lock);
 			return (TRUE);
 		}
 	}
@@ -2329,13 +2344,11 @@ moea_query_bit(vm_page_t m, int ptebit)
 			mtx_unlock(&moea_table_mutex);
 			if (pvo->pvo_pte.pte.pte_lo & ptebit) {
 				moea_attr_save(m, ptebit);
-				rw_wunlock(&pvh_global_lock);
 				return (TRUE);
 			}
 		}
 	}
 
-	rw_wunlock(&pvh_global_lock);
 	return (FALSE);
 }
 
@@ -2346,7 +2359,7 @@ moea_clear_bit(vm_page_t m, int ptebit)
 {
 	struct pvo_entry *pvo;
 	struct pte *pt;
-	rw_wlock(&pvh_global_lock);
+	rw_assert(&pvh_global_lock, RA_WLOCKED);
 
 	/*
 	 * Clear the cached value.
@@ -2380,7 +2393,6 @@ moea_clear_bit(vm_page_t m, int ptebit)
 		pvo->pvo_pte.pte.pte_lo &= ~ptebit;
 	}
 
-	rw_wunlock(&pvh_global_lock);
 	return (count);
 }
 
Index: powerpc/aim/mmu_oea64.c
===================================================================
--- powerpc/aim/mmu_oea64.c	(revision 237346)
+++ powerpc/aim/mmu_oea64.c	(working copy)
@@ -305,7 +305,7 @@ void moea64_init(mmu_t);
 boolean_t moea64_is_modified(mmu_t, vm_page_t);
 boolean_t moea64_is_prefaultable(mmu_t, pmap_t, vm_offset_t);
 boolean_t moea64_is_referenced(mmu_t, vm_page_t);
-boolean_t moea64_ts_referenced(mmu_t, vm_page_t);
+int moea64_ts_referenced(mmu_t, vm_page_t);
 vm_offset_t moea64_map(mmu_t, vm_offset_t *, vm_paddr_t, vm_paddr_t, int);
 boolean_t moea64_page_exists_quick(mmu_t, pmap_t, vm_page_t);
 int moea64_page_wired_mappings(mmu_t, vm_page_t);
@@ -1570,7 +1570,7 @@ moea64_remove_write(mmu_t mmu, vm_page_t m)
 * should be tested and standardized at some point in the future for
 * optimal aging of shared pages.
 */
-boolean_t
+int
 moea64_ts_referenced(mmu_t mmu, vm_page_t m)
 {

--------------070807020805030306060404--