From owner-freebsd-arm@FreeBSD.ORG  Sat Sep  8 17:52:01 2012
From: Alan Cox <alc@rice.edu>
To: Ian Lepore
Cc: "arm@freebsd.org" <arm@freebsd.org>, Alan Cox <alc@rice.edu>
Date: Sat, 08 Sep 2012 12:51:58 -0500
Message-ID: <504B85BE.3030101@rice.edu>
In-Reply-To: <1346723041.1140.602.camel@revolution.hippie.lan>
References: <502FD67A.7030003@rice.edu> <1345315508.27688.260.camel@revolution.hippie.lan> <503D12AE.1050705@rice.edu> <1346350374.1140.525.camel@revolution.hippie.lan> <5045351F.6060201@rice.edu> <1346723041.1140.602.camel@revolution.hippie.lan>
Subject: Re: arm pmap locking

On 09/03/2012 20:44, Ian Lepore wrote:
> On Mon, 2012-09-03 at 17:54 -0500, Alan Cox wrote:
>> On 08/30/2012 13:12, Ian Lepore wrote:
>>> On Tue, 2012-08-28 at 13:49 -0500, Alan Cox wrote:
>>>> Can you please retry with the attached patch?  For the time being, I
>>>> decided to address the above problem by simply enabling recursion on the
>>>> new pmap lock.  As I mentioned in my prior message, the lock recursion
>>>> in the arm pmap is a mistake.  However, I'd rather not change two things
>>>> at once, i.e., replace the page queues lock and fix the lock recursion.
>>>> I'll take a look at eliminating the lock recursion later this week.
>>>>
>>>> Thanks,
>>>> Alan
>>>>
>>> Sorry for the delay, I finally got around to trying this today, and it
>>> seems to be working well initially -- it boots to multiuser and the only
>>> difference in the dmesg.boot with and without the patch is the compile
>>> date, and the kernel image is 128 bytes smaller with the patch.  I've
>>> got DIAGNOSTIC and INVARIANTS enabled; I'll run with the patch in place
>>> and let you know if anything glitches.
>>>
>> Could you please test the attached patch?  This is a small step toward
>> disentangling the arm pmap locking.
>>
>> Alan
>>
> Applied the patch, it's running just fine.
>

Here is another patch.  This simplifies the kernel pmap locking in
pmap_enter_pv() and corrects some comments.

Thanks in advance,
Alan
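Background on the quoted discussion: "enabling recursion on the new pmap lock"
means initializing the per-pmap mutex with mtx(9)'s MTX_RECURSE flag, which
lets the thread that already owns the lock acquire it again instead of
panicking.  A minimal sketch, assuming the arm pmap lock is an ordinary
mtx(9) mutex kept in a pm_mtx field; the field and function names below are
illustrative, not taken from this thread or the patch:

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

/*
 * Hedged illustration only: initialize a pmap's mutex so that the
 * owning thread may re-acquire it.  Without MTX_RECURSE, a recursive
 * mtx_lock() on the same mutex panics, which is how the recursion in
 * the arm pmap would otherwise be caught.
 */
static void
example_pmap_lock_init(pmap_t pmap)
{

        mtx_init(&pmap->pm_mtx, "pmap", NULL, MTX_DEF | MTX_RECURSE);
}

Once the recursion itself is eliminated, as discussed above, the MTX_RECURSE
flag can be dropped again so that any remaining re-acquisition is caught as a
bug rather than silently tolerated.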
[Attachment: arm_pmap13.patch]

Index: arm/arm/pmap.c
===================================================================
--- arm/arm/pmap.c      (revision 240166)
+++ arm/arm/pmap.c      (working copy)
@@ -1588,11 +1588,11 @@ pmap_clearbit(struct vm_page *pg, u_int maskbits)
  */
 
 /*
- * pmap_enter_pv: enter a mapping onto a vm_page lst
+ * pmap_enter_pv: enter a mapping onto a vm_page's PV list
  *
  * => caller should hold the proper lock on pvh_global_lock
  * => caller should have pmap locked
- * => we will gain the lock on the vm_page and allocate the new pv_entry
+ * => we will (someday) gain the lock on the vm_page's PV list
  * => caller should adjust ptp's wire_count before calling
  * => caller should not adjust pmap's wire_count
  */
@@ -1600,33 +1600,26 @@ static void
 pmap_enter_pv(struct vm_page *pg, struct pv_entry *pve, pmap_t pm,
     vm_offset_t va, u_int flags)
 {
-        int km;
 
         rw_assert(&pvh_global_lock, RA_WLOCKED);
-
+        PMAP_ASSERT_LOCKED(pm);
         if (pg->md.pv_kva != 0) {
-                /* PMAP_ASSERT_LOCKED(pmap_kernel()); */
-                pve->pv_pmap = pmap_kernel();
+                pve->pv_pmap = kernel_pmap;
                 pve->pv_va = pg->md.pv_kva;
                 pve->pv_flags = PVF_WRITE | PVF_UNMAN;
+                if (pm != kernel_pmap)
+                        PMAP_LOCK(kernel_pmap);
+                TAILQ_INSERT_HEAD(&pg->md.pv_list, pve, pv_list);
+                TAILQ_INSERT_HEAD(&kernel_pmap->pm_pvlist, pve, pv_plist);
+                if (pm != kernel_pmap)
+                        PMAP_UNLOCK(kernel_pmap);
                 pg->md.pv_kva = 0;
-
-                if (!(km = PMAP_OWNED(pmap_kernel())))
-                        PMAP_LOCK(pmap_kernel());
-                TAILQ_INSERT_HEAD(&pg->md.pv_list, pve, pv_list);
-                TAILQ_INSERT_HEAD(&pve->pv_pmap->pm_pvlist, pve, pv_plist);
-                PMAP_UNLOCK(pmap_kernel());
                 if ((pve = pmap_get_pv_entry()) == NULL)
                         panic("pmap_kenter_pv: no pv entries");
-                if (km)
-                        PMAP_LOCK(pmap_kernel());
         }
-
-        PMAP_ASSERT_LOCKED(pm);
         pve->pv_pmap = pm;
         pve->pv_va = va;
         pve->pv_flags = flags;
-
         TAILQ_INSERT_HEAD(&pg->md.pv_list, pve, pv_list);
         TAILQ_INSERT_HEAD(&pm->pm_pvlist, pve, pv_plist);
         pg->md.pvh_attrs |= flags & (PVF_REF | PVF_MOD);
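For readers tracing the locking protocol: the assertions added above mean a
caller must write-lock pvh_global_lock and lock the target pmap before
calling pmap_enter_pv(); the function then takes kernel_pmap's lock itself
only when pm is a user pmap and the page still has an unmanaged kernel
mapping recorded in pv_kva, so no recursion occurs when pm == kernel_pmap.
A hedged caller-side sketch, assuming it lives in arm/arm/pmap.c where
pmap_enter_pv() is visible; the wrapper name is hypothetical:

/*
 * Hypothetical caller, shown only to illustrate the lock order the
 * patch asserts: pvh_global_lock write-locked, then the pmap lock.
 */
static void
example_enter_mapping(pmap_t pm, struct vm_page *pg, struct pv_entry *pve,
    vm_offset_t va, u_int flags)
{

        rw_wlock(&pvh_global_lock);
        PMAP_LOCK(pm);
        /* kernel_pmap is locked internally only when pm != kernel_pmap. */
        pmap_enter_pv(pg, pve, pm, va, flags);
        PMAP_UNLOCK(pm);
        rw_wunlock(&pvh_global_lock);
}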