From owner-svn-src-all@FreeBSD.ORG Thu Sep  6 16:26:05 2012
Message-Id: <201209061626.q86GQ55u050012@svn.freebsd.org>
From: Alan Cox <alc@FreeBSD.org>
Date: Thu, 6 Sep 2012 16:26:05 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
        svn-src-head@freebsd.org
Subject: svn commit: r240166 - head/sys/arm/arm

Author: alc
Date: Thu Sep  6 16:26:04 2012
New Revision: 240166

URL: http://svn.freebsd.org/changeset/base/240166

Log:
  There is no need to release the pvh global lock around calls to
  pmap_get_pv_entry().  In fact, some callers already held it across
  these calls.  (In earlier versions, the same statements would apply
  to the page queues lock.)

  While I'm here, tidy up the style of a few nearby statements and
  revise some comments.
  Tested by:    Ian Lepore

Modified:
  head/sys/arm/arm/pmap.c

Modified: head/sys/arm/arm/pmap.c
==============================================================================
--- head/sys/arm/arm/pmap.c     Thu Sep  6 14:59:53 2012        (r240165)
+++ head/sys/arm/arm/pmap.c     Thu Sep  6 16:26:04 2012        (r240166)
@@ -1584,13 +1584,13 @@ pmap_clearbit(struct vm_page *pg, u_int
  * pmap_remove_pv: remove a mappiing from a vm_page list
  *
  * NOTE: pmap_enter_pv expects to lock the pvh itself
- *       pmap_remove_pv expects te caller to lock the pvh before calling
+ *       pmap_remove_pv expects the caller to lock the pvh before calling
  */
 
 /*
  * pmap_enter_pv: enter a mapping onto a vm_page lst
  *
- * => caller should hold the proper lock on pmap_main_lock
+ * => caller should hold the proper lock on pvh_global_lock
  * => caller should have pmap locked
  * => we will gain the lock on the vm_page and allocate the new pv_entry
  * => caller should adjust ptp's wire_count before calling
@@ -1600,12 +1600,11 @@ static void
 pmap_enter_pv(struct vm_page *pg, struct pv_entry *pve, pmap_t pm,
     vm_offset_t va, u_int flags)
 {
-        int km;
 
         rw_assert(&pvh_global_lock, RA_WLOCKED);
 
-        if (pg->md.pv_kva) {
+        if (pg->md.pv_kva != 0) {
                 /* PMAP_ASSERT_LOCKED(pmap_kernel()); */
                 pve->pv_pmap = pmap_kernel();
                 pve->pv_va = pg->md.pv_kva;
@@ -1617,10 +1616,8 @@ pmap_enter_pv(struct vm_page *pg, struct
                 TAILQ_INSERT_HEAD(&pg->md.pv_list, pve, pv_list);
                 TAILQ_INSERT_HEAD(&pve->pv_pmap->pm_pvlist, pve, pv_plist);
                 PMAP_UNLOCK(pmap_kernel());
-                rw_wunlock(&pvh_global_lock);
                 if ((pve = pmap_get_pv_entry()) == NULL)
-                        panic("pmap_kenter_internal: no pv entries");
-                rw_wlock(&pvh_global_lock);
+                        panic("pmap_kenter_pv: no pv entries");
                 if (km)
                         PMAP_LOCK(pmap_kernel());
         }
@@ -2824,22 +2821,20 @@ pmap_kenter_internal(vm_offset_t va, vm_
                 *pte |= L2_S_PROT_U;
         PTE_SYNC(pte);
 
-        /* kernel direct mappings can be shared, so use a pv_entry
-         * to ensure proper caching.
-         *
-         * The pvzone is used to delay the recording of kernel
-         * mappings until the VM is running.
-         *
-         * This expects the physical memory to have vm_page_array entry.
-         */
-        if (pvzone != NULL && (m = vm_phys_paddr_to_vm_page(pa))) {
+        /*
+         * A kernel mapping may not be the page's only mapping, so create a PV
+         * entry to ensure proper caching.
+         *
+         * The existence test for the pvzone is used to delay the recording of
+         * kernel mappings until the VM system is fully initialized.
+         *
+         * This expects the physical memory to have a vm_page_array entry.
+         */
+        if (pvzone != NULL && (m = vm_phys_paddr_to_vm_page(pa)) != NULL) {
                 rw_wlock(&pvh_global_lock);
-                if (!TAILQ_EMPTY(&m->md.pv_list) || m->md.pv_kva) {
-                        /* release vm_page lock for pv_entry UMA */
-                        rw_wunlock(&pvh_global_lock);
+                if (!TAILQ_EMPTY(&m->md.pv_list) || m->md.pv_kva != 0) {
                         if ((pve = pmap_get_pv_entry()) == NULL)
                                 panic("pmap_kenter_internal: no pv entries");
-                        rw_wlock(&pvh_global_lock);
                         PMAP_LOCK(pmap_kernel());
                         pmap_enter_pv(m, pve, pmap_kernel(), va,
                             PVF_WRITE | PVF_UNMAN);
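
The heart of the change, reduced to a userland sketch: it is safe to call a
non-sleeping allocator with the lock held, so the unlock/relock dance around
pmap_get_pv_entry() can simply go away.  The names below (pvh_lock, get_pv,
the pv_entry layout) are illustrative stand-ins, not the kernel API; in the
kernel the allocation comes from the pv_entry UMA zone, as the removed
comment noted.

/*
 * Illustrative userland sketch only -- not FreeBSD kernel code.
 * pvh_lock stands in for the kernel's pvh global rwlock and get_pv()
 * for pmap_get_pv_entry(); both names are hypothetical.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct pv_entry {
        unsigned long pv_va;            /* mapped virtual address */
};

static pthread_rwlock_t pvh_lock = PTHREAD_RWLOCK_INITIALIZER;

/*
 * Stand-in for pmap_get_pv_entry(): an allocation that never sleeps
 * waiting on pvh_lock, so it is safe to call with the lock held.
 */
static struct pv_entry *
get_pv(void)
{

        return (malloc(sizeof(struct pv_entry)));
}

int
main(void)
{
        struct pv_entry *pve;

        pthread_rwlock_wrlock(&pvh_lock);
        /*
         * Old pattern (removed by the commit): unlock, allocate, relock.
         * Any state examined before the unlock may be stale afterward,
         * and every caller pays two extra lock operations:
         *
         *      pthread_rwlock_unlock(&pvh_lock);
         *      pve = get_pv();
         *      pthread_rwlock_wrlock(&pvh_lock);
         *
         * New pattern: allocate while the lock is held.
         */
        if ((pve = get_pv()) == NULL) {
                pthread_rwlock_unlock(&pvh_lock);
                fprintf(stderr, "no pv entries\n");
                return (1);
        }
        pve->pv_va = 0xc0001000UL;
        printf("pv entry for va 0x%lx allocated under the lock\n",
            pve->pv_va);
        pthread_rwlock_unlock(&pvh_lock);
        free(pve);
        return (0);
}

Build and run with, e.g., cc -pthread -o pv_sketch pv_sketch.c && ./pv_sketch
(the file name is arbitrary).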