From: Rafal Jaworowski <raj@FreeBSD.org>
Date: Mon, 19 Aug 2013 14:56:18 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r254531 - head/sys/arm/arm
Message-Id: <201308191456.r7JEuIwK098509@svn.freebsd.org>

Author: raj
Date: Mon Aug 19 14:56:17 2013
New Revision: 254531
URL: http://svnweb.freebsd.org/changeset/base/254531

Log:
  Simplify pv_entry removal for ARMv6/v7:

  - PGA_WRITEABLE indicates that there *might be* a writable mapping for
    the particular page, so to avoid frequent sweeping of the pv_entries
    whenever pmap_nuke_pv(), pmap_modify_pv(), etc. is called, it is
    sufficient to clear that flag when there are no managed mappings for
    that page anymore (note that only pmap_enter is authorized to set
    this flag).
  - Avoid redundant checking for the PVF_WIRED flag when that flag cannot
    be set anyway.
  - Clear PGA_WRITEABLE only once per vm_page instead of clearing it
    redundantly in a loop once there are no writeable mappings to that
    page anymore.
  Submitted by:	Zbigniew Bodek
  Reviewed by:	gber
  Sponsored by:	The FreeBSD Foundation, Semihalf

Modified:
  head/sys/arm/arm/pmap-v6.c

Modified: head/sys/arm/arm/pmap-v6.c
==============================================================================
--- head/sys/arm/arm/pmap-v6.c	Mon Aug 19 14:42:39 2013	(r254530)
+++ head/sys/arm/arm/pmap-v6.c	Mon Aug 19 14:56:17 2013	(r254531)
@@ -1072,39 +1072,22 @@ pmap_set_prot(pt_entry_t *ptep, vm_prot_
  * => caller should NOT adjust pmap's wire_count
  * => we return the removed pve
  */
-
-static void
-pmap_nuke_pv(struct vm_page *m, pmap_t pmap, struct pv_entry *pve)
-{
-
-	rw_assert(&pvh_global_lock, RA_WLOCKED);
-	PMAP_ASSERT_LOCKED(pmap);
-
-	TAILQ_REMOVE(&m->md.pv_list, pve, pv_list);
-
-	if (pve->pv_flags & PVF_WIRED)
-		--pmap->pm_stats.wired_count;
-
-	if (pve->pv_flags & PVF_WRITE) {
-		TAILQ_FOREACH(pve, &m->md.pv_list, pv_list)
-		    if (pve->pv_flags & PVF_WRITE)
-			    break;
-		if (!pve) {
-			vm_page_aflag_clear(m, PGA_WRITEABLE);
-		}
-	}
-}
-
 static struct pv_entry *
 pmap_remove_pv(struct vm_page *m, pmap_t pmap, vm_offset_t va)
 {
 	struct pv_entry *pve;
 
 	rw_assert(&pvh_global_lock, RA_WLOCKED);
+	PMAP_ASSERT_LOCKED(pmap);
 
 	pve = pmap_find_pv(m, pmap, va);	/* find corresponding pve */
-	if (pve != NULL)
-		pmap_nuke_pv(m, pmap, pve);
+	if (pve != NULL) {
+		TAILQ_REMOVE(&m->md.pv_list, pve, pv_list);
+		if (pve->pv_flags & PVF_WIRED)
+			--pmap->pm_stats.wired_count;
+	}
+	if (TAILQ_EMPTY(&m->md.pv_list))
+		vm_page_aflag_clear(m, PGA_WRITEABLE);
 
 	return(pve);				/* return removed pve */
 }

@@ -1143,14 +1126,6 @@ pmap_modify_pv(struct vm_page *m, pmap_t
 		else
 			--pmap->pm_stats.wired_count;
 	}
-	if ((oflags & PVF_WRITE) && !(flags & PVF_WRITE)) {
-		TAILQ_FOREACH(npv, &m->md.pv_list, pv_list) {
-			if (npv->pv_flags & PVF_WRITE)
-				break;
-		}
-		if (!npv)
-			vm_page_aflag_clear(m, PGA_WRITEABLE);
-	}
 
 	return (oflags);
 }

@@ -2062,7 +2037,9 @@ pmap_remove_pages(pmap_t pmap)
 			pv_entry_count--;
 			pmap->pm_stats.resident_count--;
 			pc->pc_map[field] |= bitmask;
-			pmap_nuke_pv(m, pmap, pv);
+			TAILQ_REMOVE(&m->md.pv_list, pv, pv_list);
+			if (TAILQ_EMPTY(&m->md.pv_list))
+				vm_page_aflag_clear(m, PGA_WRITEABLE);
 			pmap_free_l2_bucket(pmap, l2b, 1);
 		}
 	}

@@ -2458,7 +2435,9 @@ pmap_remove_all(vm_page_t m)
 		PTE_SYNC(ptep);
 		pmap_free_l2_bucket(pmap, l2b, 1);
 		pmap->pm_stats.resident_count--;
-		pmap_nuke_pv(m, pmap, pv);
+		TAILQ_REMOVE(&m->md.pv_list, pv, pv_list);
+		if (pv->pv_flags & PVF_WIRED)
+			pmap->pm_stats.wired_count--;
 		pmap_free_pv_entry(pmap, pv);
 		PMAP_UNLOCK(pmap);
 	}

@@ -2469,6 +2448,7 @@ pmap_remove_all(vm_page_t m)
 		else
 			cpu_tlb_flushD();
 	}
+	vm_page_aflag_clear(m, PGA_WRITEABLE);
 	rw_wunlock(&pvh_global_lock);
 }

@@ -3338,7 +3318,9 @@ pmap_pv_reclaim(pmap_t locked_pmap)
				    "va %x pte %x", va, *ptep));
 				*ptep = 0;
 				PTE_SYNC(ptep);
-				pmap_nuke_pv(m, pmap, pv);
+				TAILQ_REMOVE(&m->md.pv_list, pv, pv_list);
+				if (TAILQ_EMPTY(&m->md.pv_list))
+					vm_page_aflag_clear(m, PGA_WRITEABLE);
 				pc->pc_map[field] |= 1UL << bit;
 				freed++;
 			}
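For readers outside the pmap code, the following is a minimal userland sketch of the invariant this commit relies on. It uses hypothetical stand-in types (struct page, struct pv, a boolean writeable_hint) rather than the kernel's real vm_page, pv_entry, or vm_page_aflag_clear(); only the <sys/queue.h> TAILQ macros are the same. The point it illustrates: because PGA_WRITEABLE only means "there might be a writable mapping", it is enough to clear it when the last managed mapping of the page goes away, instead of rescanning the remaining pv entries for PVF_WRITE on every removal.

/*
 * Sketch of the r254531 idea with simplified stand-in types.
 * Build with: cc -Wall sketch.c -o sketch
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/queue.h>

#define PVF_WRITE	0x01		/* this particular mapping is writable */

struct pv {				/* stand-in for a pv_entry */
	int		flags;
	TAILQ_ENTRY(pv)	link;
};

struct page {				/* stand-in for a vm_page */
	TAILQ_HEAD(, pv) pv_list;
	bool		writeable_hint;	/* stand-in for PGA_WRITEABLE */
};

static void
page_init(struct page *pg)
{
	TAILQ_INIT(&pg->pv_list);
	pg->writeable_hint = false;
}

/* Analogue of pmap_enter(): the only place allowed to set the hint. */
static struct pv *
page_add_mapping(struct page *pg, int flags)
{
	struct pv *pve = malloc(sizeof(*pve));

	assert(pve != NULL);
	pve->flags = flags;
	TAILQ_INSERT_TAIL(&pg->pv_list, pve, link);
	if (flags & PVF_WRITE)
		pg->writeable_hint = true;
	return (pve);
}

/*
 * Analogue of the simplified pmap_remove_pv(): unlink the entry and
 * clear the hint only once the page has no managed mappings left.
 */
static void
page_remove_mapping(struct page *pg, struct pv *pve)
{
	TAILQ_REMOVE(&pg->pv_list, pve, link);
	free(pve);
	if (TAILQ_EMPTY(&pg->pv_list))
		pg->writeable_hint = false;
}

int
main(void)
{
	struct page pg;
	struct pv *ro, *rw;

	page_init(&pg);
	ro = page_add_mapping(&pg, 0);
	rw = page_add_mapping(&pg, PVF_WRITE);

	/*
	 * Removing the writable mapping leaves the hint set even though no
	 * writable mapping remains; that is acceptable because the flag only
	 * promises "might be writable".
	 */
	page_remove_mapping(&pg, rw);
	printf("after removing rw mapping: hint=%d\n", pg.writeable_hint);

	/* Only when the last mapping goes away is the hint cleared. */
	page_remove_mapping(&pg, ro);
	printf("after removing last mapping: hint=%d\n", pg.writeable_hint);
	return (0);
}

The trade-off mirrors the commit: the hint may stay set a little longer than strictly necessary, but the removal paths no longer have to walk the pv list looking for writable entries, which is exactly the sweeping the log message says pmap_nuke_pv() and pmap_modify_pv() used to do.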