From owner-svn-src-all@freebsd.org  Sat Jul 13 03:02:13 2019
Message-Id: <201907130302.x6D32CEE007467@repo.freebsd.org>
From: Justin Hibbits
Date: Sat, 13 Jul 2019 03:02:12 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r349963 - head/sys/powerpc/aim
X-SVN-Group: head
X-SVN-Commit-Author: jhibbits
X-SVN-Commit-Paths: head/sys/powerpc/aim
X-SVN-Commit-Revision: 349963
X-SVN-Commit-Repository: base

Author: jhibbits
Date: Sat Jul 13 03:02:11 2019
New Revision: 349963
URL: https://svnweb.freebsd.org/changeset/base/349963

Log:
  powerpc64/pmap: Reduce scope of PV_LOCK in remove path

  Summary:
  Since the 'page pv' lock is one of the most highly contended locks, we
  need to try to do as much work outside of the lock as we can.  The
  moea64_pvo_remove_from_page() path is low-hanging fruit, where we can
  do some heavy work (PHYS_TO_VM_PAGE()) outside of the lock if needed.
  In one path, moea64_remove_all(), the PV lock is already held and
  can't be swizzled, so we provide two ways to perform the locked
  operation: one that can call PHYS_TO_VM_PAGE outside the lock, and one
  that is called with the lock already held.

  Reviewed By:	luporl
  Differential Revision:	https://reviews.freebsd.org/D20694
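To make the locking pattern described in the summary concrete, here is a
minimal, self-contained userspace sketch of the same idea; it is not the
kernel code itself.  The expensive lookup (standing in for
PHYS_TO_VM_PAGE()) is resolved before the contended lock is taken, and a
separate "locked" helper serves callers that already hold the lock.  All
names below (fake_page, fake_entry, lookup_page, pv_lock,
remove_from_page*) are hypothetical stand-ins, not FreeBSD interfaces.

/*
 * Sketch only: resolve the page before taking the contended lock so the
 * critical section stays short, and expose a *_locked variant for callers
 * that already hold the lock (as moea64_remove_all() does in the diff).
 */
#include <pthread.h>
#include <stdio.h>

struct fake_page  { int mappings; };
struct fake_entry { unsigned long pa; int managed; };

static pthread_mutex_t pv_lock = PTHREAD_MUTEX_INITIALIZER;
static struct fake_page pages[4];

/* Stand-in for PHYS_TO_VM_PAGE(): potentially heavy, needs no PV lock. */
static struct fake_page *
lookup_page(unsigned long pa)
{
        return (&pages[pa % 4]);
}

/* Core work; the caller must already hold pv_lock. */
static void
remove_from_page_locked(struct fake_entry *e, struct fake_page *pg)
{
        if (e->managed && pg != NULL)
                pg->mappings--;         /* short critical section only */
}

/* Wrapper for paths that do not yet hold the lock. */
static void
remove_from_page(struct fake_entry *e)
{
        struct fake_page *pg = NULL;

        if (e->managed)
                pg = lookup_page(e->pa);        /* heavy work, lock not held */

        pthread_mutex_lock(&pv_lock);
        remove_from_page_locked(e, pg);
        pthread_mutex_unlock(&pv_lock);
}

int
main(void)
{
        struct fake_entry e = { .pa = 2, .managed = 1 };

        pages[2].mappings = 1;
        remove_from_page(&e);
        printf("mappings now %d\n", pages[2].mappings);
        return (0);
}

The split mirrors the diff below: the lock hold time shrinks to the list
manipulation itself, while a caller such as moea64_remove_all(), which
already holds the PV lock, can use the locked variant directly and skip the
extra acquire/release.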
Modified:
  head/sys/powerpc/aim/mmu_oea64.c

Modified: head/sys/powerpc/aim/mmu_oea64.c
==============================================================================
--- head/sys/powerpc/aim/mmu_oea64.c	Sat Jul 13 00:51:11 2019	(r349962)
+++ head/sys/powerpc/aim/mmu_oea64.c	Sat Jul 13 03:02:11 2019	(r349963)
@@ -234,6 +234,8 @@ static int moea64_pvo_enter(mmu_t mmu, struct pvo_entr
                     struct pvo_head *pvo_head);
 static void    moea64_pvo_remove_from_pmap(mmu_t mmu, struct pvo_entry *pvo);
 static void    moea64_pvo_remove_from_page(mmu_t mmu, struct pvo_entry *pvo);
+static void    moea64_pvo_remove_from_page_locked(mmu_t mmu,
+                    struct pvo_entry *pvo);
 static struct pvo_entry *moea64_pvo_find_va(pmap_t, vm_offset_t);
 
 /*
@@ -1454,9 +1456,7 @@ moea64_enter(mmu_t mmu, pmap_t pmap, vm_offset_t va, v
 
         /* Free any dead pages */
         if (oldpvo != NULL) {
-                PV_LOCK(oldpvo->pvo_pte.pa & LPTE_RPGN);
                 moea64_pvo_remove_from_page(mmu, oldpvo);
-                PV_UNLOCK(oldpvo->pvo_pte.pa & LPTE_RPGN);
                 free_pvo_entry(oldpvo);
         }
 
@@ -1877,9 +1877,7 @@ moea64_kenter_attr(mmu_t mmu, vm_offset_t va, vm_paddr
 
         /* Free any dead pages */
         if (oldpvo != NULL) {
-                PV_LOCK(oldpvo->pvo_pte.pa & LPTE_RPGN);
                 moea64_pvo_remove_from_page(mmu, oldpvo);
-                PV_UNLOCK(oldpvo->pvo_pte.pa & LPTE_RPGN);
                 free_pvo_entry(oldpvo);
         }
 
@@ -2386,9 +2384,7 @@ moea64_remove_pages(mmu_t mmu, pmap_t pm)
         PMAP_UNLOCK(pm);
 
         RB_FOREACH_SAFE(pvo, pvo_tree, &tofree, tpvo) {
-                PV_LOCK(pvo->pvo_pte.pa & LPTE_RPGN);
                 moea64_pvo_remove_from_page(mmu, pvo);
-                PV_UNLOCK(pvo->pvo_pte.pa & LPTE_RPGN);
                 RB_REMOVE(pvo_tree, &tofree, pvo);
                 free_pvo_entry(pvo);
         }
@@ -2429,9 +2425,7 @@ moea64_remove(mmu_t mmu, pmap_t pm, vm_offset_t sva, v
         PMAP_UNLOCK(pm);
 
         RB_FOREACH_SAFE(pvo, pvo_tree, &tofree, tpvo) {
-                PV_LOCK(pvo->pvo_pte.pa & LPTE_RPGN);
                 moea64_pvo_remove_from_page(mmu, pvo);
-                PV_UNLOCK(pvo->pvo_pte.pa & LPTE_RPGN);
                 RB_REMOVE(pvo_tree, &tofree, pvo);
                 free_pvo_entry(pvo);
         }
@@ -2458,7 +2452,7 @@ moea64_remove_all(mmu_t mmu, vm_page_t m)
                 wasdead = (pvo->pvo_vaddr & PVO_DEAD);
                 if (!wasdead)
                         moea64_pvo_remove_from_pmap(mmu, pvo);
-                moea64_pvo_remove_from_page(mmu, pvo);
+                moea64_pvo_remove_from_page_locked(mmu, pvo);
                 if (!wasdead)
                         LIST_INSERT_HEAD(&freequeue, pvo, pvo_vlink);
                 PMAP_UNLOCK(pmap);
@@ -2631,10 +2625,10 @@ moea64_pvo_remove_from_pmap(mmu_t mmu, struct pvo_entr
         }
 }
 
-static void
-moea64_pvo_remove_from_page(mmu_t mmu, struct pvo_entry *pvo)
+static inline void
+_moea64_pvo_remove_from_page_locked(mmu_t mmu, struct pvo_entry *pvo,
+    vm_page_t m)
 {
-        struct  vm_page *pg;
 
         KASSERT(pvo->pvo_vaddr & PVO_DEAD, ("Trying to delink live page"));
 
@@ -2648,18 +2642,40 @@ moea64_pvo_remove_from_page(mmu_t mmu, struct pvo_entr
          */
         PV_LOCKASSERT(pvo->pvo_pte.pa & LPTE_RPGN);
         if (pvo->pvo_vaddr & PVO_MANAGED) {
-                pg = PHYS_TO_VM_PAGE(pvo->pvo_pte.pa & LPTE_RPGN);
-
-                if (pg != NULL) {
+                if (m != NULL) {
                         LIST_REMOVE(pvo, pvo_vlink);
-                        if (LIST_EMPTY(vm_page_to_pvoh(pg)))
-                                vm_page_aflag_clear(pg,
+                        if (LIST_EMPTY(vm_page_to_pvoh(m)))
+                                vm_page_aflag_clear(m,
                                     PGA_WRITEABLE | PGA_EXECUTABLE);
                 }
         }
 
         moea64_pvo_entries--;
         moea64_pvo_remove_calls++;
+}
+
+static void
+moea64_pvo_remove_from_page_locked(mmu_t mmu, struct pvo_entry *pvo)
+{
+        vm_page_t pg = NULL;
+
+        if (pvo->pvo_vaddr & PVO_MANAGED)
+                pg = PHYS_TO_VM_PAGE(pvo->pvo_pte.pa & LPTE_RPGN);
+
+        _moea64_pvo_remove_from_page_locked(mmu, pvo, pg);
+}
+
+static void
+moea64_pvo_remove_from_page(mmu_t mmu, struct pvo_entry *pvo)
+{
+        vm_page_t pg = NULL;
+
+        if (pvo->pvo_vaddr & PVO_MANAGED)
+                pg = PHYS_TO_VM_PAGE(pvo->pvo_pte.pa & LPTE_RPGN);
+
+        PV_LOCK(pvo->pvo_pte.pa & LPTE_RPGN);
+        _moea64_pvo_remove_from_page_locked(mmu, pvo, pg);
+        PV_UNLOCK(pvo->pvo_pte.pa & LPTE_RPGN);
 }
 
 static struct pvo_entry *