From owner-svn-src-head@freebsd.org Sun Jun 16 16:45:02 2019
Return-Path: <owner-svn-src-head@freebsd.org>
Delivered-To: svn-src-head@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id 9542515C3D91;
 Sun, 16 Jun 2019 16:45:02 +0000 (UTC) (envelope-from alc@FreeBSD.org)
Received: from mxrelay.nyi.freebsd.org (mxrelay.nyi.freebsd.org
 [IPv6:2610:1c1:1:606c::19:3]) (using TLSv1.3 with cipher
 TLS_AES_256_GCM_SHA384 (256/256 bits) server-signature RSA-PSS (4096 bits)
 client-signature RSA-PSS (4096 bits) client-digest SHA256) (Client CN
 "mxrelay.nyi.freebsd.org", Issuer "Let's Encrypt Authority X3" (verified OK))
 by mx1.freebsd.org (Postfix) with ESMTPS id 346EC87FC2;
 Sun, 16 Jun 2019 16:45:02 +0000 (UTC) (envelope-from alc@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate) by mxrelay.nyi.freebsd.org (Postfix)
 with ESMTPS id 0042519309; Sun, 16 Jun 2019 16:45:01 +0000 (UTC)
 (envelope-from alc@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37]) by repo.freebsd.org
 (8.15.2/8.15.2) with ESMTP id x5GGj1iu081586; Sun, 16 Jun 2019 16:45:01 GMT
 (envelope-from alc@FreeBSD.org)
Received: (from alc@localhost) by repo.freebsd.org (8.15.2/8.15.2/Submit)
 id x5GGj12A081585; Sun, 16 Jun 2019 16:45:01 GMT
 (envelope-from alc@FreeBSD.org)
Message-Id: <201906161645.x5GGj12A081585@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: alc set sender to alc@FreeBSD.org
 using -f
From: Alan Cox <alc@FreeBSD.org>
Date: Sun, 16 Jun 2019 16:45:01 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
 svn-src-head@freebsd.org
Subject: svn commit: r349117 - head/sys/arm64/arm64
X-SVN-Group: head
X-SVN-Commit-Author: alc
X-SVN-Commit-Paths: head/sys/arm64/arm64
X-SVN-Commit-Revision: 349117
X-SVN-Commit-Repository: base
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Rspamd-Queue-Id: 346EC87FC2
X-Spamd-Bar: --
Authentication-Results: mx1.freebsd.org
X-Spamd-Result: default: False [-2.98 / 15.00]; local_wl_from(0.00)[FreeBSD.org];
 NEURAL_HAM_MEDIUM(-1.00)[-0.999,0]; NEURAL_HAM_LONG(-1.00)[-1.000,0];
 NEURAL_HAM_SHORT(-0.98)[-0.981,0]; ASN(0.00)[asn:11403,
 ipnet:2610:1c1:1::/48, country:US]
X-BeenThere: svn-src-head@freebsd.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: SVN commit messages for the src tree for head/-current
X-List-Received-Date: Sun, 16 Jun 2019 16:45:02 -0000

Author: alc
Date: Sun Jun 16 16:45:01 2019
New Revision: 349117
URL: https://svnweb.freebsd.org/changeset/base/349117

Log:
  Three enhancements to arm64's pmap_protect():

  Implement protection changes on superpage mappings.  Previously, a
  superpage mapping was unconditionally demoted by pmap_protect(), even
  if the protection change applied to the entire superpage mapping.

  Precompute the bit mask describing the protection changes rather than
  recomputing it for every page table entry that is changed.

  Skip page table entries that already have the requested protection
  changes in place.
  Reviewed by:	andrew, kib
  MFC after:	10 days
  Differential Revision:	https://reviews.freebsd.org/D20657

Modified:
  head/sys/arm64/arm64/pmap.c

Modified: head/sys/arm64/arm64/pmap.c
==============================================================================
--- head/sys/arm64/arm64/pmap.c	Sun Jun 16 16:02:50 2019	(r349116)
+++ head/sys/arm64/arm64/pmap.c	Sun Jun 16 16:45:01 2019	(r349117)
@@ -2729,6 +2729,51 @@ retry:
 }
 
 /*
+ * pmap_protect_l2: do the things to protect a 2MB page in a pmap
+ */
+static void
+pmap_protect_l2(pmap_t pmap, pt_entry_t *l2, vm_offset_t sva, pt_entry_t nbits)
+{
+	pd_entry_t old_l2;
+	vm_page_t m, mt;
+
+	PMAP_LOCK_ASSERT(pmap, MA_OWNED);
+	KASSERT((sva & L2_OFFSET) == 0,
+	    ("pmap_protect_l2: sva is not 2mpage aligned"));
+	old_l2 = pmap_load(l2);
+	KASSERT((old_l2 & ATTR_DESCR_MASK) == L2_BLOCK,
+	    ("pmap_protect_l2: L2e %lx is not a block mapping", old_l2));
+
+	/*
+	 * Return if the L2 entry already has the desired access restrictions
+	 * in place.
+	 */
+	if ((old_l2 | nbits) == old_l2)
+		return;
+
+	/*
+	 * When a dirty read/write superpage mapping is write protected,
+	 * update the dirty field of each of the superpage's constituent 4KB
+	 * pages.
+	 */
+	if ((nbits & ATTR_AP(ATTR_AP_RO)) != 0 &&
+	    (old_l2 & ATTR_SW_MANAGED) != 0 &&
+	    pmap_page_dirty(old_l2)) {
+		m = PHYS_TO_VM_PAGE(old_l2 & ~ATTR_MASK);
+		for (mt = m; mt < &m[L2_SIZE / PAGE_SIZE]; mt++)
+			vm_page_dirty(mt);
+	}
+
+	pmap_set(l2, nbits);
+
+	/*
+	 * Since a promotion must break the 4KB page mappings before making
+	 * the 2MB page mapping, a pmap_invalidate_page() suffices.
+	 */
+	pmap_invalidate_page(pmap, sva);
+}
+
+/*
  * Set the physical protection on the
  * specified range of this map as requested.
  */
@@ -2745,8 +2790,12 @@ pmap_protect(pmap_t pmap, vm_offset_t sva, vm_offset_t
 		return;
 	}
 
-	if ((prot & (VM_PROT_WRITE | VM_PROT_EXECUTE)) ==
-	    (VM_PROT_WRITE | VM_PROT_EXECUTE))
+	nbits = 0;
+	if ((prot & VM_PROT_WRITE) == 0)
+		nbits |= ATTR_AP(ATTR_AP_RO);
+	if ((prot & VM_PROT_EXECUTE) == 0)
+		nbits |= ATTR_XN;
+	if (nbits == 0)
 		return;
 
 	PMAP_LOCK(pmap);
@@ -2777,9 +2826,11 @@ pmap_protect(pmap_t pmap, vm_offset_t sva, vm_offset_t
 			continue;
 
 		if ((pmap_load(l2) & ATTR_DESCR_MASK) == L2_BLOCK) {
-			l3p = pmap_demote_l2(pmap, l2, sva);
-			if (l3p == NULL)
+			if (sva + L2_SIZE == va_next && eva >= va_next) {
+				pmap_protect_l2(pmap, l2, sva, nbits);
 				continue;
+			} else if (pmap_demote_l2(pmap, l2, sva) == NULL)
+				continue;
 		}
 		KASSERT((pmap_load(l2) & ATTR_DESCR_MASK) == L2_TABLE,
 		    ("pmap_protect: Invalid L2 entry after demotion"));
@@ -2790,8 +2841,16 @@ pmap_protect(pmap_t pmap, vm_offset_t sva, vm_offset_t
 		va = va_next;
 		for (l3p = pmap_l2_to_l3(l2, sva); sva != va_next; l3p++,
 		    sva += L3_SIZE) {
+			/*
+			 * Go to the next L3 entry if the current one is
+			 * invalid or already has the desired access
+			 * restrictions in place.  (The latter case occurs
+			 * frequently.  For example, in a "buildworld"
+			 * workload, almost 1 out of 4 L3 entries already
+			 * have the desired restrictions.)
+			 */
 			l3 = pmap_load(l3p);
-			if (!pmap_l3_valid(l3)) {
+			if (!pmap_l3_valid(l3) || (l3 | nbits) == l3) {
 				if (va != va_next) {
 					pmap_invalidate_range(pmap, va, sva);
 					va = va_next;
@@ -2801,17 +2860,14 @@ pmap_protect(pmap_t pmap, vm_offset_t sva, vm_offset_t
 			if (va == va_next)
 				va = sva;
 
-			nbits = 0;
-			if ((prot & VM_PROT_WRITE) == 0) {
-				if ((l3 & ATTR_SW_MANAGED) &&
-				    pmap_page_dirty(l3)) {
-					vm_page_dirty(PHYS_TO_VM_PAGE(l3 &
-					    ~ATTR_MASK));
-				}
-				nbits |= ATTR_AP(ATTR_AP_RO);
-			}
-			if ((prot & VM_PROT_EXECUTE) == 0)
-				nbits |= ATTR_XN;
+			/*
+			 * When a dirty read/write mapping is write protected,
+			 * update the page's dirty field.
+			 */
+			if ((nbits & ATTR_AP(ATTR_AP_RO)) != 0 &&
+			    (l3 & ATTR_SW_MANAGED) != 0 &&
+			    pmap_page_dirty(l3))
+				vm_page_dirty(PHYS_TO_VM_PAGE(l3 & ~ATTR_MASK));
 
 			pmap_set(l3p, nbits);
 		}
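
A note on the first enhancement, the in-place protection of superpages.
The fast path fires only when the test "sva + L2_SIZE == va_next &&
eva >= va_next" holds, i.e., when the 2MB block starting at sva lies
entirely inside the requested range, so the single L2 block entry can be
updated without demoting it into 512 4KB mappings.  The stand-alone
sketch below is a paraphrase, not kernel code: the constants and the
va_next clamping mirror the loop in pmap_protect() in simplified form.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define L2_SIZE     (1ULL << 21)    /* bytes mapped by one L2 block */
    #define L2_OFFSET   (L2_SIZE - 1)

    /*
     * va_next as the pmap_protect() loop computes it: the next 2MB
     * boundary, clamped to the end of the requested range.  The test is
     * true exactly when the 2MB block beginning at sva is fully covered
     * by [sva, eva); an unaligned sva or a range ending mid-block fails
     * it, and the kernel then falls back to demotion.
     */
    static bool
    covers_whole_l2(uint64_t sva, uint64_t eva)
    {
            uint64_t va_next;

            va_next = (sva + L2_SIZE) & ~L2_OFFSET;
            if (va_next > eva)
                    va_next = eva;
            return (sva + L2_SIZE == va_next && eva >= va_next);
    }

    int
    main(void)
    {
            /* Range ends halfway through the block: must demote. */
            printf("%d\n", covers_whole_l2(0x200000, 0x300000));  /* 0 */
            /* Range covers the whole 2MB block: protect in place. */
            printf("%d\n", covers_whole_l2(0x200000, 0x400000));  /* 1 */
            return (0);
    }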
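
The second and third enhancements, precomputing nbits and skipping
already-restricted entries, reduce to two small idioms that can be tried
in isolation.  In this sketch protect_bits() is a hypothetical helper,
and the ATTR_*/VM_PROT_* values are made-up stand-ins for the real
definitions in arm64's pte.h and vm/vm.h:

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified stand-ins, not the kernel's actual bit positions. */
    #define ATTR_AP_RO      (1ULL << 7)     /* read-only access */
    #define ATTR_XN         (1ULL << 54)    /* execute-never */
    #define VM_PROT_WRITE   0x02
    #define VM_PROT_EXECUTE 0x04

    /*
     * Compute, once per pmap_protect() call, the attribute bits that the
     * requested protection forces to be set; the old code recomputed
     * this inside the per-entry loop.
     */
    static uint64_t
    protect_bits(int prot)
    {
            uint64_t nbits = 0;

            if ((prot & VM_PROT_WRITE) == 0)
                    nbits |= ATTR_AP_RO;
            if ((prot & VM_PROT_EXECUTE) == 0)
                    nbits |= ATTR_XN;
            return (nbits);
    }

    int
    main(void)
    {
            uint64_t pte = ATTR_XN;     /* an entry that is already XN */
            uint64_t nbits = protect_bits(VM_PROT_WRITE);   /* rw, no exec */

            /*
             * The skip test from the commit: if OR-ing the restriction
             * bits into the entry changes nothing, the entry already has
             * the requested protection, and both the store and the TLB
             * invalidation can be avoided.
             */
            if ((pte | nbits) == pte)
                    printf("already restricted, skip\n");
            else
                    printf("needs update\n");
            return (0);
    }

Note the asymmetry this exploits: pmap_protect() only ever removes
permissions, so the update is a pure bit-set and "no new bits" is a
complete test for "nothing to do".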
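
Finally, the reason pmap_protect_l2() loops over the superpage's
constituent pages: the hardware dirty state lives in the single L2
descriptor, so before write permission is removed (after which no
further dirtying can occur through this mapping) it must be fanned out
to all 512 vm_page structures.  A toy model, where struct page and the
constants are simplified stand-ins for the kernel's vm_page and
vm_page_dirty():

    #include <stdio.h>

    #define PAGE_SIZE   4096UL
    #define L2_SIZE     (512 * PAGE_SIZE)   /* one 2MB superpage */

    /* Hypothetical stand-in for the kernel's vm_page structure. */
    struct page {
            int dirty;
    };

    /*
     * Mirror of the loop in pmap_protect_l2(): any of the 512
     * constituent 4KB pages may have been written through the dirty
     * superpage mapping, so mark them all dirty.
     */
    static void
    superpage_dirty(struct page *m)
    {
            struct page *mt;

            for (mt = m; mt < &m[L2_SIZE / PAGE_SIZE]; mt++)
                    mt->dirty = 1;
    }

    int
    main(void)
    {
            static struct page pages[L2_SIZE / PAGE_SIZE];

            superpage_dirty(pages);
            printf("page[0].dirty=%d page[511].dirty=%d\n",
                pages[0].dirty, pages[511].dirty);
            return (0);
    }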