From owner-svn-src-stable@freebsd.org  Thu Jul 19 22:53:24 2018
Return-Path: <owner-svn-src-stable@freebsd.org>
Delivered-To: svn-src-stable@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id DF882104F189;
 Thu, 19 Jul 2018 22:53:23 +0000 (UTC) (envelope-from markj@FreeBSD.org)
Received: from mxrelay.nyi.freebsd.org (mxrelay.nyi.freebsd.org [IPv6:2610:1c1:1:606c::19:3])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client CN "mxrelay.nyi.freebsd.org", Issuer "Let's Encrypt Authority X3" (verified OK))
 by mx1.freebsd.org (Postfix) with ESMTPS id 9022C809D9;
 Thu, 19 Jul 2018 22:53:23 +0000 (UTC) (envelope-from markj@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mxrelay.nyi.freebsd.org (Postfix) with ESMTPS id 71CF82FA0B;
 Thu, 19 Jul 2018 22:53:23 +0000 (UTC) (envelope-from markj@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id w6JMrN1Q023971;
 Thu, 19 Jul 2018 22:53:23 GMT (envelope-from markj@FreeBSD.org)
Received: (from markj@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id w6JMrNZn023970;
 Thu, 19 Jul 2018 22:53:23 GMT (envelope-from markj@FreeBSD.org)
Message-Id: <201807192253.w6JMrNZn023970@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: markj set sender to markj@FreeBSD.org using -f
From: Mark Johnston <markj@FreeBSD.org>
Date: Thu, 19 Jul 2018 22:53:23 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
 svn-src-stable@freebsd.org, svn-src-stable-11@freebsd.org
Subject: svn commit: r336523 - stable/11/sys/amd64/amd64
X-SVN-Group: stable-11
X-SVN-Commit-Author: markj
X-SVN-Commit-Paths: stable/11/sys/amd64/amd64
X-SVN-Commit-Revision: 336523
X-SVN-Commit-Repository: base
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-stable@freebsd.org
X-Mailman-Version: 2.1.27
Precedence: list
List-Id: SVN commit messages for all the -stable branches of the src tree
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Thu, 19 Jul 2018 22:53:24 -0000

Author: markj
Date: Thu Jul 19 22:53:23 2018
New Revision: 336523
URL: https://svnweb.freebsd.org/changeset/base/336523

Log:
  MFC r335784, r335971:
  Invalidate the mapping before updating its physical address.

Modified:
  stable/11/sys/amd64/amd64/pmap.c

Directory Properties:
  stable/11/   (props changed)

Modified: stable/11/sys/amd64/amd64/pmap.c
==============================================================================
--- stable/11/sys/amd64/amd64/pmap.c	Thu Jul 19 22:45:49 2018	(r336522)
+++ stable/11/sys/amd64/amd64/pmap.c	Thu Jul 19 22:53:23 2018	(r336523)
@@ -4772,6 +4772,7 @@ retry:
                 panic("pmap_enter: invalid page directory va=%#lx", va);
 
         origpte = *pte;
+        pv = NULL;
 
         /*
          * Is the specified virtual address already mapped?
@@ -4813,6 +4814,45 @@ retry:
                                 goto unchanged;
                         goto validate;
                 }
+
+                /*
+                 * The physical page has changed.  Temporarily invalidate
+                 * the mapping.  This ensures that all threads sharing the
+                 * pmap keep a consistent view of the mapping, which is
+                 * necessary for the correct handling of COW faults.  It
+                 * also permits reuse of the old mapping's PV entry,
+                 * avoiding an allocation.
+                 *
+                 * For consistency, handle unmanaged mappings the same way.
+                 */
+                origpte = pte_load_clear(pte);
+                KASSERT((origpte & PG_FRAME) == opa,
+                    ("pmap_enter: unexpected pa update for %#lx", va));
+                if ((origpte & PG_MANAGED) != 0) {
+                        om = PHYS_TO_VM_PAGE(opa);
+
+                        /*
+                         * The pmap lock is sufficient to synchronize with
+                         * concurrent calls to pmap_page_test_mappings() and
+                         * pmap_ts_referenced().
+                         */
+                        if ((origpte & (PG_M | PG_RW)) == (PG_M | PG_RW))
+                                vm_page_dirty(om);
+                        if ((origpte & PG_A) != 0)
+                                vm_page_aflag_set(om, PGA_REFERENCED);
+                        CHANGE_PV_LIST_LOCK_TO_PHYS(&lock, opa);
+                        pv = pmap_pvh_remove(&om->md, pmap, va);
+                        if ((newpte & PG_MANAGED) == 0)
+                                free_pv_entry(pmap, pv);
+                        if ((om->aflags & PGA_WRITEABLE) != 0 &&
+                            TAILQ_EMPTY(&om->md.pv_list) &&
+                            ((om->flags & PG_FICTITIOUS) != 0 ||
+                            TAILQ_EMPTY(&pa_to_pvh(opa)->pv_list)))
+                                vm_page_aflag_clear(om, PGA_WRITEABLE);
+                }
+                if ((origpte & PG_A) != 0)
+                        pmap_invalidate_page(pmap, va);
+                origpte = 0;
         } else {
                 /*
                  * Increment the counters.
@@ -4826,8 +4866,10 @@ retry:
          * Enter on the PV list if part of our managed memory.
          */
         if ((newpte & PG_MANAGED) != 0) {
-                pv = get_pv_entry(pmap, &lock);
-                pv->pv_va = va;
+                if (pv == NULL) {
+                        pv = get_pv_entry(pmap, &lock);
+                        pv->pv_va = va;
+                }
                 CHANGE_PV_LIST_LOCK_TO_PHYS(&lock, pa);
                 TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next);
                 m->md.pv_gen++;
@@ -4841,25 +4883,10 @@ retry:
         if ((origpte & PG_V) != 0) {
 validate:
                 origpte = pte_load_store(pte, newpte);
-                opa = origpte & PG_FRAME;
-                if (opa != pa) {
-                        if ((origpte & PG_MANAGED) != 0) {
-                                om = PHYS_TO_VM_PAGE(opa);
-                                if ((origpte & (PG_M | PG_RW)) == (PG_M |
-                                    PG_RW))
-                                        vm_page_dirty(om);
-                                if ((origpte & PG_A) != 0)
-                                        vm_page_aflag_set(om, PGA_REFERENCED);
-                                CHANGE_PV_LIST_LOCK_TO_PHYS(&lock, opa);
-                                pmap_pvh_free(&om->md, pmap, va);
-                                if ((om->aflags & PGA_WRITEABLE) != 0 &&
-                                    TAILQ_EMPTY(&om->md.pv_list) &&
-                                    ((om->flags & PG_FICTITIOUS) != 0 ||
-                                    TAILQ_EMPTY(&pa_to_pvh(opa)->pv_list)))
-                                        vm_page_aflag_clear(om, PGA_WRITEABLE);
-                        }
-                } else if ((newpte & PG_M) == 0 && (origpte & (PG_M |
-                    PG_RW)) == (PG_M | PG_RW)) {
+                KASSERT((origpte & PG_FRAME) == pa,
+                    ("pmap_enter: unexpected pa update for %#lx", va));
+                if ((newpte & PG_M) == 0 && (origpte & (PG_M | PG_RW)) ==
+                    (PG_M | PG_RW)) {
                         if ((origpte & PG_MANAGED) != 0)
                                 vm_page_dirty(m);
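
For readers who do not work in the pmap code, the change above boils down to a
small ordering pattern: when pmap_enter() finds that the physical page behind
an existing mapping has changed, it now clears the PTE first
(pte_load_clear()), settles the old page's dirty/referenced accounting,
recycles the old PV entry, and only then installs the new PTE.  The fragment
below is a minimal, hypothetical sketch of that pattern in plain C11; it is not
FreeBSD code, and every toy_* name is invented purely for illustration.

/*
 * Hypothetical sketch (not FreeBSD code): when the physical frame behind a
 * valid mapping changes, atomically clear the old entry first so concurrent
 * readers never see a mix of old frame and new attributes, fold the old
 * entry's accessed/modified state into the bookkeeping for the old frame,
 * and reuse the existing tracking record instead of allocating a new one.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TOY_PG_V        0x001u          /* valid */
#define TOY_PG_A        0x002u          /* accessed */
#define TOY_PG_M        0x004u          /* modified */
#define TOY_PG_FRAME    0xfffff000u     /* physical frame bits */

struct toy_pv {                         /* stand-in for a PV entry */
        uintptr_t va;
        bool in_use;
};

struct toy_mapping {
        _Atomic uint32_t pte;           /* frame | flag bits */
        struct toy_pv pv;               /* tracking record for the mapping */
};

/* Install or replace the mapping of 'va' so that it points at frame 'pa'. */
static void
toy_enter(struct toy_mapping *m, uintptr_t va, uint32_t pa)
{
        uint32_t newpte, origpte;

        newpte = (pa & TOY_PG_FRAME) | TOY_PG_V;
        origpte = atomic_load(&m->pte);

        if ((origpte & TOY_PG_V) != 0 &&
            (origpte & TOY_PG_FRAME) != (pa & TOY_PG_FRAME)) {
                /*
                 * The physical frame changed: invalidate first (atomic
                 * exchange with zero), then account for the old frame's
                 * accessed/modified state, and keep m->pv as-is so it is
                 * reused for the new frame instead of being reallocated.
                 */
                origpte = atomic_exchange(&m->pte, 0);
                if ((origpte & TOY_PG_M) != 0)
                        printf("frame %#x was dirty; write it back\n",
                            (unsigned)(origpte & TOY_PG_FRAME));
                if ((origpte & TOY_PG_A) != 0)
                        printf("frame %#x was referenced\n",
                            (unsigned)(origpte & TOY_PG_FRAME));
        } else if ((origpte & TOY_PG_V) == 0) {
                /* Brand-new mapping: set up its tracking record. */
                m->pv.va = va;
                m->pv.in_use = true;
        }
        atomic_store(&m->pte, newpte);
}

int
main(void)
{
        struct toy_mapping m = { .pte = 0 };

        toy_enter(&m, 0x1000, 0x2000);                  /* create */
        atomic_fetch_or(&m.pte, TOY_PG_A | TOY_PG_M);   /* simulate CPU use */
        toy_enter(&m, 0x1000, 0x5000);                  /* replace the frame */
        printf("final pte: %#x\n", (unsigned)atomic_load(&m.pte));
        return (0);
}

The point of the ordering is the same as in the patch: any observer of the
mapping sees either the old entry, an invalid entry, or the new entry, never a
half-updated combination, and the replacement path gets the old entry's
bookkeeping for free.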