From owner-svn-src-all@freebsd.org  Sun Jun  2 01:00:20 2019
Message-Id: <201906020100.x5210Iwr053778@repo.freebsd.org>
From: Mark Johnston <markj@FreeBSD.org>
Date: Sun, 2 Jun 2019 01:00:17 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r348502 - head/sys/vm

Author: markj
Date: Sun Jun  2 01:00:17 2019
New Revision: 348502
URL: https://svnweb.freebsd.org/changeset/base/348502

Log:
  Add a vm_page_wired() predicate.

  Use it instead of accessing the wire_count field directly.  No
  functional change intended.

  Reviewed by:	alc, kib
  MFC after:	1 week
  Sponsored by:	Netflix
  Differential Revision:	https://reviews.freebsd.org/D20485

Modified:
  head/sys/vm/memguard.c
  head/sys/vm/swap_pager.c
  head/sys/vm/vm_fault.c
  head/sys/vm/vm_object.c
  head/sys/vm/vm_page.c
  head/sys/vm/vm_page.h
  head/sys/vm/vm_pageout.c

Modified: head/sys/vm/memguard.c
==============================================================================
--- head/sys/vm/memguard.c	Sun Jun  2 00:08:24 2019	(r348501)
+++ head/sys/vm/memguard.c	Sun Jun  2 01:00:17 2019	(r348502)
@@ -262,7 +262,7 @@ v2sizep(vm_offset_t va)
 	if (pa == 0)
 		panic("MemGuard detected double-free of %p", (void *)va);
 	p = PHYS_TO_VM_PAGE(pa);
-	KASSERT(p->wire_count != 0 && p->queue == PQ_NONE,
+	KASSERT(vm_page_wired(p) && p->queue == PQ_NONE,
 	    ("MEMGUARD: Expected wired page %p in vtomgfifo!", p));
 	return (&p->plinks.memguard.p);
 }
@@ -277,7 +277,7 @@ v2sizev(vm_offset_t va)
 	if (pa == 0)
 		panic("MemGuard detected double-free of %p", (void *)va);
 	p = PHYS_TO_VM_PAGE(pa);
-	KASSERT(p->wire_count != 0 && p->queue == PQ_NONE,
+	KASSERT(vm_page_wired(p) && p->queue == PQ_NONE,
 	    ("MEMGUARD: Expected wired page %p in vtomgfifo!", p));
 	return (&p->plinks.memguard.v);
 }

Modified: head/sys/vm/swap_pager.c
==============================================================================
--- head/sys/vm/swap_pager.c	Sun Jun  2 00:08:24 2019	(r348501)
+++ head/sys/vm/swap_pager.c	Sun Jun  2 01:00:17 2019	(r348502)
@@ -1679,7 +1679,7 @@ swp_pager_force_pagein(vm_object_t object, vm_pindex_t
 	vm_page_dirty(m);
 #ifdef INVARIANTS
 	vm_page_lock(m);
-	if (m->wire_count == 0 && m->queue == PQ_NONE)
+	if (!vm_page_wired(m) && m->queue == PQ_NONE)
 		panic("page %p is neither wired nor queued", m);
 	vm_page_unlock(m);
 #endif

Modified: head/sys/vm/vm_fault.c
==============================================================================
--- head/sys/vm/vm_fault.c	Sun Jun  2 00:08:24 2019	(r348501)
+++ head/sys/vm/vm_fault.c	Sun Jun  2 01:00:17 2019	(r348502)
@@ -1004,7 +1004,7 @@ readrest:
 			 */
 			if (rv == VM_PAGER_ERROR || rv == VM_PAGER_BAD) {
 				vm_page_lock(fs.m);
-				if (fs.m->wire_count == 0)
+				if (!vm_page_wired(fs.m))
 					vm_page_free(fs.m);
 				else
 					vm_page_xunbusy_maybelocked(fs.m);
@@ -1027,7 +1027,7 @@ readrest:
 		 */
 		if (fs.object != fs.first_object) {
 			vm_page_lock(fs.m);
-			if (fs.m->wire_count == 0)
+			if (!vm_page_wired(fs.m))
 				vm_page_free(fs.m);
 			else
 				vm_page_xunbusy_maybelocked(fs.m);
@@ -1805,7 +1805,7 @@ again:
 			vm_page_wire(dst_m);
 			vm_page_unlock(dst_m);
 		} else {
-			KASSERT(dst_m->wire_count > 0,
+			KASSERT(vm_page_wired(dst_m),
 			    ("dst_m %p is not wired", dst_m));
 		}
 	} else {

Modified: head/sys/vm/vm_object.c
==============================================================================
--- head/sys/vm/vm_object.c	Sun Jun  2 00:08:24 2019	(r348501)
+++ head/sys/vm/vm_object.c	Sun Jun  2 01:00:17 2019	(r348502)
@@ -720,7 +720,7 @@ vm_object_terminate_pages(vm_object_t object)
 		 */
 		vm_page_change_lock(p, &mtx);
 		p->object = NULL;
-		if (p->wire_count != 0)
+		if (vm_page_wired(p))
 			continue;
 		VM_CNT_INC(v_pfree);
 		vm_page_free(p);
@@ -1595,7 +1595,7 @@ vm_object_collapse_scan(vm_object_t object, int op)
 			vm_page_lock(p);
 			KASSERT(!pmap_page_is_mapped(p),
 			    ("freeing mapped page %p", p));
-			if (p->wire_count == 0)
+			if (!vm_page_wired(p))
 				vm_page_free(p);
 			else
 				vm_page_remove(p);
@@ -1639,7 +1639,7 @@ vm_object_collapse_scan(vm_object_t object, int op)
 			vm_page_lock(p);
 			KASSERT(!pmap_page_is_mapped(p),
 			    ("freeing mapped page %p", p));
-			if (p->wire_count == 0)
+			if (!vm_page_wired(p))
 				vm_page_free(p);
 			else
 				vm_page_remove(p);
@@ -1944,7 +1944,7 @@ again:
 			VM_OBJECT_WLOCK(object);
 			goto again;
 		}
-		if (p->wire_count != 0) {
+		if (vm_page_wired(p)) {
 			if ((options & OBJPR_NOTMAPPED) == 0 &&
 			    object->ref_count != 0)
 				pmap_remove_all(p);

Modified: head/sys/vm/vm_page.c
==============================================================================
--- head/sys/vm/vm_page.c	Sun Jun  2 00:08:24 2019	(r348501)
+++ head/sys/vm/vm_page.c	Sun Jun  2 01:00:17 2019	(r348502)
@@ -2608,7 +2608,7 @@ retry:
 				error = ENOMEM;
 				goto unlock;
 			}
-			KASSERT(m_new->wire_count == 0,
+			KASSERT(!vm_page_wired(m_new),
 			    ("page %p is wired", m_new));

 			/*
@@ -3434,7 +3434,7 @@ vm_page_activate(vm_page_t m)

 	vm_page_assert_locked(m);

-	if (m->wire_count > 0 || (m->oflags & VPO_UNMANAGED) != 0)
+	if (vm_page_wired(m) || (m->oflags & VPO_UNMANAGED) != 0)
 		return;
 	if (vm_page_queue(m) == PQ_ACTIVE) {
 		if (m->act_count < ACT_INIT)
@@ -3509,7 +3509,7 @@ vm_page_free_prep(vm_page_t m)
 	m->valid = 0;
 	vm_page_undirty(m);

-	if (m->wire_count != 0)
+	if (vm_page_wired(m) != 0)
 		panic("vm_page_free_prep: freeing wired page %p", m);
 	if (m->hold_count != 0) {
 		m->flags &= ~PG_ZERO;
@@ -3610,7 +3610,7 @@ vm_page_wire(vm_page_t m)
 		    m));
 		return;
 	}
-	if (m->wire_count == 0) {
+	if (!vm_page_wired(m)) {
 		KASSERT((m->oflags & VPO_UNMANAGED) == 0 ||
 		    m->queue == PQ_NONE,
 		    ("vm_page_wire: unmanaged page %p is queued", m));
@@ -3688,7 +3688,7 @@ vm_page_unwire_noq(vm_page_t m)
 		    ("vm_page_unwire: fictitious page %p's wire count isn't one", m));
 		return (false);
 	}
-	if (m->wire_count == 0)
+	if (!vm_page_wired(m))
 		panic("vm_page_unwire: page %p's wire count is zero", m);
 	m->wire_count--;
 	if (m->wire_count == 0) {
@@ -3710,7 +3710,7 @@ vm_page_deactivate(vm_page_t m)

 	vm_page_assert_locked(m);

-	if (m->wire_count > 0 || (m->oflags & VPO_UNMANAGED) != 0)
+	if (vm_page_wired(m) || (m->oflags & VPO_UNMANAGED) != 0)
 		return;

 	if (!vm_page_inactive(m)) {
@@ -3734,7 +3734,7 @@ vm_page_deactivate_noreuse(vm_page_t m)

 	vm_page_assert_locked(m);

-	if (m->wire_count > 0 || (m->oflags & VPO_UNMANAGED) != 0)
+	if (vm_page_wired(m) || (m->oflags & VPO_UNMANAGED) != 0)
 		return;

 	if (!vm_page_inactive(m)) {
@@ -3756,7 +3756,7 @@ vm_page_launder(vm_page_t m)
 {

 	vm_page_assert_locked(m);
-	if (m->wire_count > 0 || (m->oflags & VPO_UNMANAGED) != 0)
+	if (vm_page_wired(m) || (m->oflags & VPO_UNMANAGED) != 0)
 		return;

 	if (vm_page_in_laundry(m))
@@ -3777,7 +3777,7 @@ vm_page_unswappable(vm_page_t m)
 {

 	vm_page_assert_locked(m);
-	KASSERT(m->wire_count == 0 && (m->oflags & VPO_UNMANAGED) == 0,
+	KASSERT(!vm_page_wired(m) && (m->oflags & VPO_UNMANAGED) == 0,
 	    ("page %p already unswappable", m));

 	vm_page_dequeue(m);

Modified: head/sys/vm/vm_page.h
==============================================================================
--- head/sys/vm/vm_page.h	Sun Jun  2 00:08:24 2019	(r348501)
+++ head/sys/vm/vm_page.h	Sun Jun  2 01:00:17 2019	(r348502)
@@ -822,5 +822,12 @@ vm_page_held(vm_page_t m)
 	return (m->hold_count > 0 || m->wire_count > 0);
 }

+static inline bool
+vm_page_wired(vm_page_t m)
+{
+
+	return (m->wire_count > 0);
+}
+
 #endif /* _KERNEL */
 #endif /* !_VM_PAGE_ */

Modified: head/sys/vm/vm_pageout.c
==============================================================================
--- head/sys/vm/vm_pageout.c	Sun Jun  2 00:08:24 2019	(r348501)
+++ head/sys/vm/vm_pageout.c	Sun Jun  2 01:00:17 2019	(r348502)
@@ -754,7 +754,7 @@ recheck:
 		 */
 		if (m->hold_count != 0)
 			continue;
-		if (m->wire_count != 0) {
+		if (vm_page_wired(m)) {
 			vm_page_dequeue_deferred(m);
 			continue;
 		}
@@ -1203,7 +1203,7 @@ act_scan:
 		/*
 		 * Wired pages are dequeued lazily.
 		 */
-		if (m->wire_count != 0) {
+		if (vm_page_wired(m)) {
 			vm_page_dequeue_deferred(m);
 			continue;
 		}
@@ -1430,7 +1430,7 @@ recheck:
 			addl_page_shortage++;
 			goto reinsert;
 		}
-		if (m->wire_count != 0) {
+		if (vm_page_wired(m)) {
 			vm_page_dequeue_deferred(m);
 			continue;
 		}
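
For out-of-tree code that still pokes at wire_count directly, the conversion is mechanical: any comparison of the field against zero becomes a call to the new predicate. Below is a minimal sketch of the idiom, mirroring the lock-then-check pattern in the vm_fault.c and vm_object.c hunks above. The function example_try_free() is invented purely for illustration and assumes the caller holds the page's object write lock, as vm_object_collapse_scan() does; vm_page_lock(), vm_page_wired(), vm_page_free(), vm_page_remove(), and vm_page_unlock() are the real KPIs as of r348502.

	/*
	 * Illustrative sketch only, not part of the commit: free a page
	 * that carries no wirings, otherwise just detach it from its
	 * object.  The caller is assumed to hold the object write lock.
	 */
	static void
	example_try_free(vm_page_t p)
	{

		vm_page_lock(p);
		if (!vm_page_wired(p))		/* was: p->wire_count == 0 */
			vm_page_free(p);
		else
			vm_page_remove(p);
		vm_page_unlock(p);
	}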