From: Mateusz Guzik <mjg@FreeBSD.org>
Date: Fri, 13 Oct 2017 21:54:34 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r324610 - in head/sys: sys vm

Author: mjg
Date: Fri Oct 13 21:54:34 2017
New Revision: 324610
URL: https://svnweb.freebsd.org/changeset/base/324610

Log:
  Reduce traffic on vm_cnt.v_free_count

  The variable is modified under the highly contended page free queue lock.
  It unnecessarily shares a cacheline with purely read-only fields and is
  re-read after the lock is dropped in the page allocation code, which
  lengthens the lock hold time.

  Pad the variable just like the others and use the value observed while
  the lock was held instead of re-reading it.

  Provides a modest 1%-ish speedup in concurrent page faults.

  Reviewed by:    kib, markj
  Differential Revision:  https://reviews.freebsd.org/D12665
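The log boils down to two ideas: keep the write-hot counter on its own cache
line so stores to it do not invalidate the read-mostly tuning fields next to
it, and have the locked update hand back the new value so callers test a local
copy after dropping the lock instead of touching the shared counter again.
A minimal standalone sketch of the same pattern follows; it uses plain C with
pthreads, a made-up struct, an assumed 64-byte cache line and a toy wakeup
threshold, not the kernel's mtx/vmmeter API.

#include <pthread.h>
#include <stdio.h>

#define CACHE_LINE      64      /* assumed cache line size */

struct meter {
        unsigned int free_target;       /* read-mostly tuning knobs */
        unsigned int free_min;
        /* write-hot counter padded onto its own cache line */
        unsigned int free_count __attribute__((aligned(CACHE_LINE)));
};

static struct meter meter = { .free_target = 1024, .free_min = 128 };
static pthread_mutex_t free_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int wakeup_thresh = 256;        /* toy threshold */

/* Adjust the counter; caller holds free_lock.  Returns the new value. */
static unsigned int
freecnt_adj(int adj)
{

        return (meter.free_count += adj);
}

static void
alloc_page(void)
{
        unsigned int free_count;

        pthread_mutex_lock(&free_lock);
        free_count = freecnt_adj(-1);   /* value observed under the lock */
        pthread_mutex_unlock(&free_lock);

        /* Test the captured copy; no second read of the shared counter. */
        if (free_count < wakeup_thresh)
                printf("would wake up the page daemon\n");
}

int
main(void)
{

        pthread_mutex_lock(&free_lock);
        freecnt_adj(200);               /* pretend 200 pages are free */
        pthread_mutex_unlock(&free_lock);
        alloc_page();                   /* 199 < 256, so it reports a wakeup */
        return (0);
}

The point of returning the value shows up in alloc_page(): the wakeup test
runs outside the critical section but still uses the count exactly as it was
at the moment of the decrement.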
Modified:
  head/sys/sys/vmmeter.h
  head/sys/vm/vm_page.c
  head/sys/vm/vm_phys.h

Modified: head/sys/sys/vmmeter.h
==============================================================================
--- head/sys/sys/vmmeter.h      Fri Oct 13 20:31:56 2017        (r324609)
+++ head/sys/sys/vmmeter.h      Fri Oct 13 21:54:34 2017        (r324610)
@@ -131,7 +131,6 @@ struct vmmeter {
        u_int v_free_reserved;  /* (c) pages reserved for deadlock */
        u_int v_free_target;    /* (c) pages desired free */
        u_int v_free_min;       /* (c) pages desired free */
-       u_int v_free_count;     /* (f) pages free */
        u_int v_inactive_target; /* (c) pages desired inactive */
        u_int v_pageout_free_min;   /* (c) min pages reserved for kernel */
        u_int v_interrupt_free_min; /* (c) reserved pages for int code */
@@ -141,6 +140,7 @@ struct vmmeter {
        u_int v_inactive_count VMMETER_ALIGNED; /* (a) pages inactive */
        u_int v_laundry_count VMMETER_ALIGNED; /* (a) pages eligible for
                                                  laundering */
+       u_int v_free_count VMMETER_ALIGNED;     /* (a) pages free */
 };
 #endif /* _KERNEL || _WANT_VMMETER */
@@ -208,10 +208,10 @@ vm_paging_target(void)
  * Returns TRUE if the pagedaemon needs to be woken up.
  */
 static inline int
-vm_paging_needed(void)
+vm_paging_needed(u_int free_count)
 {

-       return (vm_cnt.v_free_count < vm_pageout_wakeup_thresh);
+       return (free_count < vm_pageout_wakeup_thresh);
 }

 /*

Modified: head/sys/vm/vm_page.c
==============================================================================
--- head/sys/vm/vm_page.c       Fri Oct 13 20:31:56 2017        (r324609)
+++ head/sys/vm/vm_page.c       Fri Oct 13 21:54:34 2017        (r324610)
@@ -1588,6 +1588,7 @@ vm_page_alloc_after(vm_object_t object, vm_pindex_t pi
 {
        vm_page_t m;
        int flags, req_class;
+       u_int free_count;

        KASSERT((object != NULL) == ((req & VM_ALLOC_NOOBJ) == 0) &&
            (object != NULL || (req & VM_ALLOC_SBUSY) == 0) &&
@@ -1655,7 +1656,7 @@ vm_page_alloc_after(vm_object_t object, vm_pindex_t pi
         * At this point we had better have found a good page.
         */
        KASSERT(m != NULL, ("missing page"));
-       vm_phys_freecnt_adj(m, -1);
+       free_count = vm_phys_freecnt_adj(m, -1);
        mtx_unlock(&vm_page_queue_free_mtx);

        vm_page_alloc_check(m);
@@ -1713,7 +1714,7 @@ vm_page_alloc_after(vm_object_t object, vm_pindex_t pi
         * Don't wakeup too often - wakeup the pageout daemon when
         * we would be nearly out of memory.
         */
-       if (vm_paging_needed())
+       if (vm_paging_needed(free_count))
                pagedaemon_wakeup();

        return (m);
@@ -1899,7 +1900,7 @@ retry:
                        pmap_page_set_memattr(m, memattr);
                pindex++;
        }
-       if (vm_paging_needed())
+       if (vm_paging_needed(vm_cnt.v_free_count))
                pagedaemon_wakeup();
        return (m_ret);
 }
@@ -1948,7 +1949,7 @@ vm_page_t
 vm_page_alloc_freelist(int flind, int req)
 {
        vm_page_t m;
-       u_int flags;
+       u_int flags, free_count;
        int req_class;

        req_class = req & VM_ALLOC_CLASS_MASK;
@@ -1980,7 +1981,7 @@ vm_page_alloc_freelist(int flind, int req)
                mtx_unlock(&vm_page_queue_free_mtx);
                return (NULL);
        }
-       vm_phys_freecnt_adj(m, -1);
+       free_count = vm_phys_freecnt_adj(m, -1);
        mtx_unlock(&vm_page_queue_free_mtx);

        vm_page_alloc_check(m);
@@ -2002,7 +2003,7 @@ vm_page_alloc_freelist(int flind, int req)
        }
        /* Unmanaged pages don't use "act_count". */
        m->oflags = VPO_UNMANAGED;
-       if (vm_paging_needed())
+       if (vm_paging_needed(free_count))
                pagedaemon_wakeup();
        return (m);
 }

Modified: head/sys/vm/vm_phys.h
==============================================================================
--- head/sys/vm/vm_phys.h       Fri Oct 13 20:31:56 2017        (r324609)
+++ head/sys/vm/vm_phys.h       Fri Oct 13 21:54:34 2017        (r324610)
@@ -112,13 +112,13 @@ vm_phys_domain(vm_page_t m)
 #endif
 }

-static inline void
+static inline u_int
 vm_phys_freecnt_adj(vm_page_t m, int adj)
 {

        mtx_assert(&vm_page_queue_free_mtx, MA_OWNED);
-       vm_cnt.v_free_count += adj;
        vm_phys_domain(m)->vmd_free_count += adj;
+       return (vm_cnt.v_free_count += adj);
 }

 #endif /* _KERNEL */
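One detail of the vm_phys.h hunk worth spelling out: the new return statement
works because a compound assignment in C is an expression whose value is the
result stored, so the helper updates the counter and reports the
post-adjustment count in a single step. A tiny illustration of that idiom
(toy code, not kernel source) follows.

#include <stdio.h>

static unsigned int counter = 5;

/*
 * Compound assignment is an expression: it yields the stored result, so one
 * statement both updates the counter and returns the new value, the same
 * shape as the reworked vm_phys_freecnt_adj().
 */
static unsigned int
counter_adj(int adj)
{

        return (counter += adj);
}

int
main(void)
{

        printf("%u\n", counter_adj(-1));        /* prints 4: the post-adjustment value */
        printf("%u\n", counter_adj(2));         /* prints 6 */
        return (0);
}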