From: Jeff Roberson <jeff@FreeBSD.org>
Date: Thu, 1 Feb 2018 23:47:51 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r328759 - user/jeff/numa/sys/vm
Message-Id: <201802012347.w11NlpZY056331@repo.freebsd.org>

Author: jeff
Date: Thu Feb  1 23:47:51 2018
New Revision: 328759
URL: https://svnweb.freebsd.org/changeset/base/328759

Log:
  Implement another variant of per-cpu free page caching derived from
  markj's patch.

Modified:
  user/jeff/numa/sys/vm/vm_page.c

Modified: user/jeff/numa/sys/vm/vm_page.c
==============================================================================
--- user/jeff/numa/sys/vm/vm_page.c     Thu Feb  1 22:01:53 2018        (r328758)
+++ user/jeff/numa/sys/vm/vm_page.c     Thu Feb  1 23:47:51 2018        (r328759)
@@ -182,6 +182,9 @@ static int vm_page_reclaim_run(int req_class, int doma
 static void vm_domain_free_wakeup(struct vm_domain *);
 static int vm_domain_alloc_fail(struct vm_domain *vmd, vm_object_t object,
     int req);
+static int vm_page_import(void *arg, void **store, int cnt, int domain,
+    int flags);
+static void vm_page_release(void *arg, void **store, int cnt);
 
 SYSINIT(vm_page, SI_SUB_VM, SI_ORDER_SECOND, vm_page_init, NULL);
 
@@ -195,6 +198,27 @@ vm_page_init(void *dummy)
             VM_ALLOC_NORMAL | VM_ALLOC_WIRED);
 }
 
+/*
+ * The cache page zone is initialized later since we need to be able to allocate
+ * pages before UMA is fully initialized.
+ */
+static void
+vm_page_init_cache_zones(void *dummy __unused)
+{
+        struct vm_domain *vmd;
+        int i;
+
+        for (i = 0; i < vm_ndomains; i++) {
+                vmd = VM_DOMAIN(i);
+                vmd->vmd_pgcache = uma_zcache_create("vm pgcache",
+                    sizeof(struct vm_page), NULL, NULL, NULL, NULL,
+                    vm_page_import, vm_page_release, vmd,
+                    /* UMA_ZONE_NOBUCKETCACHE |*/
+                    UMA_ZONE_MAXBUCKET | UMA_ZONE_VM);
+        }
+}
+SYSINIT(vm_page2, SI_SUB_VM_CONF, SI_ORDER_ANY, vm_page_init_cache_zones, NULL);
+
 /* Make sure that u_long is at least 64 bits when PAGE_SIZE is 32K. */
 #if PAGE_SIZE == 32768
 #ifdef CTASSERT
@@ -1709,6 +1733,12 @@ again:
         }
 #endif
         vmd = VM_DOMAIN(domain);
+        if (object != NULL && !vm_object_reserv(object) &&
+            vmd->vmd_pgcache != NULL) {
+                m = uma_zalloc(vmd->vmd_pgcache, M_NOWAIT);
+                if (m != NULL)
+                        goto found;
+        }
         vm_domain_free_lock(vmd);
         if (vm_domain_available(vmd, req, 1)) {
                 /*
@@ -1757,9 +1787,7 @@ again:
          */
         if (vm_paging_needed(vmd, free_count))
                 pagedaemon_wakeup(vmd->vmd_domain);
-#if VM_NRESERVLEVEL > 0
 found:
-#endif
         vm_page_alloc_check(m);
 
         /*
@@ -2131,6 +2159,51 @@ again:
         if (vm_paging_needed(vmd, free_count))
                 pagedaemon_wakeup(domain);
         return (m);
+}
+
+static int
+vm_page_import(void *arg, void **store, int cnt, int domain, int flags)
+{
+        struct vm_domain *vmd;
+        vm_page_t m;
+        int i;
+
+        vmd = arg;
+        domain = vmd->vmd_domain;
+        vm_domain_free_lock(vmd);
+        for (i = 0; i < cnt; i++) {
+                m = vm_phys_alloc_pages(domain, VM_FREELIST_DEFAULT, 0);
+                if (m == NULL)
+                        break;
+                store[i] = m;
+        }
+        if (i != 0)
+                vm_domain_freecnt_adj(vmd, -i);
+        vm_domain_free_unlock(vmd);
+
+        return (i);
+}
+
+static void
+vm_page_release(void *arg, void **store, int cnt)
+{
+        struct vm_domain *vmd;
+        vm_page_t m;
+        int i;
+
+        vmd = arg;
+        vm_domain_free_lock(vmd);
+        for (i = 0; i < cnt; i++) {
+                m = (vm_page_t)store[i];
+#if VM_NRESERVLEVEL > 0
+                KASSERT(vm_reserv_free_page(m) == false,
+                    ("vm_page_release: Cached page belonged to reservation."));
+#endif
+                vm_phys_free_pages(m, 0);
+        }
+        vm_domain_freecnt_adj(vmd, i);
+        vm_domain_free_wakeup(vmd);
+        vm_domain_free_unlock(vmd);
 }
 
 #define VPSC_ANY        0       /* No restrictions. */
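
For context, the change above fronts the per-domain physical allocator with a
UMA cache zone: uma_zalloc() on vmd_pgcache is satisfied from a per-CPU bucket,
and buckets are refilled and drained in batches through vm_page_import() and
vm_page_release(), so the vm_domain free lock is taken once per batch rather
than once per page. The userland C sketch below illustrates that import/release
batching pattern only; every name in it (cache_import, cache_release, BATCH,
and so on) is a hypothetical stand-in, not the kernel or UMA API.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Illustrative stand-in for the batching done by the cache zone in the diff
 * above: the front-end cache refills and drains in whole buckets, so the
 * contended back-end lock (the vm_domain free lock in the kernel) is taken
 * once per BATCH objects instead of once per object.  All names here are
 * hypothetical; the kernel counterparts are vm_page_import()/vm_page_release().
 */
#define BATCH   8

struct backing_store {
        pthread_mutex_t lock;   /* plays the role of the vm_domain free lock */
        int             nfree;  /* plays the role of the domain free count */
};

struct cache {
        struct backing_store *bs;
        void    *bucket[BATCH];
        int     nitems;
};

static struct backing_store store0 = {
        .lock = PTHREAD_MUTEX_INITIALIZER,
        .nfree = 32,
};

/* Fill 'store' with up to 'cnt' objects under a single lock acquisition. */
static int
cache_import(struct backing_store *bs, void **store, int cnt)
{
        int i;

        pthread_mutex_lock(&bs->lock);
        for (i = 0; i < cnt && bs->nfree > 0; i++) {
                store[i] = malloc(64);  /* stand-in for vm_phys_alloc_pages() */
                bs->nfree--;
        }
        pthread_mutex_unlock(&bs->lock);
        return (i);
}

/* Return a whole batch under a single lock acquisition. */
static void
cache_release(struct backing_store *bs, void **store, int cnt)
{
        int i;

        pthread_mutex_lock(&bs->lock);
        for (i = 0; i < cnt; i++) {
                free(store[i]);         /* stand-in for vm_phys_free_pages() */
                bs->nfree++;
        }
        pthread_mutex_unlock(&bs->lock);
}

static void *
cache_alloc(struct cache *c)
{
        if (c->nitems == 0)
                c->nitems = cache_import(c->bs, c->bucket, BATCH);
        if (c->nitems == 0)
                return (NULL);          /* back end exhausted; M_NOWAIT-style failure */
        return (c->bucket[--c->nitems]);
}

static void
cache_free(struct cache *c, void *obj)
{
        if (c->nitems == BATCH) {       /* bucket full: push it back in one go */
                cache_release(c->bs, c->bucket, c->nitems);
                c->nitems = 0;
        }
        c->bucket[c->nitems++] = obj;
}

int
main(void)
{
        struct cache c = { .bs = &store0 };
        void *p;

        p = cache_alloc(&c);            /* first call imports a full bucket */
        printf("got %p, backing free count is now %d\n", p, store0.nfree);
        cache_free(&c, p);
        cache_release(c.bs, c.bucket, c.nitems);        /* drain at exit */
        return (0);
}

Built with "cc -pthread", the first cache_alloc() call pulls a full bucket from
the backing store, and later alloc/free pairs never touch the lock until the
bucket empties or overflows, which is the effect the commit is after for page
allocation.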