Date: Fri, 22 Nov 2019 16:30:48 +0000 (UTC)
From: Mark Johnston <markj@FreeBSD.org>
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r355002 - in head: share/man/man9 sys/vm
Message-ID: <201911221630.xAMGUmXc080171@repo.freebsd.org>
Author: markj
Date: Fri Nov 22 16:30:47 2019
New Revision: 355002

URL: https://svnweb.freebsd.org/changeset/base/355002

Log:
  Revise the page cache size policy.

  In r353734 the use of the page caches was limited to systems with a
  relatively large amount of RAM per CPU.  This was to mitigate some
  reported issues in which the system was unable to keep up with memory
  pressure in cases where it had been able to do so prior to the addition
  of the direct free pool cache.  This change re-enables those caches.

  The change modifies uma_zone_set_maxcache(), which was introduced
  specifically for the page cache zones.  Rather than using it to limit
  only the full bucket cache, have it also set uz_count_max to provide an
  upper bound on the per-CPU cache size that is consistent with the number
  of items requested.  Remove its return value since it has no use.

  Enable the page cache zones unconditionally, and limit them to 0.1% of
  the domain's pages.  The limit can be overridden by the
  vm.pgcache_zone_max tunable as before.

  Change the item size parameter passed to uma_zcache_create() to the
  correct size, and stop setting UMA_ZONE_MAXBUCKET.  This allows the page
  cache buckets to be adaptively sized, like the rest of UMA's caches.
  This also causes the initial bucket size to be small, so only systems
  which benefit from large caches will get them.
  Reviewed by:	gallatin, jeff
  MFC after:	2 weeks
  Sponsored by:	The FreeBSD Foundation
  Differential Revision:	https://reviews.freebsd.org/D22393

Modified:
  head/share/man/man9/zone.9
  head/sys/vm/uma.h
  head/sys/vm/uma_core.c
  head/sys/vm/vm_glue.c
  head/sys/vm/vm_page.c

Modified: head/share/man/man9/zone.9
==============================================================================
--- head/share/man/man9/zone.9	Fri Nov 22 16:28:52 2019	(r355001)
+++ head/share/man/man9/zone.9	Fri Nov 22 16:30:47 2019	(r355002)
@@ -25,7 +25,7 @@
 .\"
 .\" $FreeBSD$
 .\"
-.Dd September 1, 2019
+.Dd November 22, 2019
 .Dt UMA 9
 .Os
 .Sh NAME
@@ -107,7 +107,7 @@ typedef void (*uma_free)(void *item, vm_size_t size, u
 .Fn uma_zone_set_freef "uma_zone_t zone" "uma_free freef"
 .Ft int
 .Fn uma_zone_set_max "uma_zone_t zone" "int nitems"
-.Ft int
+.Ft void
 .Fn uma_zone_set_maxcache "uma_zone_t zone" "int nitems"
 .Ft int
 .Fn uma_zone_get_max "uma_zone_t zone"
@@ -501,11 +501,8 @@ other CPUs when the limit is hit.
 .Pp
 The
 .Fn uma_zone_set_maxcache
-function limits the number of free items which may be cached in the zone,
-excluding the per-CPU caches, which are bounded in size.
-For example, to implement a
-.Ql pure
-per-CPU cache, a cache zone may be configured with a maximum cache size of 0.
+function limits the number of free items which may be cached in the zone.
+This limit applies to both the per-CPU caches and the cache of free buckets.
 .Pp
 The
 .Fn uma_zone_get_max

Modified: head/sys/vm/uma.h
==============================================================================
--- head/sys/vm/uma.h	Fri Nov 22 16:28:52 2019	(r355001)
+++ head/sys/vm/uma.h	Fri Nov 22 16:30:47 2019	(r355002)
@@ -494,7 +494,7 @@ int uma_zone_reserve_kva(uma_zone_t zone, int nitems);
  *	nitems  The requested upper limit on the number of items allowed
  *
  * Returns:
- *	int  The effective value of nitems after rounding up based on page size
+ *	int  The effective value of nitems
  */
 int uma_zone_set_max(uma_zone_t zone, int nitems);
 
@@ -504,11 +504,8 @@ int uma_zone_set_max(uma_zone_t zone, int nitems);
  * Arguments:
  *	zone  The zone to limit
  *	nitems  The requested upper limit on the number of items allowed
- *
- * Returns:
- *	int  The effective value of nitems set
  */
-int uma_zone_set_maxcache(uma_zone_t zone, int nitems);
+void uma_zone_set_maxcache(uma_zone_t zone, int nitems);
 
 /*
  * Obtains the effective limit on the number of items in a zone

Modified: head/sys/vm/uma_core.c
==============================================================================
--- head/sys/vm/uma_core.c	Fri Nov 22 16:28:52 2019	(r355001)
+++ head/sys/vm/uma_core.c	Fri Nov 22 16:30:47 2019	(r355002)
@@ -384,6 +384,29 @@ bucket_zone_lookup(int entries)
 	return (ubz);
 }
 
+static struct uma_bucket_zone *
+bucket_zone_max(uma_zone_t zone, int nitems)
+{
+	struct uma_bucket_zone *ubz;
+	int bpcpu;
+
+	bpcpu = 2;
+#ifdef UMA_XDOMAIN
+	if ((zone->uz_flags & UMA_ZONE_NUMA) != 0)
+		/* Count the cross-domain bucket. */
+		bpcpu++;
+#endif
+
+	for (ubz = &bucket_zones[0]; ubz->ubz_entries != 0; ubz++)
+		if (ubz->ubz_entries * bpcpu * mp_ncpus > nitems)
+			break;
+	if (ubz == &bucket_zones[0])
+		ubz = NULL;
+	else
+		ubz--;
+	return (ubz);
+}
+
 static int
 bucket_select(int size)
 {
@@ -3469,22 +3492,12 @@ int
 uma_zone_set_max(uma_zone_t zone, int nitems)
 {
 	struct uma_bucket_zone *ubz;
+	int count;
 
-	/*
-	 * If limit is very low we may need to limit how
-	 * much items are allowed in CPU caches.
-	 */
-	ubz = &bucket_zones[0];
-	for (; ubz->ubz_entries != 0; ubz++)
-		if (ubz->ubz_entries * 2 * mp_ncpus > nitems)
-			break;
-	if (ubz == &bucket_zones[0])
-		nitems = ubz->ubz_entries * 2 * mp_ncpus;
-	else
-		ubz--;
-
 	ZONE_LOCK(zone);
-	zone->uz_count_max = zone->uz_count = ubz->ubz_entries;
+	ubz = bucket_zone_max(zone, nitems);
+	count = ubz != NULL ? ubz->ubz_entries : 0;
+	zone->uz_count_max = zone->uz_count = count;
 	if (zone->uz_count_min > zone->uz_count_max)
 		zone->uz_count_min = zone->uz_count_max;
 	zone->uz_max_items = nitems;
@@ -3494,15 +3507,30 @@ uma_zone_set_max(uma_zone_t zone, int nitems)
 }
 
 /* See uma.h */
-int
+void
 uma_zone_set_maxcache(uma_zone_t zone, int nitems)
 {
+	struct uma_bucket_zone *ubz;
+	int bpcpu;
 
 	ZONE_LOCK(zone);
+	ubz = bucket_zone_max(zone, nitems);
+	if (ubz != NULL) {
+		bpcpu = 2;
+#ifdef UMA_XDOMAIN
+		if ((zone->uz_flags & UMA_ZONE_NUMA) != 0)
+			/* Count the cross-domain bucket. */
+			bpcpu++;
+#endif
+		nitems -= ubz->ubz_entries * bpcpu * mp_ncpus;
+		zone->uz_count_max = ubz->ubz_entries;
+	} else {
+		zone->uz_count_max = zone->uz_count = 0;
+	}
+	if (zone->uz_count_min > zone->uz_count_max)
+		zone->uz_count_min = zone->uz_count_max;
 	zone->uz_bkt_max = nitems;
 	ZONE_UNLOCK(zone);
-
-	return (nitems);
 }
 
 /* See uma.h */

Modified: head/sys/vm/vm_glue.c
==============================================================================
--- head/sys/vm/vm_glue.c	Fri Nov 22 16:28:52 2019	(r355001)
+++ head/sys/vm/vm_glue.c	Fri Nov 22 16:30:47 2019	(r355002)
@@ -80,6 +80,7 @@ __FBSDID("$FreeBSD$");
 #include <sys/sched.h>
 #include <sys/sf_buf.h>
 #include <sys/shm.h>
+#include <sys/smp.h>
 #include <sys/vmmeter.h>
 #include <sys/vmem.h>
 #include <sys/sx.h>
@@ -266,7 +267,7 @@ vm_sync_icache(vm_map_t map, vm_offset_t va, vm_offset
 }
 
 static uma_zone_t kstack_cache;
-static int kstack_cache_size = 128;
+static int kstack_cache_size;
 static int kstack_domain_iter;
 
 static int
@@ -277,8 +278,7 @@ sysctl_kstack_cache_size(SYSCTL_HANDLER_ARGS)
 	newsize = kstack_cache_size;
 	error = sysctl_handle_int(oidp, &newsize, 0, req);
 	if (error == 0 && req->newptr && newsize != kstack_cache_size)
-		kstack_cache_size =
-		    uma_zone_set_maxcache(kstack_cache, newsize);
+		uma_zone_set_maxcache(kstack_cache, newsize);
 	return (error);
 }
 SYSCTL_PROC(_vm, OID_AUTO, kstack_cache_size, CTLTYPE_INT|CTLFLAG_RW,
@@ -473,7 +473,8 @@ kstack_cache_init(void *null)
 	kstack_cache = uma_zcache_create("kstack_cache",
 	    kstack_pages * PAGE_SIZE, NULL, NULL, NULL, NULL,
 	    kstack_import, kstack_release, NULL,
-	    UMA_ZONE_NUMA|UMA_ZONE_MINBUCKET);
+	    UMA_ZONE_NUMA);
+	kstack_cache_size = imax(128, mp_ncpus * 4);
 	uma_zone_set_maxcache(kstack_cache, kstack_cache_size);
 }
 
Modified: head/sys/vm/vm_page.c
==============================================================================
--- head/sys/vm/vm_page.c	Fri Nov 22 16:28:52 2019	(r355001)
+++ head/sys/vm/vm_page.c	Fri Nov 22 16:30:47 2019	(r355002)
@@ -216,30 +216,28 @@ vm_page_init_cache_zones(void *dummy __unused)
 {
 	struct vm_domain *vmd;
 	struct vm_pgcache *pgcache;
-	int domain, maxcache, pool;
+	int cache, domain, maxcache, pool;
 
 	maxcache = 0;
 	TUNABLE_INT_FETCH("vm.pgcache_zone_max", &maxcache);
 	for (domain = 0; domain < vm_ndomains; domain++) {
 		vmd = VM_DOMAIN(domain);
-
-		/*
-		 * Don't allow the page caches to take up more than .1875% of
-		 * memory.  A UMA bucket contains at most 256 free pages, and we
-		 * have two buckets per CPU per free pool.
-		 */
-		if (vmd->vmd_page_count / 600 < 2 * 256 * mp_ncpus *
-		    VM_NFREEPOOL)
-			continue;
 		for (pool = 0; pool < VM_NFREEPOOL; pool++) {
 			pgcache = &vmd->vmd_pgcache[pool];
 			pgcache->domain = domain;
 			pgcache->pool = pool;
 			pgcache->zone = uma_zcache_create("vm pgcache",
-			    sizeof(struct vm_page), NULL, NULL, NULL, NULL,
+			    PAGE_SIZE, NULL, NULL, NULL, NULL,
 			    vm_page_zone_import, vm_page_zone_release, pgcache,
-			    UMA_ZONE_MAXBUCKET | UMA_ZONE_VM);
-			(void)uma_zone_set_maxcache(pgcache->zone, maxcache);
+			    UMA_ZONE_VM);
+
+			/*
+			 * Limit each pool's zone to 0.1% of the pages in the
+			 * domain.
+			 */
+			cache = maxcache != 0 ? maxcache :
+			    vmd->vmd_page_count / 1000;
+			uma_zone_set_maxcache(pgcache->zone, cache);
 		}
 	}
 }