Date: Tue, 22 Oct 2019 14:20:06 +0000 (UTC)
From: Mark Johnston <markj@FreeBSD.org>
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r353886 - head/sys/vm
Message-ID: <201910221420.x9MEK60F088425@repo.freebsd.org>
Author: markj
Date: Tue Oct 22 14:20:06 2019
New Revision: 353886
URL: https://svnweb.freebsd.org/changeset/base/353886

Log:
  Avoid reloading bucket pointers in uma_vm_zone_stats().

  The correctness of per-CPU cache accounting in that function depends
  on reading each per-CPU bucket pointer exactly once.  Ensure that the
  compiler does not emit multiple loads of those pointers.

  Reported and tested by:	pho
  Reviewed by:	kib
  MFC after:	1 week
  Sponsored by:	The FreeBSD Foundation
  Differential Revision:	https://reviews.freebsd.org/D22081

Modified:
  head/sys/vm/uma_core.c

Modified: head/sys/vm/uma_core.c
==============================================================================
--- head/sys/vm/uma_core.c	Tue Oct 22 14:11:22 2019	(r353885)
+++ head/sys/vm/uma_core.c	Tue Oct 22 14:20:06 2019	(r353886)
@@ -4055,6 +4055,7 @@ uma_vm_zone_stats(struct uma_type_header *uth, uma_zon
     struct uma_percpu_stat *ups, bool internal)
 {
 	uma_zone_domain_t zdom;
+	uma_bucket_t bucket;
 	uma_cache_t cache;
 	int i;

@@ -4068,28 +4069,29 @@ uma_vm_zone_stats(struct uma_type_header *uth, uma_zon
 	uth->uth_fails = counter_u64_fetch(z->uz_fails);
 	uth->uth_sleeps = z->uz_sleeps;
 	uth->uth_xdomain = z->uz_xdomain;
+
 	/*
-	 * While it is not normally safe to access the cache
-	 * bucket pointers while not on the CPU that owns the
-	 * cache, we only allow the pointers to be exchanged
-	 * without the zone lock held, not invalidated, so
-	 * accept the possible race associated with bucket
-	 * exchange during monitoring.
+	 * While it is not normally safe to access the cache bucket pointers
+	 * while not on the CPU that owns the cache, we only allow the pointers
+	 * to be exchanged without the zone lock held, not invalidated, so
+	 * accept the possible race associated with bucket exchange during
+	 * monitoring.  Use atomic_load_ptr() to ensure that the bucket pointers
+	 * are loaded only once.
 	 */
 	for (i = 0; i < mp_maxid + 1; i++) {
 		bzero(&ups[i], sizeof(*ups));
 		if (internal || CPU_ABSENT(i))
 			continue;
 		cache = &z->uz_cpu[i];
-		if (cache->uc_allocbucket != NULL)
-			ups[i].ups_cache_free +=
-			    cache->uc_allocbucket->ub_cnt;
-		if (cache->uc_freebucket != NULL)
-			ups[i].ups_cache_free +=
-			    cache->uc_freebucket->ub_cnt;
-		if (cache->uc_crossbucket != NULL)
-			ups[i].ups_cache_free +=
-			    cache->uc_crossbucket->ub_cnt;
+		bucket = (uma_bucket_t)atomic_load_ptr(&cache->uc_allocbucket);
+		if (bucket != NULL)
+			ups[i].ups_cache_free += bucket->ub_cnt;
+		bucket = (uma_bucket_t)atomic_load_ptr(&cache->uc_freebucket);
+		if (bucket != NULL)
+			ups[i].ups_cache_free += bucket->ub_cnt;
+		bucket = (uma_bucket_t)atomic_load_ptr(&cache->uc_crossbucket);
+		if (bucket != NULL)
+			ups[i].ups_cache_free += bucket->ub_cnt;
 		ups[i].ups_allocs = cache->uc_allocs;
 		ups[i].ups_frees = cache->uc_frees;
 	}
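For readers outside the kernel tree, here is a minimal userland sketch
of the single-load pattern the diff adopts.  It substitutes C11
atomic_load_explicit() for the kernel's atomic_load_ptr(); the names
struct bucket, shared_bucket, and snapshot_free are illustrative only
and do not appear in uma_core.c.  As in UMA, the assumption is that the
shared pointer may be exchanged concurrently but the buckets it points
to are never invalidated while readers run.

	#include <stdatomic.h>
	#include <stddef.h>

	struct bucket {
		int	cnt;		/* number of cached items */
	};

	/* Exchanged without a lock by other threads, never freed. */
	static struct bucket *_Atomic shared_bucket;

	int
	snapshot_free(void)
	{
		struct bucket *b;

		/*
		 * Load the shared pointer exactly once and use the local
		 * copy for both the NULL check and the dereference.  With
		 * a plain (non-atomic) pointer, e.g.
		 *
		 *	if (shared_bucket != NULL)
		 *		free_count += shared_bucket->cnt;
		 *
		 * the compiler may legally emit two loads, and a
		 * concurrent exchange between them would dereference a
		 * pointer that was never NULL-checked.
		 */
		b = atomic_load_explicit(&shared_bucket,
		    memory_order_relaxed);
		if (b != NULL)
			return (b->cnt);
		return (0);
	}

memory_order_relaxed suffices here because only the single-load
guarantee matters, not ordering against other memory accesses; the
kernel's atomic_load_ptr() similarly compiles to one ordinary load that
the compiler may not split or repeat.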