Date: Fri, 2 Oct 2020 19:16:06 +0000 (UTC)
From: Mark Johnston <markj@FreeBSD.org>
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r366380 - head/sys/vm
Message-ID: <202010021916.092JG682055418@repo.freebsd.org>
Author: markj
Date: Fri Oct  2 19:16:06 2020
New Revision: 366380
URL: https://svnweb.freebsd.org/changeset/base/366380

Log:
  vm_pageout: Avoid rounding down the inactive scan target

  With helper page daemon threads, enabled by default in r364786, we
  divide the inactive target by the number of threads, rounding down,
  and sum the total number of pages freed by the threads.  This sum is
  compared with the original target, but by rounding down we might lose
  pages, causing the page daemon control loop to conclude that inactive
  queue scanning isn't keeping up with demand for free pages.  Typically
  this results in excessive swapping.

  Fix the problem by accounting for the error in the main pagedaemon
  thread's target.  Note that by default the problem will manifest only
  in systems with >16 CPUs in a NUMA domain.

  Reviewed by:	cem
  Discussed with:	dougm
  Reported and tested by:	dhw, glebius
  Sponsored by:	The FreeBSD Foundation
  Differential Revision:	https://reviews.freebsd.org/D26610

Modified:
  head/sys/vm/vm_pageout.c

Modified: head/sys/vm/vm_pageout.c
==============================================================================
--- head/sys/vm/vm_pageout.c	Fri Oct  2 19:04:29 2020	(r366379)
+++ head/sys/vm/vm_pageout.c	Fri Oct  2 19:16:06 2020	(r366380)
@@ -1649,25 +1649,26 @@ reinsert:
 
 /*
  * Dispatch a number of inactive threads according to load and collect the
- * results to prevent a coherent (CEM: incoherent?) view of paging activity on
- * this domain.
+ * results to present a coherent view of paging activity on this domain.
  */
 static int
 vm_pageout_inactive_dispatch(struct vm_domain *vmd, int shortage)
 {
-	u_int freed, pps, threads, us;
+	u_int freed, pps, slop, threads, us;
 
 	vmd->vmd_inactive_shortage = shortage;
+	slop = 0;
 
 	/*
 	 * If we have more work than we can do in a quarter of our interval, we
 	 * fire off multiple threads to process it.
 	 */
-	if (vmd->vmd_inactive_threads > 1 && vmd->vmd_inactive_pps != 0 &&
+	threads = vmd->vmd_inactive_threads;
+	if (threads > 1 && vmd->vmd_inactive_pps != 0 &&
 	    shortage > vmd->vmd_inactive_pps / VM_INACT_SCAN_RATE / 4) {
-		threads = vmd->vmd_inactive_threads;
-		vm_domain_pageout_lock(vmd);
 		vmd->vmd_inactive_shortage /= threads;
+		slop = shortage % threads;
+		vm_domain_pageout_lock(vmd);
 		blockcount_acquire(&vmd->vmd_inactive_starting, threads - 1);
 		blockcount_acquire(&vmd->vmd_inactive_running, threads - 1);
 		wakeup(&vmd->vmd_inactive_shortage);
@@ -1675,7 +1676,7 @@ vm_pageout_inactive_dispatch(struct vm_domain *vmd, in
 	}
 
 	/* Run the local thread scan. */
-	vm_pageout_scan_inactive(vmd, vmd->vmd_inactive_shortage);
+	vm_pageout_scan_inactive(vmd, vmd->vmd_inactive_shortage + slop);
 
 	/*
 	 * Block until helper threads report results and then accumulate