From: Mark Johnston <markj@FreeBSD.org>
Date: Thu, 9 Aug 2018 18:25:49 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r337547 - head/sys/vm
Message-Id: <201808091825.w79IPnLo091527@repo.freebsd.org>
X-SVN-Group: head
X-SVN-Commit-Author: markj
X-SVN-Commit-Paths: head/sys/vm
X-SVN-Commit-Revision: 337547
X-SVN-Commit-Repository: base
Author: markj
Date: Thu Aug  9 18:25:49 2018
New Revision: 337547
URL: https://svnweb.freebsd.org/changeset/base/337547

Log:
  Account for the lowmem handlers in the inactive queue scan target.

  Before r329882 the target would be computed after lowmem handlers run
  and free pages.  On some systems a significant amount of page
  reclamation happens this way.  However, with r329882 the target is
  computed first, which can lead to unnecessary reclamation from the
  page cache, and this in turn may result in excessive swapping.

  Instead, adjust the target after running lowmem handlers.  Don't
  invoke the lowmem handlers before the PID controller, though, since
  that would hide the true rate of page allocation.

  Reviewed by:	alc, kib (previous version)
  Sponsored by:	The FreeBSD Foundation
  Differential Revision:	https://reviews.freebsd.org/D16606

Modified:
  head/sys/vm/vm_pageout.c

Modified: head/sys/vm/vm_pageout.c
==============================================================================
--- head/sys/vm/vm_pageout.c	Thu Aug  9 17:53:18 2018	(r337546)
+++ head/sys/vm/vm_pageout.c	Thu Aug  9 18:25:49 2018	(r337547)
@@ -152,7 +152,6 @@ static int vm_pageout_oom_seq = 12;
 static int vm_pageout_update_period;
 static int disable_swap_pageouts;
 static int lowmem_period = 10;
-static time_t lowmem_uptime;
 static int swapdev_enabled;
 
 static int vm_panic_on_oom = 0;
@@ -1856,12 +1855,17 @@ vm_pageout_oom(int shortage)
 	}
 }
 
-static void
-vm_pageout_lowmem(struct vm_domain *vmd)
+static bool
+vm_pageout_lowmem(void)
 {
+	static int lowmem_ticks = 0;
+	int last;
 
-	if (vmd == VM_DOMAIN(0) &&
-	    time_uptime - lowmem_uptime >= lowmem_period) {
+	last = atomic_load_int(&lowmem_ticks);
+	while ((u_int)(ticks - last) / hz >= lowmem_period) {
+		if (atomic_fcmpset_int(&lowmem_ticks, &last, ticks) == 0)
+			continue;
+
 		/*
 		 * Decrease registered cache sizes.
 		 */
@@ -1873,14 +1877,16 @@ vm_pageout_lowmem(struct vm_domain *vmd)
 		 * drained above.
 		 */
 		uma_reclaim();
-		lowmem_uptime = time_uptime;
+		return (true);
 	}
+	return (false);
 }
 
 static void
 vm_pageout_worker(void *arg)
 {
 	struct vm_domain *vmd;
+	u_int ofree;
 	int addl_shortage, domain, shortage;
 	bool target_met;
 
@@ -1939,11 +1945,16 @@ vm_pageout_worker(void *arg)
 
 		/*
 		 * Use the controller to calculate how many pages to free in
-		 * this interval, and scan the inactive queue.
+		 * this interval, and scan the inactive queue.  If the lowmem
+		 * handlers appear to have freed up some pages, subtract the
+		 * difference from the inactive queue scan target.
 		 */
 		shortage = pidctrl_daemon(&vmd->vmd_pid, vmd->vmd_free_count);
 		if (shortage > 0) {
-			vm_pageout_lowmem(vmd);
+			ofree = vmd->vmd_free_count;
+			if (vm_pageout_lowmem() && vmd->vmd_free_count > ofree)
+				shortage -= min(vmd->vmd_free_count - ofree,
+				    (u_int)shortage);
 			target_met = vm_pageout_scan_inactive(vmd, shortage,
 			    &addl_shortage);
 		} else