From: Mark Johnston <markj@FreeBSD.org>
Date: Fri, 22 Nov 2019 16:31:31 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r355004 - head/sys/vm
Message-Id: <201911221631.xAMGVVxU081809@repo.freebsd.org>

Author: markj
Date: Fri Nov 22 16:31:30 2019
New Revision: 355004

URL: https://svnweb.freebsd.org/changeset/base/355004

Log:
  Reclaim memory from UMA if the page daemon is struggling.

  Use the UMA reclaim thread to asynchronously drain all caches if there
  is a severe shortage in a domain.  Otherwise we only trigger UMA
  reclamation every 10s even when the system has completely run out of
  memory.

  Stop entirely draining the caches when one domain falls below its min
  threshold.  In some workloads it is normal for one NUMA domain to end
  up being nearly depleted by kernel memory allocations, for example for
  the ZFS ARC.  The domainset iterators skip domains below the
  vmd_min_free threshold on the first iteration, so we should allow that
  mechanism to limit further depletion of the domain's free pages before
  taking the extreme step of calling uma_reclaim(UMA_RECLAIM_DRAIN_CPU).

  Discussed with:	jeff
  MFC after:	2 weeks
  Sponsored by:	The FreeBSD Foundation
  Differential Revision:	https://reviews.freebsd.org/D22395

Modified:
  head/sys/vm/vm_pageout.c
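[Editor's note: the diff below reworks vm_pageout_lowmem(). To make its
rate-limiting idiom concrete (a lossy compare-and-set on a shared tick
stamp, so at most one thread fires a lowmem event per lowmem_period),
here is a minimal userspace sketch. This is not kernel code: C11
atomics stand in for atomic_load_int() and atomic_fcmpset_int(), and
ticks, hz, and lowmem_period are simulated stand-ins for the kernel
globals of the same names.]

/*
 * Userspace model of the lowmem rate limiter in vm_pageout_lowmem().
 * The simulated "clock" is advanced by main(); in the kernel, ticks is
 * a global updated by the timer interrupt.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int lowmem_ticks;         /* tick stamp of the last event */
static int ticks;                       /* simulated clock tick counter */
static const int hz = 100;              /* simulated ticks per second */
static const int lowmem_period = 10;    /* seconds between lowmem events */

static bool
lowmem_should_fire(void)
{
        int last;

        last = atomic_load(&lowmem_ticks);
        while ((unsigned)(ticks - last) / hz >= lowmem_period) {
                /*
                 * One thread wins the race to restamp lowmem_ticks and
                 * fires the event; losers have "last" refreshed by the
                 * failed CAS and normally see the period as reset.
                 */
                if (atomic_compare_exchange_weak(&lowmem_ticks, &last,
                    ticks))
                        return (true);
        }
        return (false);
}

int
main(void)
{
        for (ticks = 0; ticks < 3000; ticks++)
                if (lowmem_should_fire())
                        printf("lowmem event at tick %d\n", ticks);
        return (0);
}

Run single-threaded, this fires once per simulated 10 seconds (ticks
1000 and 2000); the retry loop only matters when several page daemon
threads, one per NUMA domain, race on lowmem_ticks.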
Modified: head/sys/vm/vm_pageout.c
==============================================================================
--- head/sys/vm/vm_pageout.c	Fri Nov 22 16:31:10 2019	(r355003)
+++ head/sys/vm/vm_pageout.c	Fri Nov 22 16:31:30 2019	(r355004)
@@ -1965,12 +1965,20 @@ vm_pageout_oom(int shortage)
 	}
 }
 
+/*
+ * Signal a free page shortage to subsystems that have registered an event
+ * handler.  Reclaim memory from UMA in the event of a severe shortage.
+ * Return true if the free page count should be re-evaluated.
+ */
 static bool
 vm_pageout_lowmem(void)
 {
 	static int lowmem_ticks = 0;
 	int last;
+	bool ret;
 
+	ret = false;
+
 	last = atomic_load_int(&lowmem_ticks);
 	while ((u_int)(ticks - last) / hz >= lowmem_period) {
 		if (atomic_fcmpset_int(&lowmem_ticks, &last, ticks) == 0)
@@ -1984,15 +1992,27 @@ vm_pageout_lowmem(void)
 
 		/*
 		 * We do this explicitly after the caches have been
-		 * drained above.  If we have a severe page shortage on
-		 * our hands, completely drain all UMA zones.  Otherwise,
-		 * just prune the caches.
+		 * drained above.
 		 */
-		uma_reclaim(vm_page_count_min() ? UMA_RECLAIM_DRAIN_CPU :
-		    UMA_RECLAIM_TRIM);
-		return (true);
+		uma_reclaim(UMA_RECLAIM_TRIM);
+		ret = true;
 	}
-	return (false);
+
+	/*
+	 * Kick off an asynchronous reclaim of cached memory if one of the
+	 * page daemons is failing to keep up with demand.  Use the "severe"
+	 * threshold instead of "min" to ensure that we do not blow away the
+	 * caches if a subset of the NUMA domains are depleted by kernel memory
+	 * allocations; the domainset iterators automatically skip domains
+	 * below the "min" threshold on the first pass.
+	 *
+	 * UMA reclaim worker has its own rate-limiting mechanism, so don't
+	 * worry about kicking it too often.
+	 */
+	if (vm_page_count_severe())
+		uma_reclaim_wakeup();
+
+	return (ret);
 }
 
 static void
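[Editor's note: the resulting control flow can be modeled in isolation.
The sketch below is hypothetical userspace code; the stubs stand in for
vm_page_count_severe(), uma_reclaim(UMA_RECLAIM_TRIM), and
uma_reclaim_wakeup(). It only illustrates the policy the hunks above
establish: trim caches at most once per period, and request an
asynchronous drain whenever a domain is severely short, leaving
min-threshold domains to the domainset iterators.]

/*
 * Hypothetical model of the reworked vm_pageout_lowmem() policy.
 */
#include <stdbool.h>
#include <stdio.h>

static bool severe;                     /* stub: vm_page_count_severe() */

static void
trim_caches(void)                       /* stub: uma_reclaim(UMA_RECLAIM_TRIM) */
{
        printf("trimming UMA caches\n");
}

static void
wake_reclaim_worker(void)               /* stub: uma_reclaim_wakeup() */
{
        printf("async drain requested\n");
}

static bool
pageout_lowmem(bool period_elapsed)
{
        bool ret = false;

        if (period_elapsed) {
                trim_caches();
                ret = true;
        }

        /*
         * Gate the drain on the severe threshold, not min: one NUMA
         * domain depleted by kernel allocations no longer flushes every
         * cache, and since the reclaim worker is itself rate limited,
         * a redundant wakeup is cheap.
         */
        if (severe)
                wake_reclaim_worker();

        return (ret);
}

int
main(void)
{
        severe = true;
        (void)pageout_lowmem(false);    /* severe shortage: wakeup only */
        severe = false;
        (void)pageout_lowmem(true);     /* period elapsed: trim only */
        return (0);
}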