From owner-svn-src-head@freebsd.org Tue Sep 3 16:14:37 2019
Date: Tue, 3 Sep 2019 19:14:27 +0300
From: Slawa Olhovchenkov <slw@zxy.spb.ru>
To: Andriy Gapon
Cc: Mark Johnston, src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: Re: svn commit: r351673 - in head: lib/libmemstat share/man/man9 sys/cddl/compat/opensolaris/kern sys/kern sys/vm
Message-ID: <20190903161427.GA38096@zxy.spb.ru>
References: <201909012222.x81MMh0F022462@repo.freebsd.org> <79c74018-1329-ee69-3480-e2f99821fa93@FreeBSD.org>
In-Reply-To: <79c74018-1329-ee69-3480-e2f99821fa93@FreeBSD.org>
List-Id: SVN commit messages for the src tree for head/-current
On Tue, Sep 03, 2019 at 10:02:59AM +0300, Andriy Gapon wrote:
> On 02/09/2019 01:22, Mark Johnston wrote:
> > Author: markj
> > Date: Sun Sep  1 22:22:43 2019
> > New Revision: 351673
> > URL: https://svnweb.freebsd.org/changeset/base/351673
> >
> > Log:
> >   Extend uma_reclaim() to permit different reclamation targets.
> >
> >   The page daemon periodically invokes uma_reclaim() to reclaim cached
> >   items from each zone when the system is under memory pressure.  This
> >   is important since the size of these caches is unbounded by default.
> >   However it also results in bursts of high latency when allocating from
> >   heavily used zones as threads miss in the per-CPU caches and must
> >   access the keg in order to allocate new items.
> >
> >   With r340405 we maintain an estimate of each zone's usage of its
> >   (per-NUMA domain) cache of full buckets.  Start making use of this
> >   estimate to avoid reclaiming the entire cache when under memory
> >   pressure.  In particular, introduce TRIM, DRAIN and DRAIN_CPU
> >   verbs for uma_reclaim() and uma_zone_reclaim().  When trimming, only
> >   items in excess of the estimate are reclaimed.  Draining a zone
> >   reclaims all of the cached full buckets (the previous behaviour of
> >   uma_reclaim()), and may further drain the per-CPU caches in extreme
> >   cases.
> >
> >   Now, when under memory pressure, the page daemon will trim zones
> >   rather than draining them.  As a result, heavily used zones do not incur
> >   bursts of bucket cache misses following reclamation, but large, unused
> >   caches will be reclaimed as before.
>
> Mark,
>
> have you considered running UMA_RECLAIM_TRIM periodically, even without
> memory pressure?
> I think that with such periodic trimming there would be less need to invoke
> vm_lowmem().
>
> Also, I think that we would be able to retire (or re-purpose) lowmem_period.
> E.g., the trimming would be done every lowmem_period, but vm_lowmem() would
> not be throttled.
>
> One example of the throttling of vm_lowmem being bad is its interaction
> with the ZFS ARC.  When there is a spike in memory usage we want the ARC to
> adapt as quickly as possible, but at present the lowmem_period logic
> interferes with that.

Some time ago I sent Mark a patch that implements this logic, specifically
to make the ARC and the mbuf zones cooperate.  The main problem I saw in
that work was that vm_page_free() was very slow; perhaps it is faster now.