Date: Mon, 18 Nov 2013 17:50:54 -1000 (HST)
From: Jeff Roberson
To: Alexander Motin
Cc: freebsd-hackers@freebsd.org, freebsd-current@freebsd.org
Subject: Re: UMA cache back pressure
In-Reply-To: <528A70A2.4010308@FreeBSD.org>
References: <52894C92.60905@FreeBSD.org> <528A70A2.4010308@FreeBSD.org>

On Mon, 18 Nov 2013, Alexander Motin wrote:

> On 18.11.2013 21:11, Jeff Roberson wrote:
>> On Mon, 18 Nov 2013, Alexander Motin wrote:
>>> I've created a patch, based on earlier work by avg@, to add back
>>> pressure to UMA allocation caches. The problem of physical memory or
>>> KVA exhaustion has existed there for many years, and solving it is now
>>> critical for improving system performance while keeping stability.
>>> Changes made to memory allocation over the last years improved the
>>> situation, but haven't fixed it completely. My patch addresses the
>>> remaining problems from two sides: a) reducing bucket sizes every time
>>> the system detects a low-memory condition; and b) as a last-resort
>>> mechanism for very low memory conditions, cycling over all CPUs to
>>> purge their per-CPU UMA caches. The benefit of this approach is the
>>> absence of any additional hard-coded limits on cache sizes -- they are
>>> self-tuned, based on load and memory pressure.
>>>
>>> With this change I believe it should be safe enough to enable UMA
>>> allocation caches in ZFS via the vfs.zfs.zio.use_uma tunable (at least
>>> on amd64). I ran many tests on a machine with 24 logical cores (and,
>>> as a result, strong allocation cache effects), and can say that with
>>> 40GB RAM, using the UMA caches allowed by this change doubles the
>>> results of the SPEC NFS benchmark on a ZFS pool of several SSDs. To
>>> test system stability I ran the same test with physical memory limited
>>> to just 2GB; the system successfully survived it, and even showed
>>> results 1.5 times better than with just the last-resort measures of
>>> b). In both cases tools/umastat no longer shows unbounded UMA cache
>>> growth, which makes me believe in the viability of this approach for
>>> longer runs.
>>>
>>> I would like to hear some comments about it:
>>> http://people.freebsd.org/~mav/uma_pressure.patch
>>
>> Hey Mav,
>>
>> This is a great start and great results. I think it could probably even
>> go in as-is, but I have a few suggestions.
>
> Hey! Thanks for your review. I appreciate it.

And I appreciate more people being interested in working on the allocator.

>
>> First, let's test this with something that is really super allocator
>> heavy and doesn't benefit much from bucket sizing. For example, a
>> network forwarding test. Or maybe you could get someone like Netflix
>> that is using it to push a lot of bits with less filesystem cost than
>> zfs and spec.
>
> I am not sure what simple forwarding may show in this case. Even on my
> workload, with ZFS creating strong memory pressure, I still have mbuf*
> zone buckets almost (some totally) maxed out. Without other major (or
> even any) pressure in the system they just can't become bigger than the
> maximum. But if you can propose some interesting test case with pressure
> that I can reproduce -- I am all ears.

I think part of that is also because you're using min free pages right
now as your threshold.
It should probably be triggering slightly more often.

>
>> Second, the cpu binding is a very costly and very high-latency
>> operation. It would make sense to do CPU_FOREACH and then ZONE_FOREACH.
>> You're also biasing the first zones in the list. The low memory
>> condition will more often clear after you check these first zones. So
>> you might just check it once and equally penalize all zones. I'm
>> concerned that doing CPU_FOREACH in every zone will slow the pagedaemon
>> more.
>
> I completely agree with everything you said here. This part of the code
> I just took as-is from the earlier work. It can definitely be improved;
> I'll take a look at it. But, as I mentioned in one of my earlier
> responses, that code is used in _very_ rare cases, unless the system is
> heavily overloaded on memory, like running ZFS on a box with 24 cores
> and 2GB RAM. During reasonable operation the soft back pressure is
> enough to keep the caches in shape without ever calling it.
>
>> We also have been working towards per-domain pagedaemons, so perhaps we
>> should have a uma-reclaim taskqueue that we wake up to do the work?
>
> VM is not my area so far, so please propose "the right way". I took on
> this task now only because I had to, due to the huge performance
> bottleneck this problem causes and the years it has remained unsolved.

Well, it's probably fine to keep abusing the first domain's pageout
daemon for now, but we won't want to in the future, especially if we want
to keep each domain's page daemon on the socket that it's managing.

>
>> Third, using vm_page_count_min() will only trigger when the pageout
>> daemon can't keep up with the free target. Typically this should only
>> happen with a lot of dirty mmap'd pages or incredibly high system load
>> coupled with frequent allocations. So there may be many cases where
>> reclaiming the extra UMA memory is helpful but the pagedaemon can still
>> keep up while pushing out file pages that we'd prefer to keep.
>
> As I said, that is indeed a last resort.
> It does not need to be done often. Per-CPU caches just should not grow,
> without real need, to the point where they have to be cleaned.

Let me explain it differently. Right now you're handling cases of an
overloaded CPU; if we ran this code under different conditions we could
handle overloaded memory better as well. Imagine a system which has
oversized buckets and lots of wasted memory, but a pageout daemon which is
still meeting its targets by evicting page cache pages. Perhaps there was a
temporary use of some very large zones which is no longer necessary. Since
we meet the paging target quickly enough, we will never discover this other
memory that we could evict. Look at the vm page targets: the target is very
far from the min, so typically the thread just wakes up and evicts clean
pages very quickly to accommodate it. ZFS is particularly affected because
its pages can't be evicted by the page daemon, so you're more likely to run
out; but other systems would benefit from this too, and they do have pages
which could be evicted where you'd prefer to preserve them by trimming the
UMA cache. Does that make sense?

>
>> I think the perfect heuristic would have some idea of how likely the UMA
>> pages are to be re-used immediately so we can more effectively trade off
>> between file pages and kernel memory cache. As it is now we limit the
>> uma_reclaim() calls to every 10 seconds when there is memory pressure.
>> Perhaps we could keep a timestamp for when the last slab was allocated
>> to a zone and do the more expensive reclaim on zones whose timestamps
>> exceed some threshold? Then have a lower threshold for reclaiming at
>> all? Again, it doesn't need to be perfect, but I believe we can catch a
>> wider set of cases by carefully scheduling this.
>
> I was thinking about that too, but I think the timestamps should be set
> not on the slab but on the bucket. The fact that a zone is not
> allocating new slabs does not mean it is not using its already
> allocated buckets.
> If we put the time of the last refill into each bucket, then we should
> be able to purge all buckets unused for a specified period of time.
> Additionally, we could put a timestamp on the zone and update it every
> time the zone runs out of its cache. If a zone does not run out of its
> cache for some time, it probably has unused buckets. So when we need
> some RAM we should look first at zones with stale timestamps.

Many healthy flow-control algorithms maintain a relatively steady state by
periodically testing the edges. I would prefer to maintain the timestamp on
a per-zone basis and not per-bucket anyway, as it saves some space, and
we'd have to resize all the buckets if we took up another pointer's worth
of space. Anyway, I'm not too dogmatic about it. There are probably several
convenient ways to write it and no perfect one.

May I suggest that you make the change to only CPU_FOREACH once and then
commit with your current heuristic? Then we can try to take it one step
further.

Thanks,
Jeff

> --
> Alexander Motin