Date:      Mon, 18 Nov 2013 11:20:57 +0200
From:      Alexander Motin <mav@FreeBSD.org>
To:        Adrian Chadd <adrian@freebsd.org>
Cc:        "freebsd-hackers@freebsd.org" <freebsd-hackers@freebsd.org>, "freebsd-current@freebsd.org" <freebsd-current@freebsd.org>
Subject:   Re: UMA cache back pressure
Message-ID:  <5289DBF9.80004@FreeBSD.org>
In-Reply-To: <CAJ-VmokYgfJ1tr-99qCXosBsyTZ698oLZ2oPpkdGODjo8+K3LQ@mail.gmail.com>
References:  <52894C92.60905@FreeBSD.org> <CAJ-VmokYgfJ1tr-99qCXosBsyTZ698oLZ2oPpkdGODjo8+K3LQ@mail.gmail.com>

On 18.11.2013 10:41, Adrian Chadd wrote:
> Your patch does three things:
>
> * adds a couple new buckets;

These new buckets make the bucket size self-tuning softer and more
precise. Without them there are buckets for 1, 5, 13, 29, ... items.
While at the bigger sizes a difference of about 2x between steps is
fine, at the smallest ones it is 5x and 2.6x respectively. The new
buckets make that line look like 1, 3, 5, 9, 13, 29, reducing the jumps
between steps, making the algorithm work more gently and allocate and
free memory in better-fitting chunks. Otherwise there is quite a big
gap between allocating 128K and 5x128K of RAM at once.
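
To illustrate the step ratios (a standalone userspace sketch; the real
size table lives in sys/vm/uma_core.c, and the entries below are just
the sizes named above):

    /*
     * Userspace sketch: ratio between adjacent bucket size steps,
     * before and after adding the new buckets.
     */
    #include <stdio.h>

    static void
    print_ratios(const char *label, const int *steps, int n)
    {
        int i;

        printf("%s:", label);
        for (i = 1; i < n; i++)
            printf(" %.1fx", (double)steps[i] / steps[i - 1]);
        printf("\n");
    }

    int
    main(void)
    {
        const int old_steps[] = { 1, 5, 13, 29 };
        const int new_steps[] = { 1, 3, 5, 9, 13, 29 };

        print_ratios("old", old_steps, 4);  /* 5.0x 2.6x 2.2x */
        print_ratios("new", new_steps, 6);  /* 3.0x 1.7x 1.8x 1.4x 2.2x */
        return (0);
    }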

> * reduces some lock contention

More precisely, the patch adds a check for congestion on free to grow
bucket sizes, the same as is already done on allocation. As a
consequence that should indeed reduce lock contention, but I don't have
specific numbers. All I see is that the VM and UMA mutexes no longer
appear in the profiling top after all these changes.
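
The idea, roughly (a simplified userspace sketch using pthreads in
place of the kernel zone lock; the struct fields and function names
below are illustrative, not the patch's):

    #include <pthread.h>
    #include <stdio.h>

    /* Illustrative per-zone state; not the real uma_zone fields. */
    struct zone {
        pthread_mutex_t z_lock;
        int             z_bucket_size;  /* current per-bucket item target */
        int             z_bucket_max;   /* cap from the size table */
    };

    /*
     * Free path: if the zone lock turns out to be contended, grow the
     * bucket size so later frees (and allocations) batch more items
     * per lock acquisition.
     */
    static void
    zone_free_bucket(struct zone *z)
    {
        if (pthread_mutex_trylock(&z->z_lock) != 0) {
            /* Lock was busy: take it the slow way... */
            pthread_mutex_lock(&z->z_lock);
            /*
             * ...and record the congestion by stepping up the bucket
             * size (by one here; the real code steps through the
             * size table).
             */
            if (z->z_bucket_size < z->z_bucket_max)
                z->z_bucket_size++;
        }
        /* ...hand the full bucket to the zone's bucket list here... */
        pthread_mutex_unlock(&z->z_lock);
    }

    int
    main(void)
    {
        struct zone z = { PTHREAD_MUTEX_INITIALIZER, 1, 29 };

        zone_free_bucket(&z);
        printf("bucket size now %d\n", z.z_bucket_size);
        return (0);
    }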

> * does soft back pressure

In this list you have missed a small but major point of the patch -- we
should prevent problems, not just solve them. As I wrote in the
original email, this specific change showed me a 1.5x performance
improvement in the low-memory condition. As I understand it, that
happened because the VM no longer has to repeatedly allocate and free
hugely oversized buckets of 10-15 * 128K.
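
A minimal sketch of that idea (userspace and self-contained; halving
the target on a low-memory event is an assumption for illustration,
and the zone names and exact step are not taken from the patch):

    #include <stdio.h>

    /* Illustrative zone record; only the bucket size target matters here. */
    struct zone {
        const char *z_name;
        int         z_bucket_size;
    };

    /*
     * Soft back pressure: on a low-memory event, shrink every zone's
     * bucket size target instead of waiting until caches must be
     * drained.  The size grows back later under load, so there is no
     * fixed limit.
     */
    static void
    zone_lowmem(struct zone *zones, int nzones)
    {
        int i;

        for (i = 0; i < nzones; i++)
            if (zones[i].z_bucket_size > 1)
                zones[i].z_bucket_size /= 2;  /* halving is an assumption */
    }

    int
    main(void)
    {
        struct zone zones[] = {
            { "zone_a", 13 },
            { "zone_b", 29 },
        };

        zone_lowmem(zones, 2);
        printf("%s -> %d, %s -> %d\n",
            zones[0].z_name, zones[0].z_bucket_size,
            zones[1].z_name, zones[1].z_bucket_size);
        return (0);
    }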

> * does the aggressive backpressure.

After all of the above, that is mostly just a safety belt. With 40GB
RAM that code was triggered only a couple of times during a full hour
of testing with debug logging inserted there. On a machine with 2GB RAM
it is triggered quite regularly, and that is probably unavoidable,
since even with the lowest bucket size of one item, 24 CPUs mean 48
cache buckets, i.e. up to 6MB of otherwise unreleasable memory for a
single 128K zone.
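
For reference, a userspace sketch of what that last-resort drain and
the 6MB figure amount to (the real code runs in the kernel and has to
visit each CPU's cache safely; all names below are illustrative):

    #include <stdio.h>

    #define NCPU            24
    #define BUCKETS_PER_CPU 2           /* one allocation and one free bucket */
    #define ITEM_SIZE       (128 * 1024)

    /* Illustrative per-CPU cache: items cached for one zone on one CPU. */
    struct cpu_cache {
        int cc_items;
    };

    /*
     * Last-resort pressure: walk every CPU's cache and push its items
     * back to the zone so the VM can actually reclaim the pages.
     */
    static int
    cache_drain_all(struct cpu_cache *caches, int ncpu)
    {
        int cpu, freed = 0;

        for (cpu = 0; cpu < ncpu; cpu++) {
            freed += caches[cpu].cc_items;
            caches[cpu].cc_items = 0;
        }
        return (freed);
    }

    int
    main(void)
    {
        struct cpu_cache caches[NCPU];
        int cpu;

        /* The 6MB above: 24 CPUs x 2 buckets x 1 item x 128K. */
        printf("%d MB pinned at minimum bucket size\n",
            NCPU * BUCKETS_PER_CPU * 1 * ITEM_SIZE >> 20);

        for (cpu = 0; cpu < NCPU; cpu++)
            caches[cpu].cc_items = 1;
        printf("drained %d items\n", cache_drain_all(caches, NCPU));
        return (0);
    }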

> So, do you get any benefits from just the first one, or first two?

I don't see much reason to handle that in pieces. As I have described
above, each part has its own goal, but they work much better together.

> On 17 November 2013 15:09, Alexander Motin <mav@freebsd.org> wrote:
>> Hi.
>>
>> I've created a patch, based on earlier work of avg@, to add back pressure to
>> UMA allocation caches. The problem of physical memory or KVA exhaustion has
>> existed there for many years, and it is quite critical now for improving
>> system performance while keeping stability. Changes done in memory
>> allocation in recent years improved the situation, but haven't fixed it
>> completely. My patch addresses the remaining problems from two sides: a)
>> reducing bucket sizes every time the system detects a low memory condition;
>> and b) as a last-resort mechanism for very low memory conditions, cycling
>> over all CPUs to purge their per-CPU UMA caches. The benefit of this
>> approach is the absence of any additional hard-coded limits on cache sizes
>> -- they are self-tuned, based on load and memory pressure.
>>
>> With this change I believe it should be safe enough to enable UMA allocation
>> caches in ZFS via the vfs.zfs.zio.use_uma tunable (at least for amd64). I
>> did many tests on a machine with 24 logical cores (and as a result strong
>> allocation cache effects), and can say that with 40GB RAM, using the UMA
>> caches allowed by this change doubles the results of the SPEC NFS benchmark
>> on a ZFS pool of several SSDs. To test system stability I've run the same
>> test with physical memory limited to just 2GB; the system successfully
>> survived that, and even showed results 1.5 times better than with just the
>> last-resort measures of b). In both cases tools/umastat no longer shows
>> unbounded UMA cache growth, which makes me believe in the viability of this
>> approach for longer runs.
>>
>> I would like to hear some comments about that:
>> http://people.freebsd.org/~mav/uma_pressure.patch


-- 
Alexander Motin


