Date:      Mon, 18 Nov 2013 04:10:19 -0800
From:      Adrian Chadd <adrian@freebsd.org>
To:        Alexander Motin <mav@freebsd.org>
Cc:        "freebsd-hackers@freebsd.org" <freebsd-hackers@freebsd.org>, "freebsd-current@freebsd.org" <freebsd-current@freebsd.org>
Subject:   Re: UMA cache back pressure
Message-ID:  <CAJ-VmomiFBQaNUweOO56rkOYtQOvUdsa1O=2WuYpeKxyTka+WA@mail.gmail.com>
In-Reply-To: <5289DBF9.80004@FreeBSD.org>
References:  <52894C92.60905@FreeBSD.org> <CAJ-VmokYgfJ1tr-99qCXosBsyTZ698oLZ2oPpkdGODjo8+K3LQ@mail.gmail.com> <5289DBF9.80004@FreeBSD.org>

On 18 November 2013 01:20, Alexander Motin <mav@freebsd.org> wrote:
> On 18.11.2013 10:41, Adrian Chadd wrote:
>>
>> Your patch does three things:
>>
>> * adds a couple new buckets;
>
>
> These new buckets make bucket size self-tuning softer and more precise.
> Without them there are buckets for 1, 5, 13, 29, ... items. While at bigger
> sizes a difference of about 2x is fine, at the smallest ones it is 5x and
> 2.6x respectively. The new buckets make that series look like 1, 3, 5, 9, 13,
> 29, reducing the jumps between steps and making the algorithm work more
> gently, allocating and freeing memory in better-fitting chunks. Otherwise
> there is quite a big gap between allocating 128K and 5x128K of RAM at once.

Right. That makes sense, but your initial email didn't say "oh, I'm
adding more buckets." :-)
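
(As a concrete illustration only -- a minimal userland sketch of the idea. The
sizes just follow the numbers quoted above, and the table/helper names are
invented for the example; this is not the actual bucket_zones[] table in
sys/vm/uma_core.c:)

    /*
     * Illustrative only: a finer-grained size table means the next step up
     * from a 1-item bucket is 3 items, not 5, so a zone of 128K items grows
     * its per-CPU buckets in smaller increments.
     */
    #include <stdio.h>

    static const int bucket_sizes[] = { 1, 3, 5, 9, 13, 29 };
    #define NSIZES  ((int)(sizeof(bucket_sizes) / sizeof(bucket_sizes[0])))

    /* Pick the smallest bucket that holds at least 'want' items. */
    static int
    bucket_select(int want)
    {
            int i;

            for (i = 0; i < NSIZES; i++)
                    if (bucket_sizes[i] >= want)
                            return (bucket_sizes[i]);
            return (bucket_sizes[NSIZES - 1]);
    }

    int
    main(void)
    {
            /* Stepping up from 1 item now costs 3 * 128K, not 5 * 128K. */
            printf("want 2 -> %d-item bucket\n", bucket_select(2));
            printf("want 6 -> %d-item bucket\n", bucket_select(6));
            return (0);
    }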

>
>> * reduces some lock contention
>
>
> More precisely, the patch adds a check for congestion on free to grow bucket
> sizes the same way as on allocation. As a consequence that should indeed
> reduce lock contention, but I don't have specific numbers. All I can see is
> that the VM and UMA mutexes no longer appear at the top of profiling after
> all these changes.

Sure. But again, you don't say that in your commit message. :)
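
(Again as an aside, roughly the shape of that idea in a standalone sketch -- a
pthread trylock stands in for the zone lock, and all the names here are made
up for the example; this is not the uma_core.c code itself:)

    /*
     * Sketch: if the free path finds the zone lock contended, remember to
     * use a bigger per-CPU bucket next time, as the alloc path already does,
     * so fewer frees have to take the zone lock at all.
     */
    #include <pthread.h>
    #include <stdio.h>

    struct zone_hint {
            pthread_mutex_t lock;           /* stands in for the zone lock */
            int             count;          /* desired items per bucket */
            int             count_max;      /* cap on bucket growth */
    };

    static void
    zone_free_bucket(struct zone_hint *z)
    {
            if (pthread_mutex_trylock(&z->lock) != 0) {
                    /* Contended: grow the bucket size hint, then wait. */
                    if (z->count < z->count_max)
                            z->count++;
                    pthread_mutex_lock(&z->lock);
            }
            /* ... hand the bucket's items back to the zone here ... */
            pthread_mutex_unlock(&z->lock);
    }

    int
    main(void)
    {
            struct zone_hint z;

            pthread_mutex_init(&z.lock, NULL);
            z.count = 1;
            z.count_max = 29;
            zone_free_bucket(&z);
            printf("bucket size hint: %d\n", z.count);
            return (0);
    }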

>> * does soft back pressure
>
> In this list you have missed mentioning a small but major point of the patch
> -- we should prevent problems, not just solve them. As I wrote in the
> original email, this specific change showed me a 1.5x performance improvement
> in low-memory conditions. As I understand it, that happened because the VM no
> longer has to repeatedly allocate and free hugely oversized buckets of 10-15 *
> 128K.

yup, sorry I missed this. It's a sneaky two lines. :)
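
(For what it's worth, a tiny standalone sketch of that "prevent rather than
cure" idea. lowmem() is just a stand-in predicate invented for the example; in
the kernel a check along the lines of vm_page_count_severe() would play that
role:)

    /*
     * Sketch: under memory pressure, step the desired bucket size down
     * instead of up, so a 128K zone stops accumulating oversized buckets
     * that the VM would immediately have to reclaim again.
     */
    #include <stdbool.h>
    #include <stdio.h>

    static bool
    lowmem(void)
    {
            return (true);  /* pretend the page daemon is struggling */
    }

    static int
    bucket_size_adjust(int count, int count_max)
    {
            if (lowmem())
                    return (count > 1 ? count / 2 : 1);     /* soft shrink */
            return (count < count_max ? count + 1 : count); /* normal growth */
    }

    int
    main(void)
    {
            /* A zone that had grown to 13-item buckets backs off to 6. */
            printf("new bucket size: %d\n", bucket_size_adjust(13, 29));
            return (0);
    }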

>
>> * does the aggressive backpressure.
>
>
> After all of the above, that is mostly just a safety belt. With 40GB of RAM
> that code was triggered only a couple of times during a full hour of testing
> with debug logging inserted there. On a machine with 2GB of RAM it is
> triggered quite regularly, and that is probably unavoidable, since even with
> the lowest bucket size of one item, 24 CPUs mean 48 cache buckets, i.e. up to
> 6MB of otherwise unreleasable memory for a single 128K zone.
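
(Spelling out the arithmetic behind that figure: each per-CPU cache keeps both
an allocation bucket and a free bucket, so

    24 CPUs x 2 buckets x 1 item x 128K = 48 x 128K = 6MB

of cached memory that the zone cannot release on its own.)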
>
>
>> So, do you get any benefits from just the first one, or first two?
>
>
> I don't see much reason to handle that in pieces. As I have described above,
> each part has its own goal, but they work much better together.

Well, with changes like this, having them broken up and committed in
small pieces makes it easier for people to do regression testing.

If you introduce some regression in a particular workload, then the
user or developer is only going to find that it's this patch, and won't
necessarily know how to break it down into pieces to see which piece
actually introduced the regression in their specific workload.

I totally agree that this should be done! It just seems to be
something that could be committed in smaller pieces quite easily, so as
to make potential debugging later on down the road much easier. Each
commit builds on the previous commit.

So, something like (in order):

* add two new buckets, here's why
* fix locking, here's why
* soft back pressure
* aggressive backpressure

Did you get profiling traces from the VM free paths? Is it because
it's churning the physical pages through the VM physical allocator?
or?



-adrian


