Date:      Mon, 18 Nov 2013 11:59:34 +0200
From:      Alexander Motin <mav@FreeBSD.org>
To:        Luigi Rizzo <rizzo@iet.unipi.it>
Cc:        "freebsd-hackers@freebsd.org" <freebsd-hackers@freebsd.org>, Adrian Chadd <adrian@freebsd.org>, "freebsd-current@freebsd.org" <freebsd-current@freebsd.org>
Subject:   Re: UMA cache back pressure
Message-ID:  <5289E506.2070207@FreeBSD.org>
In-Reply-To: <CA+hQ2+joZRJYmPdqi_0G3iRgAd_8rGVGayFT7FfHZ6MS_zziBQ@mail.gmail.com>
References:  <52894C92.60905@FreeBSD.org> <CAJ-VmokYgfJ1tr-99qCXosBsyTZ698oLZ2oPpkdGODjo8+K3LQ@mail.gmail.com> <5289DBF9.80004@FreeBSD.org> <CA+hQ2+joZRJYmPdqi_0G3iRgAd_8rGVGayFT7FfHZ6MS_zziBQ@mail.gmail.com>

On 18.11.2013 11:45, Luigi Rizzo wrote:
>
>
>
> On Mon, Nov 18, 2013 at 10:20 AM, Alexander Motin <mav@freebsd.org> wrote:
>
>     On 18.11.2013 10:41, Adrian Chadd wrote:
>
>         Your patch does three things:
>
>         * adds a couple new buckets;
>
>
>     These new buckets make the bucket size self-tuning smoother and
>     more precise. Without them there are buckets for 1, 5, 13, 29, ...
>     items. While at the larger sizes a roughly 2x step is fine, at the
>     smallest sizes the jumps are 5x and 2.6x. The new buckets change
>     that sequence to 1, 3, 5, 9, 13, 29, reducing the jumps between
>     steps, so the algorithm works more gradually, allocating and
>     freeing memory in better-fitting chunks. Otherwise there is quite
>     a big gap between allocating 128K and 5x128K of RAM at once.
>
>
> just curious (and I do not understand whether the "1, 5 ..." are object
> sizes in bytes or what),

Buckets consist of a header (about 3 pointers) plus one pointer per 
item, so on amd64 the 1-, 5- and 13-item buckets take 32, 64 and 128 
bytes each. It is not really about saving memory on the buckets 
themselves, since they are tiny compared to the items they store. We 
could use a bigger bucket zone (say, 16 items) to allocate all the 
smaller ones and just override their item limit, but more zones also 
potentially mean lower zone lock congestion, so why not?
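
To make the arithmetic concrete, here is a small userland sketch (not 
kernel code; the ~3-pointer header and the item counts are just the 
figures quoted above) that prints the per-bucket footprint of the old 
and new item-count sequences on a 64-bit machine:

#include <stdio.h>

/*
 * Approximate bucket footprint: a ~3-pointer header plus one pointer
 * per cached item, as described above.  Illustrative only.
 */
static size_t
bucket_bytes(int items)
{
	return (3 * sizeof(void *) + (size_t)items * sizeof(void *));
}

static void
print_seq(const char *label, const int *seq, size_t n)
{
	size_t i;

	printf("%s:\n", label);
	for (i = 0; i < n; i++)
		printf("  %2d items -> %3zu bytes\n", seq[i],
		    bucket_bytes(seq[i]));
}

int
main(void)
{
	const int old_seq[] = { 1, 5, 13, 29 };
	const int new_seq[] = { 1, 3, 5, 9, 13, 29 };

	print_seq("old sequence", old_seq,
	    sizeof(old_seq) / sizeof(old_seq[0]));
	print_seq("new sequence", new_seq,
	    sizeof(new_seq) / sizeof(new_seq[0]));
	return (0);
}

On amd64 that gives 32, 64 and 128 bytes for the 1-, 5- and 13-item 
buckets, matching the numbers above.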

> would it make sense to add some instrumentation
> code (a small array of counters I presume) to track the actual number
> of requests for exact object sizes, and perhaps at runtime create buckets
> trying to reduce waste?

Since 10.0, buckets are themselves allocated from UMA cache zones, so 
all the stats, garbage collection, etc. follow the same rules, which 
you can see in `vmstat -z`.
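
As a simplified illustration (a sketch only, not the actual 
sys/vm/uma_core.c code, which keeps its table elsewhere and passes 
internal flags; the my_bucket_* names are made up here), registering 
the bucket zones through the ordinary UMA API could look roughly like 
this:

#include <sys/param.h>
#include <vm/uma.h>

/* Hypothetical table of bucket item counts and their backing zones. */
static struct {
	const char	*name;
	int		items;
	uma_zone_t	zone;
} my_bucket_zones[] = {
	{ "1 Bucket",	1,	NULL },
	{ "3 Bucket",	3,	NULL },
	{ "5 Bucket",	5,	NULL },
	{ "9 Bucket",	9,	NULL },
	{ "13 Bucket",	13,	NULL },
	{ "29 Bucket",	29,	NULL },
};

static void
my_bucket_init(void)
{
	size_t size;
	u_int i;

	for (i = 0; i < nitems(my_bucket_zones); i++) {
		/* ~3-pointer header plus one pointer per cached item. */
		size = 3 * sizeof(void *) +
		    my_bucket_zones[i].items * sizeof(void *);
		my_bucket_zones[i].zone = uma_zcreate(
		    my_bucket_zones[i].name, size, NULL, NULL, NULL,
		    NULL, UMA_ALIGN_PTR, 0);
	}
}

Because the buckets come from regular cache zones, they show up in the 
zone statistics like anything else, e.g. something like 
`vmstat -z | grep Bucket`.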

> Following your reasoning, there still seems to be a big gap between
> some of the numbers you quote in the sequence.

Big (2x) gaps between the larger numbers are less important: once we 
get there, it means there is not much memory pressure and we should 
not be hurt by many extra frees. At the lower numbers it may matter 
more.

-- 
Alexander Motin


