Date:      Sat, 29 Jun 2013 01:15:19 +0300
From:      Alexander Motin <mav@FreeBSD.org>
To:        Konstantin Belousov <kostikbel@gmail.com>
Cc:        Adrian Chadd <adrian@freebsd.org>, hackers@freebsd.org
Subject:   Re: b_freelist TAILQ/SLIST
Message-ID:  <51CE0AF7.6090906@FreeBSD.org>
In-Reply-To: <20130628065732.GL91021@kib.kiev.ua>
References:  <51CCAE14.6040504@FreeBSD.org> <20130628065732.GL91021@kib.kiev.ua>

On 28.06.2013 09:57, Konstantin Belousov wrote:
> On Fri, Jun 28, 2013 at 12:26:44AM +0300, Alexander Motin wrote:
>> While doing some profiles of GEOM/CAM IOPS scalability, on some test
>> patterns I've noticed serious congestion with spinning on global
>> pbuf_mtx mutex inside getpbuf() and relpbuf(). Since that code is
>> already very simple, I've tried to optimize probably the only thing
>> possible there: switch bswlist from TAILQ to SLIST. As I can see,
>> b_freelist field of struct buf is really used as TAILQ in some other
>> places, so I've just added another SLIST_ENTRY field. And the result
>> appeared to be surprising -- I can no longer reproduce the issue at
>> all. Maybe it was just unlucky synchronization of a specific test, but
>> I've seen it on two different systems and rechecked the results
>> with/without the patch three times.
> This is hard to believe.  Could it be, e.g., some cache line conflicts
> which cause the thrashing, in fact?

I think it may indeed be cache thrashing. I've done some profiling of 
getpbuf()/relpbuf() and found interesting results. With the patched 
kernel using SLIST, profiling shows mostly one point of 
RESOURCE_STALLS.ANY in relpbuf() -- the first lock acquisition causes 
78% of them. Later memory accesses, including the lock release, hit the 
same cache line and are almost free. With the "clean" kernel using 
TAILQ, I see RESOURCE_STALLS.ANY spread almost equally between the lock 
acquisition, the bswlist access and the lock release. It looks like the 
cache line is constantly evicted by something.

My guess was that the patch somehow changed cache line sharing. But 
several checks with nm(1) showed that, while the memory layout indeed 
changed slightly, in both cases the content of the cache line in 
question is absolutely the same, just shifted in memory by 128 bytes.

I guess the cache line could be thrashed by threads doing adaptive 
spinning on the lock after a collision has happened. That thrashing 
increases the lock hold time, which in turn increases the chance of 
further collisions. Maybe the switch from TAILQ to SLIST slightly 
reduces the lock hold time, reducing the chance of this cumulative 
effect. The difference is not big, but in this test the global lock is 
acquired 1.5M times per second by 256 threads on 24 CPUs (12xL2 and 
2xL3 caches).
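
To illustrate why SLIST may hold the lock for slightly less time: a 
head insert/remove on an SLIST touches only the list head and the 
entry itself, while the TAILQ head operations also write the 
neighbouring entry's linkage, pulling one more cache line into the 
section covered by pbuf_mtx. A minimal sketch of the change (field and 
function names here are illustrative, not the actual patch):

#include <sys/queue.h>

struct buf {
	SLIST_ENTRY(buf) b_freeslist;	/* added; the b_freelist
					   TAILQ_ENTRY stays for the
					   other users */
	/* ... */
};

static SLIST_HEAD(, buf) bswlist = SLIST_HEAD_INITIALIZER(bswlist);

/* Called with pbuf_mtx held.  SLIST_REMOVE_HEAD() writes only the
 * list head; TAILQ_REMOVE() would also update the next element's
 * tqe_prev pointer, touching an extra cache line. */
static struct buf *
getpbuf_sketch(void)
{
	struct buf *bp;

	bp = SLIST_FIRST(&bswlist);
	if (bp != NULL)
		SLIST_REMOVE_HEAD(&bswlist, b_freeslist);
	return (bp);
}

static void
relpbuf_sketch(struct buf *bp)
{
	SLIST_INSERT_HEAD(&bswlist, bp, b_freeslist);
}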

Another guess was that we have some bad case of false cache line 
sharing, but I don't know how that can be checked or avoided.
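
One empirical way to probe for false sharing would be to pad each 
suspected global to its own cache line and see whether the stalls 
move. A minimal sketch, with purely illustrative names:

#include <sys/param.h>		/* CACHE_LINE_SIZE, __aligned() */

/* The __aligned() attribute on the type rounds sizeof() up to a
 * multiple of the alignment, so each instance owns whole cache
 * lines and no unrelated writer can invalidate them. */
struct padded_counter {
	u_long	val;
} __aligned(CACHE_LINE_SIZE);

static struct padded_counter stats_a;	/* hypothetical hot globals */
static struct padded_counter stats_b;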

At the last moment, mostly for luck, I've tried to switch pbuf_mtx from 
mtx to mtx_padalign on the "clean" kernel. To my surprise, that also 
seems to fix the congestion problem, but I can't explain why: 
RESOURCE_STALLS.ANY still shows there is cache thrashing, but the lock 
spinning has gone.
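
For reference, the change itself is just the type of the global; a 
sketch, assuming (as mtx(9) suggests) that the standard mutex KPI 
works on the padded type:

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

/* struct mtx_padalign is a struct mtx padded and aligned to
 * CACHE_LINE_SIZE, so the lock no longer shares a cache line with
 * bswlist or any other global. */
static struct mtx_padalign pbuf_mtx;

static void
pbuf_mtx_init_sketch(void)
{
	/* The cast is safe: mtx_padalign has the same leading
	 * layout as mtx, only padded and aligned. */
	mtx_init((struct mtx *)&pbuf_mtx, "pbuf mutex", NULL, MTX_DEF);
}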

Any ideas about what is going on there?

-- 
Alexander Motin


