From: Alexander Motin
Date: Sat, 29 Jun 2013 10:06:11 +0300
To: Konstantin Belousov
Cc: Adrian Chadd, hackers@freebsd.org
Subject: Re: b_freelist TAILQ/SLIST
Message-ID: <51CE8763.2090406@FreeBSD.org>
In-Reply-To: <20130629023532.GW91021@kib.kiev.ua>
References: <51CCAE14.6040504@FreeBSD.org> <20130628065732.GL91021@kib.kiev.ua> <51CE0AF7.6090906@FreeBSD.org> <20130629023532.GW91021@kib.kiev.ua>

On 29.06.2013 05:35, Konstantin Belousov wrote:
> On Sat, Jun 29, 2013 at 01:15:19AM +0300, Alexander Motin wrote:
>> On 28.06.2013 09:57, Konstantin Belousov wrote:
>>> On Fri, Jun 28, 2013 at 12:26:44AM +0300, Alexander Motin wrote:
>>>> While doing some profiling of GEOM/CAM IOPS scalability, on some test
>>>> patterns I've noticed serious congestion with spinning on the global
>>>> pbuf_mtx mutex inside getpbuf() and relpbuf(). Since that code is
>>>> already very simple, I've tried to optimize probably the only thing
>>>> possible there: switching bswlist from a TAILQ to an SLIST. As far as
>>>> I can see, the b_freelist field of struct buf is really used as a
>>>> TAILQ in some other places, so I've just added another SLIST_ENTRY
>>>> field. The result turned out to be surprising -- I can no longer
>>>> reproduce the issue at all. Maybe it was just unlucky synchronization
>>>> of the specific test, but I've seen it on two different systems and
>>>> rechecked the results with/without the patch three times.
>>> This is too unbelievable. Could it be, e.g.
>>> some cache line conflicts which cause the thrashing, in fact?
>>
>> I think it indeed may be cache thrashing. I've done some profiling of
>> getpbuf()/relpbuf() and found interesting results. With the patched
>> kernel using SLIST, profiling shows mostly one point of
>> RESOURCE_STALLS.ANY in relpbuf() -- the first lock acquisition causes
>> 78% of them. Later memory accesses, including the lock release, hit the
>> same cache line and are almost free. With the "clean" kernel using
>> TAILQ I see RESOURCE_STALLS.ANY spread almost equally between lock
>> acquisition, bswlist access and lock release. It looks like the cache
>> line is constantly erased by something.
>>
>> My guess was that the patch somehow changed cache line sharing. But
>> several checks with nm showed that, while the memory allocation indeed
>> changed slightly, in both cases the content of the cache line in
>> question is exactly the same, just shifted in memory by 128 bytes.
>>
>> I guess the cache line could be thrashed by threads doing adaptive
>> spinning on the lock after a collision has happened. That thrashing
>> increases lock hold time and further increases the chance of additional
>> collisions. Maybe the switch from TAILQ to SLIST slightly reduces lock
>> hold time, reducing the chance of a cumulative effect. The difference
>> is not big, but in this test the global lock is acquired 1.5M times per
>> second by 256 threads on 24 CPUs (12xL2 and 2xL3 caches).
>>
>> Another guess was that we have some bad case of false cache line
>> sharing, but I don't know how that can be either checked or avoided.
>>
>> At the last moment, mostly for luck, I tried switching pbuf_mtx from
>> mtx to mtx_padalign on the "clean" kernel. To my surprise that also
>> seems to have fixed the congestion problem, but I can't explain why.
>> RESOURCE_STALLS.ANY still shows there is cache thrashing, but the lock
>> spinning has gone.
>>
>> Any ideas about what is going on there?
>
> FWIW, Jeff just changed the pbuf_mtx allocation to use padalign; it is a
> somewhat unrelated change in r252330.

Heh! That was unexpected. I've seen that commit, but hadn't looked that
deep. I'll pick it up, and I guess the case will evaporate.

> Are pbuf_mtx and bswlist located next to each other in your kernel?

Yes, as I tried to say above, they are on the same cache line.

> If yes, then I would expect that the explanation is how the MESI
> protocol and atomics work. When performing the locked op, the CPU takes
> the whole cache line into exclusive ownership. Since our locks try the
> cmpset as the first operation, and then 'adaptively' loop, interleaving
> cmpset with a check for ownership, false cache line sharing between
> pbuf_mtx and bswlist should result in exactly such effects. Different
> cores would bounce the ownership of the cache line, slowing down the
> accesses.

I understand that a lock attempt will steal the cache line from the lock
owner. What I don't quite understand is why avoiding that helps
performance in this case. Indeed, having the mutex on its own cache line
does not let other cores steal bswlist as well, but it also means that
bswlist has to be fetched separately (and profiling shows resource
stalls there). Or is a separate speculative prefetch better in this case
than a forced one which can be stolen? Are there cases when it is not,
or is the only reason not to pad all global mutexes the memory savings?

-- 
Alexander Motin
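
For readers following the thread, below is a minimal user-space sketch of
the two ideas discussed above: keeping the buffer free list as an SLIST
and padding the protecting lock out to its own cache line, so that
threads spinning on the lock do not also bounce the line holding the
list head. This is not the actual FreeBSD patch or r252330; the pthread
mutex, the CACHE_LINE constant and the getpbuf_sketch()/relpbuf_sketch()
names are illustrative assumptions only.

/*
 * Illustrative sketch only: a pthread mutex stands in for pbuf_mtx and
 * the layout is hypothetical; it is not the kernel code.
 */
#include <sys/queue.h>
#include <pthread.h>
#include <stddef.h>

#define CACHE_LINE 64                  /* assumed 64-byte cache lines */

struct pbuf {
        SLIST_ENTRY(pbuf) p_free;      /* singly-linked free-list linkage */
        /* ... payload ... */
};

/* Head of the free list, analogous to bswlist. */
static SLIST_HEAD(, pbuf) free_list = SLIST_HEAD_INITIALIZER(free_list);

/*
 * The lock is padded and aligned to a full cache line, so contenders
 * spinning on it cannot invalidate the line that holds the list head.
 */
static union {
        pthread_mutex_t mtx;
        char            pad[CACHE_LINE];
} free_list_lock __attribute__((aligned(CACHE_LINE))) = {
        .mtx = PTHREAD_MUTEX_INITIALIZER
};

/* Pop one buffer off the free list, or return NULL if it is empty. */
static struct pbuf *
getpbuf_sketch(void)
{
        struct pbuf *bp;

        pthread_mutex_lock(&free_list_lock.mtx);
        bp = SLIST_FIRST(&free_list);
        if (bp != NULL)
                SLIST_REMOVE_HEAD(&free_list, p_free);
        pthread_mutex_unlock(&free_list_lock.mtx);
        return (bp);
}

/* Push a buffer back onto the free list. */
static void
relpbuf_sketch(struct pbuf *bp)
{
        pthread_mutex_lock(&free_list_lock.mtx);
        SLIST_INSERT_HEAD(&free_list, bp, p_free);
        pthread_mutex_unlock(&free_list_lock.mtx);
}

int
main(void)
{
        static struct pbuf bufs[4];
        struct pbuf *bp;

        /* Seed the free list, then exercise one get/rel cycle. */
        for (size_t i = 0; i < 4; i++)
                relpbuf_sketch(&bufs[i]);
        bp = getpbuf_sketch();
        if (bp != NULL)
                relpbuf_sketch(bp);
        return (0);
}

In the kernel the padding half of this is what switching pbuf_mtx to
mtx_padalign achieves; the sketch only shows why keeping the lock word
and the list head on separate cache lines changes which line gets
invalidated while the owner manipulates the list.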