From owner-freebsd-current@freebsd.org Thu Nov 5 19:16:59 2015
From: John Baldwin <jhb@freebsd.org>
To: freebsd-current@freebsd.org
Cc: Konstantin Belousov, Mateusz Guzik
Subject: Re: [PATCH] microoptimize by trying to avoid locking a locked mutex
Date: Thu, 05 Nov 2015 11:04:13 -0800
Message-ID: <13871467.CBcqGMncpJ@ralph.baldwin.cx>
In-Reply-To: <20151105142628.GJ2257@kib.kiev.ua>
References: <20151104233218.GA27709@dft-labs.eu> <20151105142628.GJ2257@kib.kiev.ua>

On Thursday, November 05, 2015 04:26:28 PM Konstantin Belousov wrote:
> On Thu, Nov 05, 2015 at 12:32:18AM +0100, Mateusz Guzik wrote:
> > mtx_lock will unconditionally try to grab the lock and
if that fails,
> > will call __mtx_lock_sleep which will immediately try to do the same
> > atomic op again.
> >
> > So, the obvious microoptimization is to check the state in
> > __mtx_lock_sleep and avoid the operation if the lock is not free.
> >
> > This gives me ~40% speedup in a microbenchmark of 40 find processes
> > traversing tmpfs and contending on mount mtx (only used as an easy
> > benchmark, I have WIP patches to get rid of it).
> >
> > Second part of the patch is optional and just checks the state of the
> > lock prior to doing any atomic operations, but it gives a very modest
> > speedup when applied on top of the __mtx_lock_sleep change. As such,
> > I'm not going to defend this part.
>
> Shouldn't the same consideration be applied to all spinning loops, i.e.
> also to the spin/thread mutexes, and to the spinning parts of sx and
> lockmgr?

I agree.  I think both changes are good and worth doing in our other
primitives.

-- 
John Baldwin