Subject: Re: [PATCH] microoptimize by trying to avoid locking a locked mutex
From: Ian Lepore <ian@freebsd.org>
To: John Baldwin, Adrian Chadd
Cc: Mateusz Guzik, freebsd-current, Konstantin Belousov
Date: Thu, 05 Nov 2015 16:35:22 -0700
Message-ID: <1446766522.91534.412.camel@freebsd.org>
In-Reply-To: <1563180.x0Z3Ou4xid@ralph.baldwin.cx>
References: <20151104233218.GA27709@dft-labs.eu>
 <20151105192623.GB27709@dft-labs.eu> <1563180.x0Z3Ou4xid@ralph.baldwin.cx>

On Thu, 2015-11-05 at 14:19 -0800, John Baldwin wrote:
> On Thursday, November 05, 2015 01:45:19 PM Adrian Chadd wrote:
> > On 5 November 2015 at 11:26, Mateusz Guzik wrote:
> > > On Thu, Nov 05, 2015 at 11:04:13AM -0800, John Baldwin wrote:
> > > > On Thursday, November 05, 2015 04:26:28 PM Konstantin Belousov wrote:
> > > > > On Thu, Nov 05, 2015 at 12:32:18AM +0100, Mateusz Guzik wrote:
> > > > > > mtx_lock will unconditionally try to grab the lock and, if that
> > > > > > fails, will call __mtx_lock_sleep, which will immediately try to
> > > > > > do the same atomic op again.
> > > > > >
> > > > > > So, the obvious microoptimization is to check the state in
> > > > > > __mtx_lock_sleep and avoid the operation if the lock is not free.
> > > > > >
> > > > > > This gives me ~40% speedup in a microbenchmark of 40 find
> > > > > > processes traversing tmpfs and contending on the mount mtx (only
> > > > > > used as an easy benchmark; I have WIP patches to get rid of it).
> > > > > >
> > > > > > The second part of the patch is optional and just checks the
> > > > > > state of the lock prior to doing any atomic operations, but it
> > > > > > gives a very modest speedup when applied on top of the
> > > > > > __mtx_lock_sleep change. As such, I'm not going to defend this
> > > > > > part.
> > > > > Shouldn't the same consideration be applied to all spinning loops,
> > > > > i.e. also to the spin/thread mutexes, and to the spinning parts of
> > > > > sx and lockmgr?
> > > > I agree. I think both changes are good and worth doing in our other
> > > > primitives.
> > > I glanced over e.g. rw_rlock and it did not have the issue; now that I
> > > see _sx_xlock_hard, it would indeed need fixing.
> > >
> > > Expect a patch in a few hours for all primitives I find. I'll
> > > stress-test the kernel, but it is unlikely I'll do microbenchmarks for
> > > the remaining primitives.
> > Is this stuff you're proposing still valid for non-x86 platforms?
> Yes. It just does a read before trying the atomic compare-and-swap, and
> if the value read would make the compare fail, it falls through to the
> hard case as if the atomic op itself had failed.
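In other words, the proposal is essentially the classic
test-and-test-and-set pattern. A minimal standalone sketch of the idea,
using C11 atomics rather than the kernel's atomic(9) primitives (all
names here are illustrative, not the actual kern_mutex.c code):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
            atomic_uintptr_t owner;         /* 0 means unowned */
    } demo_mtx;

    static bool
    demo_mtx_try_acquire(demo_mtx *m, uintptr_t tid)
    {
            uintptr_t expected = 0;

            /*
             * Cheap pre-read first: if the lock is visibly held, behave
             * exactly as if the CAS below had failed, without issuing a
             * locked read-modify-write that is doomed anyway.
             */
            if (atomic_load_explicit(&m->owner, memory_order_relaxed) != 0)
                    return (false);

            /* The lock looks free; try the real acquire-ordered CAS. */
            return (atomic_compare_exchange_strong_explicit(&m->owner,
                &expected, tid, memory_order_acquire, memory_order_relaxed));
    }

A caller that sees false takes the contended path (spin or sleep) just
as it would after a failed CAS, so a stale pre-read is harmless to
correctness and only affects when the expensive path is taken.
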
The atomic ops include barriers; the new pre-read of the variable
doesn't. Will that cause problems, especially for code inside a loop
where the compiler may decide to shuffle things around?

I suspect the performance gain will be biggest on the platforms where
atomic ops are expensive (I gather from various code comments that's
the case on x86).

-- Ian
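For reference, a sketch of the loop shape the question is about, again
in C11 atomics with illustrative names: because the pre-read is an
atomic (relaxed) load, the compiler must re-evaluate it on every
iteration rather than caching it outside the loop, but it emits no
fence of its own; the acquire ordering the lock relies on still comes
from the successful compare-and-swap, so a stale read costs at most an
extra trip around the loop.

    #include <stdatomic.h>
    #include <stdint.h>

    static void
    demo_spin_acquire(atomic_uintptr_t *lockp, uintptr_t tid)
    {
            uintptr_t expected;

            for (;;) {
                    /*
                     * Relaxed atomic load: re-issued each iteration,
                     * never hoisted out of the loop, carries no barrier.
                     */
                    if (atomic_load_explicit(lockp,
                        memory_order_relaxed) != 0)
                            continue;       /* visibly held; skip the CAS */

                    expected = 0;
                    if (atomic_compare_exchange_weak_explicit(lockp,
                        &expected, tid, memory_order_acquire,
                        memory_order_relaxed))
                            return;         /* acquire ordering on success */
            }
    }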