Date: Thu, 05 Nov 2015 14:19:11 -0800
From: John Baldwin <jhb@freebsd.org>
To: Adrian Chadd <adrian.chadd@gmail.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>, freebsd-current <freebsd-current@freebsd.org>, Konstantin Belousov <kostikbel@gmail.com>
Subject: Re: [PATCH] microoptimize by trying to avoid locking a locked mutex
Message-ID: <1563180.x0Z3Ou4xid@ralph.baldwin.cx>
In-Reply-To: <CAJ-VmonnH4JJg0XqX1SoBXBa+9Xfmk+HFv58ETaQ9v1-uAAhdQ@mail.gmail.com>
References: <20151104233218.GA27709@dft-labs.eu> <20151105192623.GB27709@dft-labs.eu> <CAJ-VmonnH4JJg0XqX1SoBXBa+9Xfmk+HFv58ETaQ9v1-uAAhdQ@mail.gmail.com>
On Thursday, November 05, 2015 01:45:19 PM Adrian Chadd wrote:
> On 5 November 2015 at 11:26, Mateusz Guzik <mjguzik@gmail.com> wrote:
> > On Thu, Nov 05, 2015 at 11:04:13AM -0800, John Baldwin wrote:
> >> On Thursday, November 05, 2015 04:26:28 PM Konstantin Belousov wrote:
> >> > On Thu, Nov 05, 2015 at 12:32:18AM +0100, Mateusz Guzik wrote:
> >> > > mtx_lock will unconditionally try to grab the lock and if that fails,
> >> > > will call __mtx_lock_sleep which will immediately try to do the same
> >> > > atomic op again.
> >> > >
> >> > > So, the obvious microoptimization is to check the state in
> >> > > __mtx_lock_sleep and avoid the operation if the lock is not free.
> >> > >
> >> > > This gives me ~40% speedup in a microbenchmark of 40 find processes
> >> > > traversing tmpfs and contending on mount mtx (only used as an easy
> >> > > benchmark, I have WIP patches to get rid of it).
> >> > >
> >> > > Second part of the patch is optional and just checks the state of the
> >> > > lock prior to doing any atomic operations, but it gives a very modest
> >> > > speed up when applied on top of the __mtx_lock_sleep change. As such,
> >> > > I'm not going to defend this part.
> >> > Shouldn't the same consideration be applied to all spinning loops, i.e.
> >> > also to the spin/thread mutexes, and to the spinning parts of sx and
> >> > lockmgr?
> >>
> >> I agree. I think both changes are good and worth doing in our other
> >> primitives.
> >>
> >
> > I glanced over e.g. rw_rlock and it did not have the issue, but now that I
> > see _sx_xlock_hard it would indeed use fixing.
> >
> > Expect a patch in a few hours for all primitives I find. I'll stress test
> > the kernel, but it is unlikely I'll do microbenchmarks for the remaining
> > primitives.
>
> Is this stuff you're proposing still valid for non-x86 platforms?

Yes.
It just does a plain read before trying the atomic compare-and-swap, and falls through to the hard case as if the atomic op had failed whenever the value read would have caused the compare to fail.

-- 
John Baldwin