Date: Thu, 5 Nov 2015 00:32:18 +0100
From: Mateusz Guzik <mjguzik@gmail.com>
To: freebsd-current@freebsd.org
Subject: [PATCH] microoptimize by trying to avoid locking a locked mutex
Message-ID: <20151104233218.GA27709@dft-labs.eu>
mtx_lock will unconditionally try to grab the lock and, if that fails,
will call __mtx_lock_sleep, which will immediately try to do the same
atomic op again.

So the obvious micro-optimization is to check the state in
__mtx_lock_sleep and avoid the atomic operation if the lock is not free.

This gives me a ~40% speedup in a microbenchmark of 40 find processes
traversing tmpfs and contending on the mount mtx (used only as an easy
benchmark; I have WIP patches to get rid of it).

The second part of the patch is optional and just checks the state of
the lock in the __mtx_lock macro prior to doing any atomic operations,
but it gives only a very modest speedup when applied on top of the
__mtx_lock_sleep change. As such, I'm not going to defend this part.

x vanilla
+ patched
[ministat distribution plot omitted; the patched samples cluster well
below the vanilla ones]
    N           Min           Max        Median           Avg        Stddev
x   9        13.845        16.148        15.271     15.133889    0.75997096
+   9         8.363          9.56         9.126     9.0643333    0.34198501
Difference at 95.0% confidence
        -6.06956 +/- 0.588917
        -40.1057% +/- 3.89138%
        (Student's t, pooled s = 0.589283)

x patched
+ patched2
[ministat distribution plot omitted; the two distributions overlap,
with patched2 slightly faster]
    N           Min           Max        Median           Avg        Stddev
x   9         8.363          9.56         9.126     9.0643333    0.34198501
+   9         7.563         9.038         8.611     8.5256667    0.43365885
Difference at 95.0% confidence
        -0.538667 +/- 0.390278
        -5.94271% +/- 4.30565%
        (Student's t, pooled s = 0.390521)

diff --git a/sys/kern/kern_mutex.c b/sys/kern/kern_mutex.c
index bec8f6b..092aaae 100644
--- a/sys/kern/kern_mutex.c
+++ b/sys/kern/kern_mutex.c
@@ -419,7 +419,10 @@ __mtx_lock_sleep(volatile uintptr_t *c, uintptr_t tid, int opts,
 	all_time -= lockstat_nsecs(&m->lock_object);
 #endif
 
-	while (!_mtx_obtain_lock(m, tid)) {
+	for (;;) {
+		v = m->mtx_lock;
+		if (v == MTX_UNOWNED && _mtx_obtain_lock(m, tid))
+			break;
 #ifdef KDTRACE_HOOKS
 		spin_cnt++;
 #endif
@@ -428,7 +431,6 @@ __mtx_lock_sleep(volatile uintptr_t *c, uintptr_t tid, int opts,
 		 * If the owner is running on another CPU, spin until the
 		 * owner stops running or the state of the lock changes.
 		 */
-		v = m->mtx_lock;
 		if (v != MTX_UNOWNED) {
 			owner = (struct thread *)(v & ~MTX_FLAGMASK);
 			if (TD_IS_RUNNING(owner)) {
diff --git a/sys/sys/mutex.h b/sys/sys/mutex.h
index a9ec072..4208d5f 100644
--- a/sys/sys/mutex.h
+++ b/sys/sys/mutex.h
@@ -184,12 +184,11 @@ void	thread_lock_flags_(struct thread *, int, const char *, int);
 /* Lock a normal mutex. */
 #define __mtx_lock(mp, tid, opts, file, line) do {			\
 	uintptr_t _tid = (uintptr_t)(tid);				\
-									\
-	if (!_mtx_obtain_lock((mp), _tid))				\
-		_mtx_lock_sleep((mp), _tid, (opts), (file), (line));	\
-	else								\
+	if (((mp)->mtx_lock == MTX_UNOWNED && _mtx_obtain_lock((mp), _tid))) \
 		LOCKSTAT_PROFILE_OBTAIN_LOCK_SUCCESS(adaptive__acquire,	\
 		    mp, 0, 0, file, line);				\
+	else								\
+		_mtx_lock_sleep((mp), _tid, (opts), (file), (line));	\
 } while (0)

-- 
Mateusz Guzik <mjguzik gmail.com>
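
For anyone who wants to experiment with the idea outside the kernel, below is a
minimal userspace sketch of the same test-and-test-and-set pattern using C11
atomics. The lock word values and function names (try_acquire, acquire_ttas,
and so on) are invented for illustration; this is not the kernel's
_mtx_obtain_lock/MTX_UNOWNED machinery, only the shape of the slow path the
patch introduces: spin on a plain load and issue the compare-and-swap only when
the lock is observed to be free.

/*
 * Illustrative userspace sketch of a test-and-test-and-set spinlock.
 * Names and constants are made up; this is not the kernel mtx code.
 */
#include <stdatomic.h>
#include <stdbool.h>

#define	LK_UNOWNED	0
#define	LK_OWNED	1

struct spinlock {
	atomic_uint	word;
};

/* Unconditional attempt: always issues the atomic compare-and-swap. */
static bool
try_acquire(struct spinlock *lk)
{
	unsigned exp = LK_UNOWNED;

	return (atomic_compare_exchange_strong(&lk->word, &exp, LK_OWNED));
}

/* Naive slow path: hammers the lock word with CAS even while it is held. */
static void
acquire_naive(struct spinlock *lk)
{

	while (!try_acquire(lk))
		;	/* every iteration is a (failing) atomic op */
}

/*
 * Optimized slow path: spin on a plain load and only attempt the CAS once
 * the lock is observed to be free, mirroring the patched __mtx_lock_sleep
 * loop ("check the state, then _mtx_obtain_lock").
 */
static void
acquire_ttas(struct spinlock *lk)
{

	for (;;) {
		if (atomic_load_explicit(&lk->word, memory_order_relaxed) ==
		    LK_UNOWNED && try_acquire(lk))
			break;
		/* Lock still held (or the CAS raced); keep spinning on the load. */
	}
}

static void
release(struct spinlock *lk)
{

	atomic_store_explicit(&lk->word, LK_UNOWNED, memory_order_release);
}

The point of acquire_ttas over acquire_naive is that a waiter spinning on a
plain load mostly reads a locally cached copy of the lock word, whereas a
failing compare-and-swap keeps pulling the cache line in exclusive state and
bouncing it between CPUs; avoiding that traffic under contention is the effect
the benchmark numbers above are measuring.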