Date:      Fri, 25 May 2018 01:15:51 +0000
From:      Mateusz Guzik <mjg@freebsd.org>
To:        src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject:   svn commit: r334197 - head/sys/kern
Message-ID:  <20180525011551.GB19943@freefall.freebsd.org>

Author: mjg
Date: Thu May 24 23:58:57 2018
New Revision: 334197
URL: https://svnweb.freebsd.org/changeset/base/334197

Log:
  Implement Mostly Exclusive locks.

  High lock contention is one of the biggest scalability bottlenecks on
  multicore systems. Although the real fix consists of making lock
  consumers better suited for multicore operation, locking primitive
  performance is still a crucial factor in real-world systems.

  It turns out that a lot of the locking is overzealous - the lock is
  held longer than it needs to be, and sometimes it is taken when it is
  not needed at all. Even when a lock is needed in principle, all
  relevant consumers may be only *reading* the state protected by the
  lock or modifying disjoint parts of the protected data.

  As such, a lot of the locking can be elided.

  The idea boils down to trying to take the lock for a bounded amount
  of time and, on failure, pretending we got it anyway. The approach,
  along with a practical analysis of the performance-win/new-crash
  ratio, is described in "Towards reliability-oblivious low latency
  locking" by Leland Oller.
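
  As a minimal sketch (illustrative only; the helper below is not part
  of the patch and its name is made up), the acquire side reduces to a
  bounded try-loop built from standard mutex(9) primitives:

	static void
	mostly_mtx_lock(struct mtx *m)
	{
		int spins;

		for (spins = 0; spins < 16384; spins++) {
			if (_mtx_obtain_lock(m, (uintptr_t)curthread))
				return;		/* Really acquired. */
			cpu_spinwait();
		}
		/* Timed out: pretend the lock was acquired and proceed. */
	}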

Modified:
  head/sys/kern/kern_mutex.c

Modified: head/sys/kern/kern_mutex.c
===================================================================
--- head/sys/kern/kern_mutex.c	(r334196)
+++ head/sys/kern/kern_mutex.c	(r334197)
@@ -557,6 +557,10 @@ __mtx_lock_sleep(volatile uintptr_t *c, uintptr_t
 		lda.spin_cnt++;
 #endif
 #ifdef ADAPTIVE_MUTEXES
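+		/* Give up after a while and pretend we got the lock. */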
+		if (lda.spin_cnt > 16384)
+			break;
+
 		/*
 		 * If the owner is running on another CPU, spin until the
 		 * owner stops running or the state of the lock changes.
@@ -1020,16 +1023,22 @@ __mtx_unlock_sleep(volatile uintptr_t *c, uintptr_
 	turnstile_chain_lock(&m->lock_object);
 	_mtx_release_lock_quick(m);
 	ts = turnstile_lookup(&m->lock_object);
-	MPASS(ts != NULL);
-	if (LOCK_LOG_TEST(&m->lock_object, opts))
-		CTR1(KTR_LOCK, "_mtx_unlock_sleep: %p contested", m);
-	turnstile_broadcast(ts, TS_EXCLUSIVE_QUEUE);
-
 	/*
-	 * This turnstile is now no longer associated with the mutex.  We can
-	 * unlock the chain lock so a new turnstile may take it's place.
+	 * We may have previously failed to take the lock and pretended
+	 * we got it anyway. The unlock fast path brings us here thinking
+	 * there are blocked threads, but there may be none.
 	 */
-	turnstile_unpend(ts, TS_EXCLUSIVE_LOCK);
+	if (ts != NULL) {
+		if (LOCK_LOG_TEST(&m->lock_object, opts))
+			CTR1(KTR_LOCK, "_mtx_unlock_sleep: %p contested", m);
+		turnstile_broadcast(ts, TS_EXCLUSIVE_QUEUE);
+
+		/*
+		 * This turnstile is no longer associated with the mutex.  We can
+		 * unlock the chain lock so a new turnstile may take its place.
+		 */
+		turnstile_unpend(ts, TS_EXCLUSIVE_LOCK);
+	}
 	turnstile_chain_unlock(&m->lock_object);
 }
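
The second hunk makes the unlock slow path tolerate a missing turnstile:
an acquire that only pretended to take the lock leaves no waiters behind.
A standalone sketch of that release side (again illustrative, not the
committed code; it elides the turnstile handling shown above):

	static void
	mostly_mtx_unlock(struct mtx *m)
	{
		/*
		 * Only release the lock word if we really own it; if the
		 * acquire merely pretended, there is nothing to undo.
		 */
		if (mtx_owned(m))
			_mtx_release_lock_quick(m);
	}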


