From owner-svn-src-head@freebsd.org  Sun Mar  4 21:38:31 2018
Message-Id: <201803042138.w24LcVAa088120@repo.freebsd.org>
From: Mateusz Guzik <mjg@FreeBSD.org>
Date: Sun, 4 Mar 2018 21:38:31 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r330414 - head/sys/kern
X-SVN-Group: head
X-SVN-Commit-Author: mjg
X-SVN-Commit-Paths: head/sys/kern
X-SVN-Commit-Revision: 330414
X-SVN-Commit-Repository: base

Author: mjg
Date: Sun Mar  4 21:38:30 2018
New Revision: 330414

URL: https://svnweb.freebsd.org/changeset/base/330414

Log:
  locks: fix a corner case in r327399

  If there were exactly rowner_retries/asx_retries (by default: 10)
  transitions between the read and write state and the waiters still did
  not get the lock, the next owner -> reader transition would result in
  the code correctly falling back to turnstile/sleepq, where it would
  incorrectly conclude it was waiting for a writer and leave the
  turnstile/sleepq to loop back.  From this point it would keep taking
  ts/sq trips until the lock was released.

  The bug sometimes manifested itself as stalls during -j 128 package
  builds.

  Refactor the code to fix the bug; while here, remove some of the
  gratuitous differences between the rw and sx locks.
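For illustration, below is a minimal user-space sketch of the decision flow
this change introduces.  It is NOT the kernel code: struct fake_lock,
xlock_hard() and SPIN_RETRIES are made-up stand-ins for the lock word,
__rw_wlock_hard()/_sx_xlock_hard() and rowner_retries/asx_retries; only the
control flow (recording why spinning was abandoned on both the writer and
the reader path, and consulting that reason at the sleep fallback) mirrors
the fix.

/*
 * Minimal stand-alone sketch of the corrected decision flow; NOT the
 * kernel code.  The lock word, helpers and limits are illustrative.
 */
#include <stdbool.h>
#include <stdio.h>

enum sleep_reason { READERS, WRITER };

struct fake_lock {
        bool    held_by_writer;         /* a writer owns the lock */
        int     readers;                /* number of shared holders */
};

#define SPIN_RETRIES    10      /* stands in for rowner_retries/asx_retries */

static void
xlock_hard(struct fake_lock *lk)
{
        enum sleep_reason reason;
        int spintries = 0;

        for (;;) {
                if (lk->held_by_writer) {
                        /*
                         * The kernel first spins while the owning writer is
                         * running; the sketch goes straight to the sleep path.
                         */
                        reason = WRITER;
                        goto sleep;
                } else if (lk->readers > 0) {
                        reason = READERS;
                        if (spintries == SPIN_RETRIES)
                                goto sleep;
                        spintries++;
                        /* ...bounded spin waiting for the readers to drain... */
                        continue;
                }
                /* Lock looks free: the real code would try to grab it here. */
                return;
sleep:
                /*
                 * The reason is recorded on both branches above, so this
                 * check only bounces back to spinning when the lock really
                 * went from writer-owned to reader-owned.  Before the fix a
                 * stale "writer" value could survive an exhausted reader-spin
                 * budget and make this path loop instead of blocking.
                 */
                if (lk->readers > 0 && reason == WRITER)
                        continue;
                printf("blocking, reason=%s\n",
                    reason == WRITER ? "writer" : "readers");
                return;
        }
}

int
main(void)
{
        struct fake_lock lk = { .held_by_writer = false, .readers = 3 };

        xlock_hard(&lk);        /* prints "blocking, reason=readers" */
        return (0);
}

With the pre-r330414 behaviour (reason set to "writer" at the top of every
pass), the reader-held case would keep bouncing between the spin loop and
the sleep path once the retry budget was exhausted, which is the stall
described in the log message above.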
Modified:
  head/sys/kern/kern_rwlock.c
  head/sys/kern/kern_sx.c

Modified: head/sys/kern/kern_rwlock.c
==============================================================================
--- head/sys/kern/kern_rwlock.c Sun Mar  4 21:15:31 2018        (r330413)
+++ head/sys/kern/kern_rwlock.c Sun Mar  4 21:38:30 2018        (r330414)
@@ -875,7 +875,7 @@ __rw_wlock_hard(volatile uintptr_t *c, uintptr_t v LOC
 #ifdef ADAPTIVE_RWLOCKS
         int spintries = 0;
         int i, n;
-        int sleep_reason = 0;
+        enum { READERS, WRITER } sleep_reason;
 #endif
         uintptr_t x;
 #ifdef LOCK_PROFILING
@@ -956,9 +956,11 @@ __rw_wlock_hard(volatile uintptr_t *c, uintptr_t v LOC
                  * running on another CPU, spin until the owner stops
                  * running or the state of the lock changes.
                  */
-                sleep_reason = 1;
-                owner = lv_rw_wowner(v);
-                if (!(v & RW_LOCK_READ) && TD_IS_RUNNING(owner)) {
+                if (!(v & RW_LOCK_READ)) {
+                        sleep_reason = WRITER;
+                        owner = lv_rw_wowner(v);
+                        if (!TD_IS_RUNNING(owner))
+                                goto ts;
                         if (LOCK_LOG_TEST(&rw->lock_object, 0))
                                 CTR3(KTR_LOCK, "%s: spinning on %p held by %p",
                                     __func__, rw, owner);
@@ -973,9 +975,10 @@ __rw_wlock_hard(volatile uintptr_t *c, uintptr_t v LOC
                         KTR_STATE0(KTR_SCHED, "thread", sched_tdname(curthread),
                             "running");
                         continue;
-                }
-                if ((v & RW_LOCK_READ) && RW_READERS(v) &&
-                    spintries < rowner_retries) {
+                } else if (RW_READERS(v) > 0) {
+                        sleep_reason = READERS;
+                        if (spintries == rowner_retries)
+                                goto ts;
                         if (!(v & RW_LOCK_WRITE_SPINNER)) {
                                 if (!atomic_fcmpset_ptr(&rw->rw_lock, &v,
                                     v | RW_LOCK_WRITE_SPINNER)) {
@@ -993,15 +996,15 @@ __rw_wlock_hard(volatile uintptr_t *c, uintptr_t v LOC
                                 if ((v & RW_LOCK_WRITE_SPINNER) == 0)
                                         break;
                         }
-                        KTR_STATE0(KTR_SCHED, "thread", sched_tdname(curthread),
-                            "running");
 #ifdef KDTRACE_HOOKS
-                        lda.spin_cnt += rowner_loops - i;
+                        lda.spin_cnt += i;
 #endif
+                        KTR_STATE0(KTR_SCHED, "thread", sched_tdname(curthread),
+                            "running");
                         if (i < rowner_loops)
                                 continue;
-                        sleep_reason = 2;
                 }
+ts:
 #endif
         ts = turnstile_trywait(&rw->lock_object);
         v = RW_READ_VALUE(rw);
@@ -1021,7 +1024,7 @@ retry_ts:
                                 turnstile_cancel(ts);
                                 continue;
                         }
-                } else if (RW_READERS(v) > 0 && sleep_reason == 1) {
+                } else if (RW_READERS(v) > 0 && sleep_reason == WRITER) {
                         turnstile_cancel(ts);
                         continue;
                 }

Modified: head/sys/kern/kern_sx.c
==============================================================================
--- head/sys/kern/kern_sx.c     Sun Mar  4 21:15:31 2018        (r330413)
+++ head/sys/kern/kern_sx.c     Sun Mar  4 21:38:30 2018        (r330414)
@@ -533,8 +533,8 @@ _sx_xlock_hard(struct sx *sx, uintptr_t x, int opts LO
 #ifdef ADAPTIVE_SX
         volatile struct thread *owner;
         u_int i, n, spintries = 0;
+        enum { READERS, WRITER } sleep_reason;
         bool adaptive;
-        int sleep_reason = 0;
 #endif
 #ifdef LOCK_PROFILING
         uint64_t waittime = 0;
@@ -628,37 +628,33 @@ _sx_xlock_hard(struct sx *sx, uintptr_t x, int opts LO
                  * running or the state of the lock changes.
                  */
                 if ((x & SX_LOCK_SHARED) == 0) {
+                        sleep_reason = WRITER;
                         owner = lv_sx_owner(x);
-                        if (TD_IS_RUNNING(owner)) {
-                                if (LOCK_LOG_TEST(&sx->lock_object, 0))
-                                        CTR3(KTR_LOCK,
-                                            "%s: spinning on %p held by %p",
-                                            __func__, sx, owner);
-                                KTR_STATE1(KTR_SCHED, "thread",
-                                    sched_tdname(curthread), "spinning",
-                                    "lockname:\"%s\"",
-                                    sx->lock_object.lo_name);
-                                do {
-                                        lock_delay(&lda);
-                                        x = SX_READ_VALUE(sx);
-                                        owner = lv_sx_owner(x);
-                                } while (owner != NULL &&
-                                    TD_IS_RUNNING(owner));
-                                KTR_STATE0(KTR_SCHED, "thread",
-                                    sched_tdname(curthread), "running");
-                                continue;
-                        }
-                        sleep_reason = 1;
-                } else if (SX_SHARERS(x) && spintries < asx_retries) {
-                        KTR_STATE1(KTR_SCHED, "thread",
-                            sched_tdname(curthread), "spinning",
-                            "lockname:\"%s\"", sx->lock_object.lo_name);
+                        if (!TD_IS_RUNNING(owner))
+                                goto sleepq;
+                        if (LOCK_LOG_TEST(&sx->lock_object, 0))
+                                CTR3(KTR_LOCK, "%s: spinning on %p held by %p",
+                                    __func__, sx, owner);
+                        KTR_STATE1(KTR_SCHED, "thread", sched_tdname(curthread),
+                            "spinning", "lockname:\"%s\"",
+                            sx->lock_object.lo_name);
+                        do {
+                                lock_delay(&lda);
+                                x = SX_READ_VALUE(sx);
+                                owner = lv_sx_owner(x);
+                        } while (owner != NULL && TD_IS_RUNNING(owner));
+                        KTR_STATE0(KTR_SCHED, "thread", sched_tdname(curthread),
+                            "running");
+                        continue;
+                } else if (SX_SHARERS(x) > 0) {
+                        sleep_reason = READERS;
+                        if (spintries == asx_retries)
+                                goto sleepq;
                         spintries++;
+                        KTR_STATE1(KTR_SCHED, "thread", sched_tdname(curthread),
+                            "spinning", "lockname:\"%s\"",
+                            sx->lock_object.lo_name);
                         for (i = 0; i < asx_loops; i += n) {
-                                if (LOCK_LOG_TEST(&sx->lock_object, 0))
-                                        CTR4(KTR_LOCK,
-                                            "%s: shared spinning on %p with %u and %u",
-                                            __func__, sx, spintries, i);
                                 n = SX_SHARERS(x);
                                 lock_delay_spin(n);
                                 x = SX_READ_VALUE(sx);
@@ -669,11 +665,10 @@ _sx_xlock_hard(struct sx *sx, uintptr_t x, int opts LO
 #ifdef KDTRACE_HOOKS
                         lda.spin_cnt += i;
 #endif
-                        KTR_STATE0(KTR_SCHED, "thread",
-                            sched_tdname(curthread), "running");
+                        KTR_STATE0(KTR_SCHED, "thread", sched_tdname(curthread),
+                            "running");
                         if (i < asx_loops)
                                 continue;
-                        sleep_reason = 2;
                 }
 sleepq:
 #endif
@@ -705,7 +700,7 @@ retry_sleepq:
                                 sleepq_release(&sx->lock_object);
                                 continue;
                         }
-                } else if (SX_SHARERS(x) > 0 && sleep_reason == 1) {
+                } else if (SX_SHARERS(x) > 0 && sleep_reason == WRITER) {
                         sleepq_release(&sx->lock_object);
                         continue;
                 }