From owner-svn-src-all@freebsd.org  Wed Jul  8 18:37:09 2015
Return-Path: <owner-svn-src-all@freebsd.org>
Delivered-To: svn-src-all@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1])
	by mailman.ysv.freebsd.org (Postfix) with ESMTP id A4D8A995C17;
	Wed, 8 Jul 2015 18:37:09 +0000 (UTC)
	(envelope-from kib@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(Client did not present a certificate)
	by mx1.freebsd.org (Postfix) with ESMTPS id 7B4AE15CF;
	Wed, 8 Jul 2015 18:37:09 +0000 (UTC)
	(envelope-from kib@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.70])
	by repo.freebsd.org (8.14.9/8.14.9) with ESMTP id t68Ib9r3069694;
	Wed, 8 Jul 2015 18:37:09 GMT
	(envelope-from kib@FreeBSD.org)
Received: (from kib@localhost)
	by repo.freebsd.org (8.14.9/8.14.9/Submit) id t68Ib9ta069693;
	Wed, 8 Jul 2015 18:37:09 GMT
	(envelope-from kib@FreeBSD.org)
Message-Id: <201507081837.t68Ib9ta069693@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: kib set sender to kib@FreeBSD.org using -f
From: Konstantin Belousov <kib@FreeBSD.org>
Date: Wed, 8 Jul 2015 18:37:09 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
	svn-src-head@freebsd.org
Subject: svn commit: r285285 - head/sys/sys
X-SVN-Group: head
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-all@freebsd.org
X-Mailman-Version: 2.1.20
Precedence: list
List-Id: "SVN commit messages for the entire src tree \(except for
	"user" and "projects"\)" <svn-src-all.freebsd.org>
X-List-Received-Date: Wed, 08 Jul 2015 18:37:09 -0000

Author: kib
Date: Wed Jul  8 18:37:08 2015
New Revision: 285285

URL: https://svnweb.freebsd.org/changeset/base/285285

Log:
  Use atomic_thread_fence_rel() to ensure ordering in seq_write_begin(),
  instead of the load_rmb/rmb_load functions.  The update does not need
  to be atomic, since the write lock is owned.

  Similarly, in seq_write_end(), the update of *seqp need not be atomic;
  only the store must be atomic, with release semantics.

  For seq_read(), the natural operation is a load acquire of the sequence
  value; express this directly with atomic_load_acq_int() instead of the
  custom partial-fence implementation atomic_load_rmb_int().

  In seq_consistent(), use atomic_thread_fence_acq(), which provides the
  desired semantics of ordering the reads before the fence, and the fence
  before the re-read of *seqp, instead of the custom atomic_rmb_load_int().

  Reviewed by:	alc, bde
  Sponsored by:	The FreeBSD Foundation
  MFC after:	3 weeks

Modified:
  head/sys/sys/seq.h

Modified: head/sys/sys/seq.h
==============================================================================
--- head/sys/sys/seq.h	Wed Jul  8 18:36:37 2015	(r285284)
+++ head/sys/sys/seq.h	Wed Jul  8 18:37:08 2015	(r285285)
@@ -69,35 +69,6 @@ typedef uint32_t seq_t;
 
 #include <machine/cpu.h>
 
-/*
- * Stuff below is going away when we gain suitable memory barriers.
- *
- * atomic_load_acq_int at least on amd64 provides a full memory barrier,
- * in a way which affects performance.
- *
- * Hack below covers all architectures and avoids most of the penalty at least
- * on amd64 but still has unnecessary cost.
- */
-static __inline int
-atomic_load_rmb_int(volatile const u_int *p)
-{
-	volatile u_int v;
-
-	v = *p;
-	atomic_load_acq_int(&v);
-	return (v);
-}
-
-static __inline int
-atomic_rmb_load_int(volatile const u_int *p)
-{
-	volatile u_int v = 0;
-
-	atomic_load_acq_int(&v);
-	v = *p;
-	return (v);
-}
-
 static __inline bool
 seq_in_modify(seq_t seqp)
 {
@@ -110,14 +81,15 @@ seq_write_begin(seq_t *seqp)
 {
 
 	MPASS(!seq_in_modify(*seqp));
-	atomic_add_acq_int(seqp, 1);
+	*seqp += 1;
+	atomic_thread_fence_rel();
 }
 
 static __inline void
 seq_write_end(seq_t *seqp)
 {
 
-	atomic_add_rel_int(seqp, 1);
+	atomic_store_rel_int(seqp, *seqp + 1);
 	MPASS(!seq_in_modify(*seqp));
 }
 
@@ -127,7 +99,7 @@ seq_read(const seq_t *seqp)
 	seq_t ret;
 
 	for (;;) {
-		ret = atomic_load_rmb_int(seqp);
+		ret = atomic_load_acq_int(__DECONST(seq_t *, seqp));
 		if (seq_in_modify(ret)) {
 			cpu_spinwait();
 			continue;
 		}
@@ -142,7 +114,8 @@ static __inline seq_t
 seq_consistent(const seq_t *seqp, seq_t oldseq)
 {
 
-	atomic_thread_fence_acq();
-	return (atomic_rmb_load_int(seqp) == oldseq);
+	atomic_thread_fence_acq();
+	return (*seqp == oldseq);
 }
 
 static __inline seq_t
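
For readers following along, the intended use of this API is the usual
seqlock pattern.  The sketch below is illustrative only and is not part
of the commit: struct foo, its fields, and the foo_update()/foo_snapshot()
helpers are hypothetical names, and the writer is assumed to hold some
external lock serializing updaters (which is why, per the log, the
counter updates themselves need not be atomic).

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/seq.h>

	/* Hypothetical structure whose fields are published under a seq_t. */
	struct foo {
		seq_t	f_seq;	/* even: stable; odd: update in progress */
		u_int	f_a;
		u_int	f_b;
	};

	/* Writer: caller must hold the external write lock. */
	static void
	foo_update(struct foo *fp, u_int a, u_int b)
	{

		seq_write_begin(&fp->f_seq);	/* counter goes odd; release fence */
		fp->f_a = a;
		fp->f_b = b;
		seq_write_end(&fp->f_seq);	/* store-release; counter even again */
	}

	/* Lockless reader: retry until a consistent snapshot is observed. */
	static void
	foo_snapshot(struct foo *fp, u_int *ap, u_int *bp)
	{
		seq_t seq;

		do {
			seq = seq_read(&fp->f_seq);	/* load-acquire; spins while odd */
			*ap = fp->f_a;
			*bp = fp->f_b;
		} while (!seq_consistent(&fp->f_seq, seq));	/* acq fence, re-check */
	}

The reader never blocks: if the counter is odd, or changed between
seq_read() and seq_consistent(), the snapshot is discarded and retaken,
which is exactly the window the acquire/release pairing above fences.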