From owner-svn-src-all@FreeBSD.ORG Mon Nov 3 13:14:35 2014
Message-Id: <201411031314.sA3DEYX7096436@svn.freebsd.org>
From: Mateusz Guzik <mjg@FreeBSD.org>
Date: Mon, 3 Nov 2014 13:14:34 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r274048 - head/sys/sys

Author: mjg
Date: Mon Nov 3 13:14:34 2014
New Revision: 274048
URL: https://svnweb.freebsd.org/changeset/base/274048

Log:
  Fix misplaced read memory barrier in seq.

  Impact on capability races was small: it was possible to get a spurious
  ENOTCAPABLE (early return), but it was not possible to bypass checks.

  Tidy up some comments.

Modified:
  head/sys/sys/seq.h

Modified: head/sys/sys/seq.h
==============================================================================
--- head/sys/sys/seq.h	Mon Nov 3 13:02:58 2014	(r274047)
+++ head/sys/sys/seq.h	Mon Nov 3 13:14:34 2014	(r274048)
@@ -70,16 +70,16 @@ typedef uint32_t seq_t;
 #include <machine/cpu.h>
 
 /*
- * This is a temporary hack until memory barriers are cleaned up.
+ * Stuff below is going away when we gain suitable memory barriers.
  *
  * atomic_load_acq_int at least on amd64 provides a full memory barrier,
- * in a way which affects perforance.
+ * in a way which affects performance.
  *
  * Hack below covers all architectures and avoids most of the penalty at least
- * on amd64.
+ * on amd64 but still has unnecessary cost.
  */
 static __inline int
-atomic_load_acq_rmb_int(volatile u_int *p)
+atomic_load_rmb_int(volatile u_int *p)
 {
 	volatile u_int v;
 
@@ -88,6 +88,16 @@ atomic_load_acq_rmb_int(volatile u_int *
 	return (v);
 }
 
+static __inline int
+atomic_rmb_load_int(volatile u_int *p)
+{
+	volatile u_int v = 0;
+
+	atomic_load_acq_int(&v);
+	v = *p;
+	return (v);
+}
+
 static __inline bool
 seq_in_modify(seq_t seqp)
 {
@@ -117,7 +127,7 @@ seq_read(seq_t *seqp)
 	seq_t ret;
 
 	for (;;) {
-		ret = atomic_load_acq_rmb_int(seqp);
+		ret = atomic_load_rmb_int(seqp);
 		if (seq_in_modify(ret)) {
 			cpu_spinwait();
 			continue;
@@ -132,7 +142,7 @@ static __inline seq_t
 seq_consistent(seq_t *seqp, seq_t oldseq)
 {
 
-	return (atomic_load_acq_rmb_int(seqp) == oldseq);
+	return (atomic_rmb_load_int(seqp) == oldseq);
 }
 
 static __inline seq_t
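For readers not familiar with seq(9), here is a minimal userspace sketch of the
reader-side pattern the change above is about. It uses C11 fences instead of the
kernel's atomic_*/rmb primitives, and the names (struct shared, reader_begin,
reader_consistent, reader_snapshot) are illustrative only, not part of the kernel
API. It shows why the two call sites need the barrier on different sides of the
counter load: in seq_read() the barrier goes after the counter load so the
following data reads cannot start early (atomic_load_rmb_int), while in
seq_consistent() the barrier must go before the re-read of the counter so the
preceding data reads are ordered first (the new atomic_rmb_load_int).

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

struct shared {
	_Atomic uint32_t seq;	/* odd while a writer is mid-update */
	_Atomic int a;		/* data protected by the counter */
	_Atomic int b;
};

static bool
seq_is_in_modify(uint32_t s)
{

	return ((s & 1) != 0);
}

/* Analogue of seq_read(): load the counter, then fence before the data reads. */
static uint32_t
reader_begin(struct shared *sp)
{
	uint32_t s;

	for (;;) {
		s = atomic_load_explicit(&sp->seq, memory_order_relaxed);
		if (!seq_is_in_modify(s))
			break;
		/* writer in progress; keep spinning (kernel uses cpu_spinwait()) */
	}
	atomic_thread_fence(memory_order_acquire);
	return (s);
}

/* Analogue of seq_consistent(): fence *before* re-reading the counter. */
static bool
reader_consistent(struct shared *sp, uint32_t old)
{

	atomic_thread_fence(memory_order_acquire);
	return (atomic_load_explicit(&sp->seq, memory_order_relaxed) == old);
}

/* Typical reader: retry until a stable snapshot of a and b is observed. */
static void
reader_snapshot(struct shared *sp, int *ap, int *bp)
{
	uint32_t s;

	do {
		s = reader_begin(sp);
		*ap = atomic_load_explicit(&sp->a, memory_order_relaxed);
		*bp = atomic_load_explicit(&sp->b, memory_order_relaxed);
	} while (!reader_consistent(sp, s));
}

A writer in this sketch would bump seq to an odd value, update a and b, then bump
it to the next even value with matching release ordering; readers retry until the
counter reads the same even value before and after their data accesses.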