From owner-freebsd-arch@FreeBSD.ORG Sat Oct 4 07:11:45 2014
Date: Sat, 4 Oct 2014 10:11:39 +0300
From: Konstantin Belousov <kostikbel@gmail.com>
To: Mateusz Guzik
Cc: alc@freebsd.org, attilio@freebsd.org, Johan Schuijt,
	"freebsd-arch@freebsd.org"
Subject: Re: [PATCH 1/2] Implement simple sequence counters with memory barriers.
Message-ID: <20141004071139.GL26076@kib.kiev.ua>
References: <1408064112-573-1-git-send-email-mjguzik@gmail.com>
	<1408064112-573-2-git-send-email-mjguzik@gmail.com>
	<20140816093811.GX2737@kib.kiev.ua>
	<20140816185406.GD2737@kib.kiev.ua>
	<20140817012646.GA21025@dft-labs.eu>
	<20141004052851.GA27891@dft-labs.eu>
In-Reply-To: <20141004052851.GA27891@dft-labs.eu>
List-Id: Discussion related to FreeBSD architecture

On Sat, Oct 04, 2014 at 07:28:51AM +0200, Mateusz Guzik wrote:
> Reviving. Sorry everyone for such a big delay, $life.
> 
> On Tue, Aug 19, 2014 at 02:24:16PM -0500, Alan Cox wrote:
> > On Sat, Aug 16, 2014 at 8:26 PM, Mateusz Guzik wrote:
> > > Well, my memory-barrier-and-so-on-fu is rather weak.
> > > 
> > > I had another look at the issue. At least on amd64, it looks like
> > > only a compiler barrier is required for both reads and writes.
> > > 
> > > The AMD64 Architecture Programmer's Manual Volume 2: System
> > > Programming, section 7.2 Multiprocessor Memory Access Ordering,
> > > states:
> > > 
> > > "Loads do not pass previous loads (loads are not reordered). Stores
> > > do not pass previous stores (stores are not reordered)"
> > > 
> > > Since the modifying code only performs a series of writes and we
> > > expect exclusive writers, I find this applicable to our scenario.
> > > 
> > > I checked the Linux sources and the generated assembly; they indeed
> > > issue only a compiler barrier on amd64 (and for Intel processors as
> > > well).
> > > 
> > > atomic_store_rel_int on amd64 seems fine in this regard, but the
> > > function for loads issues lock cmpxchg, which kills performance
> > > (median 55693659 -> 12789232 ops in a microbenchmark) for no gain.
> > > 
> > > Additionally, release and acquire semantics seem to be a stronger
> > > guarantee than needed.
> > > 
> > 
> > This statement left me puzzled and got me to look at our x86 atomic.h
> > for the first time in years. It appears that our implementation of
> > atomic_load_acq_int() on x86 is, umm ..., unconventional. That is, it
> > is enforcing a constraint that simple acquire loads don't normally
> > enforce. For example, the C11 stdatomic.h simple acquire load doesn't
> > enforce this constraint. Moreover, our own implementation of
> > atomic_load_acq_int() on ia64, where the mapping from
> > atomic_load_acq_int() to machine instructions is straightforward,
> > doesn't enforce this constraint either.
> > 
> 
> By 'this constraint' I presume you mean a full memory barrier.
> 
> It is unclear to me whether one can just get rid of it currently. It
> definitely would be beneficial.
> 
> In the meantime, if for some reason the full barrier is still needed,
> we can speed up concurrent load_acq of the same variable considerably.
> There is no need to lock cmpxchg on the same address. We should be
> able to replace it with, roughly:
> 	lock add $0,(%rsp);
> 	movl ...;
> 
> I believe it is possible that the CPU will perform some writes before
> the read listed here, but this should be fine.
> 
> If this is considered too risky to hit 10.1, I would like to implement
> it within seq as a temporary hack to be fixed up later.
> 
> Something along the lines of:
> 
> static inline u_int
> atomic_load_acq_rmb(volatile u_int *p)
> {
> 	volatile u_int v;
> 
> 	v = *p;
> 	atomic_load_acq_int(&v);
> 	return (v);
> }

Do you need it as a designated primitive? I think you could write this
inline for the purpose of getting the fix into 10.1. With the inline
quirk, I think the fix should go into HEAD now, with some reasonable
MFC timer.

> 
> This hack fixes the aforementioned performance degradation and covers
> all architectures.
> 
> > Give us a chance to sort this out before you do anything further. As
> > Kostik said, but in different words, we've always written our
> > machine-independent layer code using acquires and releases to express
> > the required ordering constraints, and not {r,w}mb() primitives.
> > 
> 
> -- 
> Mateusz Guzik
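
For readers following the thread, here is a minimal sketch of the
sequence-counter protocol under discussion, assuming the amd64 ordering
rules quoted above (loads are not reordered with loads, stores are not
reordered with stores), so that a compiler barrier is the only fence
needed. All names here (cc_barrier, seq_write_begin, and so on) are
illustrative placeholders, not the interface that was eventually
committed:

    #include <sys/types.h>

    /* Compiler-only barrier: sufficient on amd64 per the manual quote. */
    #define	cc_barrier()	__asm __volatile("" ::: "memory")

    typedef u_int seq_t;

    /*
     * Writer side.  Callers are assumed to be exclusive (e.g. they hold
     * a lock), so plain increments suffice.
     */
    static inline void
    seq_write_begin(volatile seq_t *seqp)
    {
    	(*seqp)++;		/* count becomes odd: update in progress */
    	cc_barrier();		/* keep the data stores after the bump */
    }

    static inline void
    seq_write_end(volatile seq_t *seqp)
    {
    	cc_barrier();		/* keep the data stores before the bump */
    	(*seqp)++;		/* count becomes even: update complete */
    }

    /* Reader side: spin past in-progress writers, then read the data. */
    static inline seq_t
    seq_read(volatile const seq_t *seqp)
    {
    	seq_t v;

    	do {
    		v = *seqp;
    	} while (v & 1);	/* odd count: writer active, retry */
    	cc_barrier();		/* keep the data loads after this load */
    	return (v);
    }

    static inline int
    seq_consistent(volatile const seq_t *seqp, seq_t oldv)
    {
    	cc_barrier();		/* keep the data loads before the re-load */
    	return (*seqp == oldv);
    }

A reader copies the protected fields between seq_read() and
seq_consistent(), retrying the whole sequence when the latter fails;
the externally serialized writer brackets its updates with
seq_write_begin()/seq_write_end().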
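
Similarly, a sketch of the cheaper full-barrier load that the
"lock add $0,(%rsp); movl ...;" fragment above alludes to, assuming
amd64 and GNU-style inline assembly; load_acq_fence is a hypothetical
name, not an existing FreeBSD primitive. The locked no-op targets the
thread's own stack, so concurrent readers still get a full barrier but
stop bouncing the shared variable's cache line between CPUs:

    #include <sys/types.h>

    static inline u_int
    load_acq_fence(volatile u_int *p)
    {
    	u_int v;

    	/*
    	 * Locked no-op add on the local stack: a full barrier that
    	 * drains the store buffer, but whose target cache line is
    	 * private to this CPU, unlike a lock cmpxchg on *p itself.
    	 */
    	__asm __volatile("lock; addl $0,(%%rsp)" : : : "memory", "cc");
    	v = *p;		/* plain load; load-load ordering is free on amd64 */
    	return (v);
    }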