Date: Fri, 24 Jul 2015 19:43:19 +0000 (UTC)
From: Alan Cox <alc@FreeBSD.org>
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r285854 - head/sys/amd64/include
Message-ID: <201507241943.t6OJhJaq090500@repo.freebsd.org>
Author: alc
Date: Fri Jul 24 19:43:18 2015
New Revision: 285854
URL: https://svnweb.freebsd.org/changeset/base/285854

Log:
  Add a comment discussing the appropriate use of the atomic_*() functions
  with acquire and release semantics versus the *mb() functions on amd64
  processors.

  Reviewed by:	bde (an earlier version), kib
  Sponsored by:	EMC / Isilon Storage Division

Modified:
  head/sys/amd64/include/atomic.h

Modified: head/sys/amd64/include/atomic.h
==============================================================================
--- head/sys/amd64/include/atomic.h	Fri Jul 24 19:37:30 2015	(r285853)
+++ head/sys/amd64/include/atomic.h	Fri Jul 24 19:43:18 2015	(r285854)
@@ -32,6 +32,25 @@
 #error this file needs sys/cdefs.h as a prerequisite
 #endif
 
+/*
+ * To express interprocessor (as opposed to processor and device) memory
+ * ordering constraints, use the atomic_*() functions with acquire and release
+ * semantics rather than the *mb() functions.  An architecture's memory
+ * ordering (or memory consistency) model governs the order in which a
+ * program's accesses to different locations may be performed by an
+ * implementation of that architecture.  In general, for memory regions
+ * defined as writeback cacheable, the memory ordering implemented by amd64
+ * processors preserves the program ordering of a load followed by a load, a
+ * load followed by a store, and a store followed by a store.  Only a store
+ * followed by a load to a different memory location may be reordered.
+ * Therefore, except for special cases, like non-temporal memory accesses or
+ * memory regions defined as write combining, the memory ordering effects
+ * provided by the sfence instruction in the wmb() function and the lfence
+ * instruction in the rmb() function are redundant.  In contrast, the
+ * atomic_*() functions with acquire and release semantics do not perform
+ * redundant instructions for ordinary cases of interprocessor memory
+ * ordering on any architecture.
+ */
 #define	mb()	__asm __volatile("mfence;" : : : "memory")
 #define	wmb()	__asm __volatile("sfence;" : : : "memory")
 #define	rmb()	__asm __volatile("lfence;" : : : "memory")
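
For illustration (not part of the commit), here is a minimal sketch of the
usage the new comment recommends: a producer/consumer handoff that relies on
atomic_store_rel_int() and atomic_load_acq_int() rather than wmb()/rmb().
Those two functions and cpu_spinwait() are existing FreeBSD kernel
primitives; the variable and function names below are hypothetical.

	#include <sys/types.h>
	#include <machine/atomic.h>
	#include <machine/cpu.h>

	static u_int data;		/* payload, written before flag is set */
	static volatile u_int flag;	/* 0 = not ready, 1 = ready */

	static void
	producer(void)
	{
		data = 42;		/* ordinary store */
		/*
		 * Release: the store to data becomes visible to other
		 * processors no later than the store to flag.
		 */
		atomic_store_rel_int(&flag, 1);
	}

	static u_int
	consumer(void)
	{
		/*
		 * Acquire: the later load of data cannot be performed
		 * before the load of flag that observed the value 1.
		 */
		while (atomic_load_acq_int(&flag) == 0)
			cpu_spinwait();
		return (data);
	}

On amd64 the acquire load and release store compile to ordinary mov
instructions, since the ordering the comment describes already preserves
load-load, load-store, and store-store program order; inserting wmb()/rmb()
here would emit the sfence/lfence instructions the comment calls redundant
for writeback-cacheable memory.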