From owner-svn-src-head@FreeBSD.ORG Thu May 24 20:45:44 2012
Return-Path:
Delivered-To: svn-src-head@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52])
	by hub.freebsd.org (Postfix) with ESMTP id BA018106564A;
	Thu, 24 May 2012 20:45:44 +0000 (UTC)
	(envelope-from marcel@FreeBSD.org)
Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:4f8:fff6::2c])
	by mx1.freebsd.org (Postfix) with ESMTP id 9B35C8FC14;
	Thu, 24 May 2012 20:45:44 +0000 (UTC)
Received: from svn.freebsd.org (localhost [127.0.0.1])
	by svn.freebsd.org (8.14.4/8.14.4) with ESMTP id q4OKji9H059400;
	Thu, 24 May 2012 20:45:44 GMT
	(envelope-from marcel@svn.freebsd.org)
Received: (from marcel@localhost)
	by svn.freebsd.org (8.14.4/8.14.4/Submit) id q4OKjipb059398;
	Thu, 24 May 2012 20:45:44 GMT
	(envelope-from marcel@svn.freebsd.org)
Message-Id: <201205242045.q4OKjipb059398@svn.freebsd.org>
From: Marcel Moolenaar
Date: Thu, 24 May 2012 20:45:44 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
	svn-src-head@freebsd.org
X-SVN-Group: head
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc:
Subject: svn commit: r235931 - head/sys/powerpc/include
X-BeenThere: svn-src-head@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: SVN commit messages for the src tree for head/-current
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Thu, 24 May 2012 20:45:44 -0000

Author: marcel
Date: Thu May 24 20:45:44 2012
New Revision: 235931
URL: http://svn.freebsd.org/changeset/base/235931

Log:
  Fix the memory barriers for CPUs that do not like lwsync and wedge or
  cause exceptions early enough during boot that the kernel will do the
  same. Use lwsync only when compiling for LP64 and revert to the more
  proven isync when compiling for ILP32. Note that in the end (i.e.
  between revision 222198 and this change) ILP32 changed from using sync
  to using isync.
  As per Nathan, the isync is needed to make sure I/O accesses are
  properly serialized with locks, and isync tends to be more efficient
  than sync. While here, undefine __ATOMIC_ACQ and __ATOMIC_REL at the
  end of the file so as not to leak their definitions.

  Discussed with:	nwhitehorn

Modified:
  head/sys/powerpc/include/atomic.h

Modified: head/sys/powerpc/include/atomic.h
==============================================================================
--- head/sys/powerpc/include/atomic.h	Thu May 24 20:24:49 2012	(r235930)
+++ head/sys/powerpc/include/atomic.h	Thu May 24 20:45:44 2012	(r235931)
@@ -36,23 +36,30 @@
 #error this file needs sys/cdefs.h as a prerequisite
 #endif
 
-/* NOTE: lwsync is equivalent to sync on systems without lwsync */
-#define	mb()	__asm __volatile("lwsync" : : : "memory")
-#ifdef __powerpc64__
-#define	rmb()	__asm __volatile("lwsync" : : : "memory")
-#define	wmb()	__asm __volatile("lwsync" : : : "memory")
-#else
-#define	rmb()	__asm __volatile("lwsync" : : : "memory")
-#define	wmb()	__asm __volatile("eieio" : : : "memory")
-#endif
-
 /*
  * The __ATOMIC_REL/ACQ() macros provide memory barriers only in conjunction
- * with the atomic lXarx/stXcx. sequences below. See Appendix B.2 of Book II
- * of the architecture manual.
+ * with the atomic lXarx/stXcx. sequences below. They are not exposed outside
+ * of this file. See also Appendix B.2 of Book II of the architecture manual.
+ *
+ * Note that not all Book-E processors accept the light-weight sync variant.
+ * In particular, early models of E500 cores are known to wedge. Bank on all
+ * 64-bit capable CPUs to accept lwsync properly and pessimize 32-bit CPUs
+ * to use the heavier-weight sync.
 */
+
+#ifdef __powerpc64__
+#define	mb()	__asm __volatile("lwsync" : : : "memory")
+#define	rmb()	__asm __volatile("lwsync" : : : "memory")
+#define	wmb()	__asm __volatile("lwsync" : : : "memory")
 #define	__ATOMIC_REL()	__asm __volatile("lwsync" : : : "memory")
+#define	__ATOMIC_ACQ()	__asm __volatile("lwsync" : : : "memory")
+#else
+#define	mb()	__asm __volatile("isync" : : : "memory")
+#define	rmb()	__asm __volatile("isync" : : : "memory")
+#define	wmb()	__asm __volatile("isync" : : : "memory")
+#define	__ATOMIC_REL()	__asm __volatile("isync" : : : "memory")
 #define	__ATOMIC_ACQ()	__asm __volatile("isync" : : : "memory")
+#endif
 
 /*
  * atomic_add(p, v)
@@ -683,4 +690,7 @@ atomic_fetchadd_long(volatile u_long *p,
 #define	atomic_fetchadd_64	atomic_fetchadd_long
 #endif
 
+#undef __ATOMIC_REL
+#undef __ATOMIC_ACQ
+
 #endif /* ! _MACHINE_ATOMIC_H_ */