Date:      Thu, 24 May 2012 20:45:44 +0000 (UTC)
From:      Marcel Moolenaar <marcel@FreeBSD.org>
To:        src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject:   svn commit: r235931 - head/sys/powerpc/include
Message-ID:  <201205242045.q4OKjipb059398@svn.freebsd.org>

Author: marcel
Date: Thu May 24 20:45:44 2012
New Revision: 235931
URL: http://svn.freebsd.org/changeset/base/235931

Log:
  Fix the memory barriers for CPUs that do not accept lwsync and instead wedge
  or raise exceptions, and do so early enough during boot that the kernel will
  do the same. Use lwsync only when compiling for LP64 and revert to the more
  proven isync when compiling for ILP32. Note that in the end (i.e. between
  revision 222198 and this change) ILP32 changed from using sync to using
  isync. As per Nathan, isync is needed to make sure I/O accesses are properly
  serialized with locks, and isync tends to be more efficient than sync.
  
  While here, undefine __ATOMIC_ACQ and __ATOMIC_REL at the end of the file
  so as not to leak their definitions.
  
  Discussed with: nwhitehorn
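
For illustration, a minimal sketch (not part of the commit) of the pattern
atomic.h uses internally to build the acquire/release variants on top of a
plain atomic operation; the real file generates these with macros, and the
function names below are made up for the example:

/*
 * Acquire: run the lwarx/stwcx. based update first, then issue the acquire
 * barrier (isync on ILP32, lwsync on LP64) so that later loads cannot be
 * satisfied before the atomic operation completes.
 */
static __inline void
example_atomic_add_acq_int(volatile u_int *p, u_int v)
{

	atomic_add_int(p, v);
	__ATOMIC_ACQ();
}

/*
 * Release: issue the release barrier first so that earlier stores become
 * visible before the atomic update, then run the update.
 */
static __inline void
example_atomic_add_rel_int(volatile u_int *p, u_int v)
{

	__ATOMIC_REL();
	atomic_add_int(p, v);
}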

Modified:
  head/sys/powerpc/include/atomic.h

Modified: head/sys/powerpc/include/atomic.h
==============================================================================
--- head/sys/powerpc/include/atomic.h	Thu May 24 20:24:49 2012	(r235930)
+++ head/sys/powerpc/include/atomic.h	Thu May 24 20:45:44 2012	(r235931)
@@ -36,23 +36,30 @@
 #error this file needs sys/cdefs.h as a prerequisite
 #endif
 
-/* NOTE: lwsync is equivalent to sync on systems without lwsync */
-#define mb()		__asm __volatile("lwsync" : : : "memory")
-#ifdef __powerpc64__
-#define rmb()		__asm __volatile("lwsync" : : : "memory")
-#define wmb()		__asm __volatile("lwsync" : : : "memory")
-#else
-#define rmb()		__asm __volatile("lwsync" : : : "memory")
-#define wmb()		__asm __volatile("eieio" : : : "memory")
-#endif
-
 /*
  * The __ATOMIC_REL/ACQ() macros provide memory barriers only in conjunction
- * with the atomic lXarx/stXcx. sequences below. See Appendix B.2 of Book II
- * of the architecture manual.
+ * with the atomic lXarx/stXcx. sequences below. They are not exposed outside
+ * of this file. See also Appendix B.2 of Book II of the architecture manual.
+ *
+ * Note that not all Book-E processors accept the light-weight sync variant.
+ * In particular, early models of E500 cores are known to wedge. Bank on all
+ * 64-bit capable CPUs to accept lwsync properly and pessimize 32-bit CPUs
+ * to use the more conservative isync.
  */
+
+#ifdef __powerpc64__
+#define mb()		__asm __volatile("lwsync" : : : "memory")
+#define rmb()		__asm __volatile("lwsync" : : : "memory")
+#define wmb()		__asm __volatile("lwsync" : : : "memory")
 #define __ATOMIC_REL()	__asm __volatile("lwsync" : : : "memory")
+#define __ATOMIC_ACQ()	__asm __volatile("lwsync" : : : "memory")
+#else
+#define mb()		__asm __volatile("isync" : : : "memory")
+#define rmb()		__asm __volatile("isync" : : : "memory")
+#define wmb()		__asm __volatile("isync" : : : "memory")
+#define __ATOMIC_REL()	__asm __volatile("isync" : : : "memory")
 #define __ATOMIC_ACQ()	__asm __volatile("isync" : : : "memory")
+#endif
 
 /*
  * atomic_add(p, v)
@@ -683,4 +690,7 @@ atomic_fetchadd_long(volatile u_long *p,
 #define	atomic_fetchadd_64	atomic_fetchadd_long
 #endif
 
+#undef __ATOMIC_REL
+#undef __ATOMIC_ACQ
+
 #endif /* ! _MACHINE_ATOMIC_H_ */
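
As an aside, a hypothetical producer/consumer fragment (not from the commit,
and assuming a kernel context where <machine/atomic.h> is included) showing
the intended use of the wmb()/rmb() macros defined above; the 'data' and
'ready' variables are illustrative names only:

static volatile u_int data;
static volatile u_int ready;

static void
producer(u_int v)
{

	data = v;
	wmb();		/* order the data store before the flag store */
	ready = 1;
}

static u_int
consumer(void)
{

	while (ready == 0)
		;	/* spin until the producer publishes the flag */
	rmb();		/* order the flag load before reading the data */
	return (data);
}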


