From: Attilio Rao <attilio@FreeBSD.org>
Date: Mon, 7 Sep 2009 08:37:25 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
	svn-src-stable@freebsd.org, svn-src-stable-7@freebsd.org
Subject: svn commit: r196912 - in stable/7/sys: kern sys

Author: attilio
Date: Mon Sep  7 08:37:25 2009
New Revision: 196912

URL: http://svn.freebsd.org/changeset/base/196912

Log:
  MFC r196334:
  Add the macro ASSERT_ATOMIC_LOAD_PTR(), enabled through INVARIANTS,
  which asserts the correct alignment of data that must be read
  atomically without locks.

  Use that macro in locking primitives.
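  For reference, the check the macro performs can be exercised from a
  standalone userland program. The sketch below is not part of the commit;
  struct fake_lock and atomic_load_ptr_ok() are hypothetical names that
  merely mirror the KASSERT expression (pointer-sized object, address
  aligned on a pointer-size boundary).

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical packed layout; 'pad' leaves lock_word misaligned. */
struct fake_lock {
	char		pad;
	uintptr_t	lock_word;
} __attribute__((packed));

/* Userland mirror of the ASSERT_ATOMIC_LOAD_PTR() condition. */
static int
atomic_load_ptr_ok(const void *p, size_t size)
{

	return (size == sizeof(void *) &&
	    ((uintptr_t)p & (sizeof(void *) - 1)) == 0);
}

int
main(void)
{
	/* A plain local word is pointer-aligned on common ABIs. */
	uintptr_t aligned;
	struct fake_lock fl;

	printf("aligned word:  %s\n",
	    atomic_load_ptr_ok(&aligned, sizeof(aligned)) ? "ok" : "FAIL");
	/* Taking the packed member's address may draw a compiler warning. */
	printf("packed member: %s\n",
	    atomic_load_ptr_ok(&fl.lock_word, sizeof(fl.lock_word)) ?
	    "ok" : "FAIL");
	return (0);
}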
Modified:
  stable/7/sys/kern/kern_mutex.c
  stable/7/sys/kern/kern_rwlock.c
  stable/7/sys/kern/kern_sx.c
  stable/7/sys/sys/systm.h

Modified: stable/7/sys/kern/kern_mutex.c
==============================================================================
--- stable/7/sys/kern/kern_mutex.c	Mon Sep  7 06:37:44 2009	(r196911)
+++ stable/7/sys/kern/kern_mutex.c	Mon Sep  7 08:37:25 2009	(r196912)
@@ -724,6 +724,9 @@ mtx_init(struct mtx *m, const char *name
 	MPASS((opts & ~(MTX_SPIN | MTX_QUIET | MTX_RECURSE |
 	    MTX_NOWITNESS | MTX_DUPOK | MTX_NOPROFILE)) == 0);
+	ASSERT_ATOMIC_LOAD_PTR(m->mtx_lock,
+	    ("%s: mtx_lock not aligned for %s:%p", __func__, name,
+	    &m->mtx_lock));
 
 #ifdef MUTEX_DEBUG
 	/* Diagnostic and error correction */

Modified: stable/7/sys/kern/kern_rwlock.c
==============================================================================
--- stable/7/sys/kern/kern_rwlock.c	Mon Sep  7 06:37:44 2009	(r196911)
+++ stable/7/sys/kern/kern_rwlock.c	Mon Sep  7 08:37:25 2009	(r196912)
@@ -137,6 +137,9 @@ rw_init_flags(struct rwlock *rw, const c
 	MPASS((opts & ~(RW_DUPOK | RW_NOPROFILE | RW_NOWITNESS | RW_QUIET |
 	    RW_RECURSE)) == 0);
+	ASSERT_ATOMIC_LOAD_PTR(rw->rw_lock,
+	    ("%s: rw_lock not aligned for %s:%p", __func__, name,
+	    &rw->rw_lock));
 
 	flags = LO_UPGRADABLE | LO_RECURSABLE;
 	if (opts & RW_DUPOK)

Modified: stable/7/sys/kern/kern_sx.c
==============================================================================
--- stable/7/sys/kern/kern_sx.c	Mon Sep  7 06:37:44 2009	(r196911)
+++ stable/7/sys/kern/kern_sx.c	Mon Sep  7 08:37:25 2009	(r196912)
@@ -166,6 +166,9 @@ sx_init_flags(struct sx *sx, const char
 	MPASS((opts & ~(SX_QUIET | SX_RECURSE | SX_NOWITNESS | SX_DUPOK |
 	    SX_NOPROFILE | SX_ADAPTIVESPIN)) == 0);
+	ASSERT_ATOMIC_LOAD_PTR(sx->sx_lock,
+	    ("%s: sx_lock not aligned for %s:%p", __func__, description,
+	    &sx->sx_lock));
 
 	flags = LO_RECURSABLE | LO_SLEEPABLE | LO_UPGRADABLE;
 	if (opts & SX_DUPOK)

Modified: stable/7/sys/sys/systm.h
==============================================================================
--- stable/7/sys/sys/systm.h	Mon Sep  7 06:37:44 2009	(r196911)
+++ stable/7/sys/sys/systm.h	Mon Sep  7 08:37:25 2009	(r196912)
@@ -96,6 +96,17 @@ extern int maxusers;		/* system tune hin
 #endif
 
 /*
+ * Assert that a pointer can be loaded from memory atomically.
+ *
+ * This assertion enforces stronger alignment than necessary.  For example,
+ * on some architectures, atomicity for unaligned loads will depend on
+ * whether or not the load spans multiple cache lines.
+ */
+#define	ASSERT_ATOMIC_LOAD_PTR(var, msg)				\
+	KASSERT(sizeof(var) == sizeof(void *) &&			\
+	    ((uintptr_t)&(var) & (sizeof(void *) - 1)) == 0, msg)
+
+/*
  * XXX the hints declarations are even more misplaced than most declarations
  * in this file, since they are needed in one file (per arch) and only used
  * in two files.
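  As an illustration of what the new assertion catches (not part of the
  commit; struct bad_softc and bad_init() are hypothetical), a lock
  embedded in a packed structure can leave its lock word misaligned, and
  an INVARIANTS kernel now panics deterministically at mtx_init() time
  rather than permitting torn lockless reads of mtx_lock later:

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

/* Hypothetical layout; __packed may misalign mtx.mtx_lock. */
struct bad_softc {
	uint8_t		flags;
	struct mtx	mtx;
} __packed;

static struct bad_softc sc;

static void
bad_init(void)
{

	/* Trips ASSERT_ATOMIC_LOAD_PTR() if sc.mtx.mtx_lock is misaligned. */
	mtx_init(&sc.mtx, "bad softc lock", NULL, MTX_DEF);
}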