Date:      Tue, 29 Sep 2009 14:25:45 -0400
From:      John Baldwin <jhb@freebsd.org>
To:        Attilio Rao <attilio@freebsd.org>
Cc:        hackers@freebsd.org, Fabio Checconi <fabio@freebsd.org>
Subject:   Re: sx locks and memory barriers
Message-ID:  <200909291425.46134.jhb@freebsd.org>
In-Reply-To: <3bbf2fe10909290839w305c85c3t1532bd7733c39a6a@mail.gmail.com>
References:  <20090924224935.GW473@gandalf.sssup.it> <3bbf2fe10909290839w305c85c3t1532bd7733c39a6a@mail.gmail.com>

On Tuesday 29 September 2009 11:39:37 am Attilio Rao wrote:
> 2009/9/25 Fabio Checconi <fabio@freebsd.org>:
> > Hi all,
> >  looking at sys/sx.h I have some trouble understanding this comment:
> >
> >  * A note about memory barriers.  Exclusive locks need to use the same
> >  * memory barriers as mutexes: _acq when acquiring an exclusive lock
> >  * and _rel when releasing an exclusive lock.  On the other side,
> >  * shared lock needs to use an _acq barrier when acquiring the lock
> >  * but, since they don't update any locked data, no memory barrier is
> >  * needed when releasing a shared lock.
> >
> > In particular, I don't understand what prevents the following sequence
> > from happening:
> >
> > CPU A                                   CPU B
> >
> > sx_slock(&data->lock);
> >
> > sx_sunlock(&data->lock);
> >
> > /* reordered after the unlock
> >   by the cpu */
> > if (data->buffer)
> >                                        sx_xlock(&data->lock);
> >                                        free(data->buffer);
> >                                        data->buffer = NULL;
> >                                        sx_xunlock(&data->lock);
> >
> >        a = *data->buffer;
> >
> > IOW, even if readers do not modify the data protected by the lock,
> > without a release barrier a memory access may leak past the unlock (as
> > the cpu won't notice any dependency between the unlock and the fetch,
> > feeling free to reorder them), thus potentially racing with an exclusive
> > writer accessing the data.
> >
> > On architectures where atomic ops serialize memory accesses this would
> > never happen, otherwise the sequence above seems possible; am I missing
> > something?
> 
> I think your concerns are right, possibly we need this patch:
> http://www.freebsd.org/~attilio/sxrw_unlockb.diff

Actually, since you are only worried about reads, I think this should be 
an "acq" barrier rather than a "rel".  In some cases "acq" is cheaper, so we 
should prefer the cheapest barrier that provides what we need.  You would 
still need to keep some language about the memory barriers since using "acq" 
for shared unlocking is different from exclusive unlocking.

I can't recall why I thought this was ok originally; sadly, my p4 logs didn't 
include the reasoning either. :-/

> However, speaking with John, we agreed there is possibly a more serious
> breakage: memory barriers would also need to ensure that the compiler
> does not reorder the operations, while right now, in FreeBSD, they only
> take care of reordering from the architecture's perspective.
> The only way I'm aware of that GCC offers for this is to clobber memory.
> I will provide a patch that addresses this soon, hoping that GCC will be
> smart enough not to add too much overhead from the memory clobbering but
> instead to understand our purpose and serve it (I will try to compare the
> code generated before and after the patch, at least for the tier-1
> architectures).

The memory clobber is quite heavyweight.  It actually forces gcc to forget any 
cached memory items in registers and reload everything again.  What I really 
want is just a barrier to tell GCC to not reorder things.  If I read a value 
in the program before acquiring a lock it is in theory fine to keep that 
cached across the barrier.  However, there isn't a way to do this sort of 
thing with GCC currently.
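
For example, with the clobber in place GCC has to assume that any global it
had cached in a register may have changed, even when it has nothing to do
with the lock being acquired (toy names again, purely to illustrate the
effect):

#include <sys/types.h>
#include <machine/atomic.h>

extern int shared_config;

int
toy_example(volatile u_int *lockp)
{
        int a, b;

        a = shared_config;                   /* may be cached in a register */
        while (!atomic_cmpset_acq_int(lockp, 0, 1))
                ;                            /* spin until the lock is ours */
        __asm __volatile("" : : : "memory"); /* the clobber being discussed */
        b = shared_config;                   /* must be reloaded from memory */
        return (a + b);
}

It is that kind of forced reload of data unrelated to the lock that makes the
clobber more expensive than a pure "do not reorder" directive would be.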

-- 
John Baldwin
