Date: Tue, 12 Dec 2000 19:44:36 -0800
From: Alfred Perlstein <bright@wintelcom.net>
To: Jason Evans <jasone@canonware.com>
Cc: Bosko Milekic <bmilekic@technokratis.com>, net@FreeBSD.ORG, jhb@FreeBSD.ORG
Subject: Re: MEXT_IS_REF broken.
Message-ID: <20001212194436.R16205@fw.wintelcom.net>
In-Reply-To: <20001212183930.L2312@canonware.com>; from jasone@canonware.com on Tue, Dec 12, 2000 at 06:39:30PM -0800
References: <20001211014837.W16205@fw.wintelcom.net> <Pine.BSF.4.21.0012111223350.21769-100000@jehovah.technokratis.com> <20001212014429.Y16205@fw.wintelcom.net> <20001212015059.Z16205@fw.wintelcom.net> <20001212143214.H2312@canonware.com> <20001212175937.M16205@fw.wintelcom.net> <20001212183930.L2312@canonware.com>
* Jason Evans <jasone@canonware.com> [001212 18:39] wrote:
> On Tue, Dec 12, 2000 at 05:59:37PM -0800, Alfred Perlstein wrote:
> > * Jason Evans <jasone@canonware.com> [001212 14:32] wrote:
> > > A safe version:
> > > ---------------------------------------------------------------------------
> > > #define MEXTFREE(m) do { \
> > > 	struct mbuf *_mmm = (m); \
> > > 	 \
> > > 	MEXT_REM_REF(_mmm); \
> > > 	if (!atomic_cmpset_long(&_mmm->m_ext.ref_cnt->refcnt, 0, 1)) { \
> > > 		/* \
> > > 		 * Do not free; there are still references, or another \
> > > 		 * thread is freeing. \
> > > 		 */ \
> > > 	} else if (_mmm->m_ext.ext_type != EXT_CLUSTER) { \
> > > 		(*(_mmm->m_ext.ext_free))(_mmm->m_ext.ext_buf, \
> > > 		    _mmm->m_ext.ext_args); \
> > > 		_MEXT_DEALLOC_CNT(_mmm->m_ext.ref_cnt); \
> > > 	} else { \
> > > 		_MCLFREE(_mmm->m_ext.ext_buf); \
> > > 		_MEXT_DEALLOC_CNT(_mmm->m_ext.ref_cnt); \
> > > 	} \
> > > 	_mmm->m_flags &= ~M_EXT; \
> > > } while (0)
> > > ---------------------------------------------------------------------------
> > >
> > > What this does is have all threads unconditionally atomically decrement.
> > > Then, in order to determine whether to clean up, the threads do an atomic
> > > compare and set on the refcount, so that if it is 0, only one thread
> > > manages to modify the refcount, which then signifies that it will do the
> > > cleanup.
> >
> > You've just proved why we need atomic ops that aren't so clumsy.
> >
> > Your solution doesn't work; the code is broken.
> >
> > You can't use atomic_cmpset_long to run a counter like that; it's
> > the same race. You're just seeing if it goes to zero, and if it
> > doesn't, how do you decrement it? Use atomic_subtract_long? No,
> > that's the same race as the old code.
>
> Please read my code and explanation again. You seem to have missed the
> order of arguments to atomic_cmpset_long(). The cmpset call detects
> whether the refcount is already 0, and only allows one thread to increment
> it; that thread then does the cleanup.
Ok, can you please commit the fix then?
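The trick Jason describes above is a general one: every thread decrements unconditionally, then a compare-and-set from 0 to 1 elects exactly one cleaner. A minimal sketch in portable C11 atomics follows; the function and variable names are illustrative, not FreeBSD's atomic_cmpset_long() API.

```c
/*
 * Sketch of the "decrement, then cmpset-elect a cleaner" pattern.
 * ref_drop() drops one reference and returns true only for the single
 * thread that wins the right to free the underlying storage.
 */
#include <stdatomic.h>
#include <stdbool.h>

static bool
ref_drop(atomic_long *refcnt)
{
	long zero = 0;

	/* Every caller decrements unconditionally (the MEXT_REM_REF step). */
	atomic_fetch_sub(refcnt, 1);

	/*
	 * Only one thread can swing the count from 0 to 1; that thread
	 * alone sees the compare-exchange succeed, so only it cleans up.
	 * Losers either saw a nonzero count (references remain) or lost
	 * the race to another freeing thread.
	 */
	return atomic_compare_exchange_strong(refcnt, &zero, 1);
}
```

With an initial count of 2, the first drop fails the exchange (count is 1, not 0) and the second drop wins it, mirroring how only the last MEXTFREE caller runs the ext_free/_MCLFREE branch.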
--
-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]
"I have the heart of a child; I keep it in a jar on my desk."
To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-net" in the body of the message
