Date: Tue, 12 Dec 2000 17:59:37 -0800
From: Alfred Perlstein <bright@wintelcom.net>
To: Jason Evans <jasone@canonware.com>
Cc: Bosko Milekic <bmilekic@technokratis.com>, net@FreeBSD.ORG, jhb@FreeBSD.ORG
Subject: Re: MEXT_IS_REF broken.
Message-ID: <20001212175937.M16205@fw.wintelcom.net>
In-Reply-To: <20001212143214.H2312@canonware.com>; from jasone@canonware.com on Tue, Dec 12, 2000 at 02:32:15PM -0800
References: <20001211014837.W16205@fw.wintelcom.net> <Pine.BSF.4.21.0012111223350.21769-100000@jehovah.technokratis.com> <20001212014429.Y16205@fw.wintelcom.net> <20001212015059.Z16205@fw.wintelcom.net> <20001212143214.H2312@canonware.com>
* Jason Evans <jasone@canonware.com> [001212 14:32] wrote:
> On Tue, Dec 12, 2000 at 01:51:00AM -0800, Alfred Perlstein wrote:
>
> A safe version:
> ---------------------------------------------------------------------------
> #define MEXTFREE(m) do {                                                \
>         struct mbuf *_mmm = (m);                                        \
>                                                                         \
>         MEXT_REM_REF(_mmm);                                             \
>         if (!atomic_cmpset_long(&_mmm->m_ext.ref_cnt->refcnt, 0, 1)) {  \
>                 /*                                                      \
>                  * Do not free; there are still references, or another  \
>                  * thread is freeing.                                   \
>                  */                                                     \
>         } else if (_mmm->m_ext.ext_type != EXT_CLUSTER) {               \
>                 (*(_mmm->m_ext.ext_free))(_mmm->m_ext.ext_buf,          \
>                     _mmm->m_ext.ext_args);                              \
>                 _MEXT_DEALLOC_CNT(_mmm->m_ext.ref_cnt);                 \
>         } else {                                                        \
>                 _MCLFREE(_mmm->m_ext.ext_buf);                          \
>                 _MEXT_DEALLOC_CNT(_mmm->m_ext.ref_cnt);                 \
>         }                                                               \
>         _mmm->m_flags &= ~M_EXT;                                        \
> } while (0)
> ---------------------------------------------------------------------------
>
> What this does is have all threads unconditionally atomically decrement.
> Then, in order to determine whether to clean up, the threads do an atomic
> compare-and-set on the refcount, so that if it is 0, only one thread
> manages to modify the refcount, which then signifies that it will do the
> cleanup.
>
> It looks like this should work to me, without the need for an atomic
> refcount type.  Yes, it requires two atomic operations, but it avoids the
> need for a mutex, which seems to be your main concern.

You've just proved why we need atomic ops that aren't so clumsy.

Your solution doesn't work; the code is broken.

You can't use atomic_cmpset_long to implement a counter like that.  It's
the same race: you're only checking whether the count reaches zero, and
if it doesn't, how do you decrement it?  Use atomic_subtract_long?  No,
that's the same race as the old code.

Now can you _please_ give John the go-ahead to commit this so I can fix
it and use it wherever else it's needed?

Actually, in truth I think you can get the code right like so:

	long x = _mmm->m_ext.ref_cnt->refcnt;

	while (!atomic_cmpset_long(&_mmm->m_ext.ref_cnt->refcnt, x - 1, x))
		;

But that's just gross and expensive, and it shouldn't be needed.

The reason you need 'x' is that you can't do this:

	while (!atomic_cmpset_long(&_mmm->m_ext.ref_cnt->refcnt,
	    _mmm->m_ext.ref_cnt->refcnt - 1,
	    _mmm->m_ext.ref_cnt->refcnt))
		;

because you don't know whether a race will modify the value between
pushing it on the stack as the second and the third parameter.

And since Chuck backs me up on this, can we consider this discussion
over as soon as John commits his code?

thanks,
-- 
-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]
"I have the heart of a child; I keep it in a jar on my desk."

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-net" in the body of the message
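
Below is a minimal standalone sketch of the compare-and-set retry pattern
Alfred is describing, written with C11 <stdatomic.h> rather than the
kernel's atomic_cmpset_long(); the ref_release() function and its bare
counter argument are illustrative assumptions, not the actual mbuf code.
What it shows is that the expected value has to be re-read on every failed
attempt, so a concurrent update simply causes another iteration instead of
a lost or double-counted reference.

---------------------------------------------------------------------------
/*
 * Sketch only: CAS-retry reference release in userspace C11, standing in
 * for the kernel atomic_cmpset_long() loop discussed above.
 */
#include <stdatomic.h>
#include <stdbool.h>

static bool
ref_release(atomic_ulong *refcnt)
{
	unsigned long old = atomic_load(refcnt);

	/*
	 * Try to swap in old - 1.  On failure, C11 stores the freshly
	 * observed counter value back into 'old', so the next attempt
	 * works from current data rather than a stale snapshot.
	 */
	while (!atomic_compare_exchange_weak(refcnt, &old, old - 1))
		;

	/* True iff this caller dropped the last reference and may free. */
	return (old == 1);
}
---------------------------------------------------------------------------

Read against MEXTFREE(): the caller that sees the count go from 1 to 0 is
the one that would go on to run ext_free or _MCLFREE, and the retry loop
is exactly the extra expense Alfred calls gross above.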