From owner-freebsd-arch@FreeBSD.ORG Tue Aug 24 14:29:16 2010
From: Stephan Uphoff <ups@freebsd.org>
Date: Tue, 24 Aug 2010 10:02:34 -0400
To: Max Laier
Cc: freebsd-arch@freebsd.org
Subject: Re: rmlock(9) two additions
Message-ID: <4C73D0FA.5030102@freebsd.org>
In-Reply-To: <201008240045.15998.max@laiers.net>
References: <201008160515.21412.max@love2party.net> <4C7042BA.8000402@freebsd.org> <201008222105.47276.max@love2party.net> <201008240045.15998.max@laiers.net>
List-Id: Discussion related to FreeBSD architecture

Max Laier wrote:
> On Sunday 22 August 2010 21:05:47 Max Laier wrote:
>> On Saturday 21 August 2010 23:18:50 Stephan Uphoff wrote:
>>> Max Laier wrote:
>>> ...
>>> The rm_try_rlock() will never succeed when the lock was last locked
>>> as writer.
>>> Maybe add:
>>>
>>>  void
>>>  _rm_wunlock(struct rmlock *rm)
>>>  {
>>> +	rm->rm_noreadtoken = 0;
>>>  	mtx_unlock(&rm->rm_lock);
>>>  }
>>>
>>> But then
>>>
>>>  _rm_wlock(struct rmlock *rm)
>>>
>>> always needs to use IPIs - even when the lock was last used as a
>>> write lock.
>>>
>> I don't think this is a big problem - I can't see many use cases for
>> rmlocks where you'd routinely see repeated wlocks without rlocks
>> between them.  However, I think there should be a release memory
>> barrier before/while clearing rm_noreadtoken; otherwise readers may
>> not see the data writes that are supposed to be protected by the lock.
>>
>>> ...
>>
>> I believe either solution will work.  #1 is a bit more in the spirit
>> of the rmlock - i.e. make the read case cheap and the write case
>> expensive.  I'm just not sure about the lock semantics.
>>
>> I guess a
>>
>>   atomic_store_rel_int(&rm->rm_noreadtoken, 0);
>>
>> should work.
>
> Thinking about this for a while makes me wonder: are readers really
> guaranteed to see all the updates of a writer - even in the current
> version?
>
> Example:
>
> writer thread:
>   rm_wlock();    // lock mtx, IPI, wait for reader drain
>   modify_data();
>   rm_wunlock();  // unlock mtx (this does an atomic_*_rel)
>
> reader thread #1:
>   // failed to get the lock, spinning/waiting on mtx
>   mtx_lock();    // this does an atomic_*_acq -> this CPU sees the new data
>   rm->rm_noreadtoken = 0;  // now new readers can enter quickly
>   ...
>
> reader thread #2 (on a different CPU than reader #1):
>   // enters rm_rlock() "after" rm_noreadtoken was reset -> no memory barrier
>   // does this thread see the modifications?
>
> I realize this is a somewhat pathological case, but would it be
> possible in theory?  Or is the compiler_memory_barrier() actually
> enough?
>
> Otherwise, I think we need an IPI on rm_wunlock() that does an
> atomic_*_acq on every CPU.
>
> Thoughts?
>   Max

Yes - this is a problem that needs to be addressed.
Fortunately, most platforms won't need to be that strict, so I suggest
making this a per-platform parameter.

An alternative from my original design is to use a bitmap for
rm_noreadtoken: each CPU would have an associated bit that is only
cleared by that CPU. This would also allow targeted IPIs to only the
token holders.

Stephan