From owner-freebsd-arch@FreeBSD.ORG  Thu Apr  8 15:33:23 2004
Delivered-To: freebsd-arch@freebsd.org
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125])
        by hub.freebsd.org (Postfix) with ESMTP id D6E5116A4CE;
        Thu, 8 Apr 2004 15:33:23 -0700 (PDT)
Received: from fledge.watson.org (fledge.watson.org [204.156.12.50])
        by mx1.FreeBSD.org (Postfix) with ESMTP id 5242343D5A;
        Thu, 8 Apr 2004 15:33:23 -0700 (PDT)
        (envelope-from robert@fledge.watson.org)
Received: from fledge.watson.org (localhost [127.0.0.1])
        by fledge.watson.org (8.12.10/8.12.10) with ESMTP id i38MX0Pq054206;
        Thu, 8 Apr 2004 18:33:00 -0400 (EDT)
        (envelope-from robert@fledge.watson.org)
Received: from localhost (robert@localhost) id i38MX0MJ054203;
        Thu, 8 Apr 2004 18:33:00 -0400 (EDT)
        (envelope-from robert@fledge.watson.org)
Date: Thu, 8 Apr 2004 18:33:00 -0400 (EDT)
From: Robert Watson
X-Sender: robert@fledge.watson.org
To: Pawel Jakub Dawidek
In-Reply-To: <20040408213647.GL661@darkness.comp.waw.pl>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
cc: freebsd-arch@FreeBSD.org
Subject: Re: mtx_lock_recurse/mtx_unlock_recurse functions (proof-of-concept).
X-BeenThere: freebsd-arch@freebsd.org
X-Mailman-Version: 2.1.1
Precedence: list
List-Id: Discussion related to FreeBSD architecture
X-List-Received-Date: Thu, 08 Apr 2004 22:33:24 -0000

On Thu, 8 Apr 2004, Pawel Jakub Dawidek wrote:

> As was discussed, it would be helpful to have functions that are able to
> acquire a lock recursively, even if the lock itself isn't recursable.
>
> Here is a patch that should implement this functionality:
>
>    http://people.freebsd.org/~pjd/patches/mtx_lock_recurse.patch
>
> I also added a KASSERT() to protect against mixed locking modes, e.g.:
> [...]

As you know, this is of interest to me because several situations have
come up in the network stack locking code where we've worked around
locking issues by conditionally acquiring a lock based on whether it's
already held.  For example, in the rwatson_netperf branch, we
conditionally acquire the socket buffer lock in the kqueue filter because
we don't know if we're being called from a context that has already
acquired the mutex:

@@ -1875,8 +1915,11 @@
 filt_sowrite(struct knote *kn, long hint)
 {
        struct socket *so = kn->kn_fp->f_data;
-       int result;
+       int needlock, result;

+       needlock = !SOCKBUF_OWNED(&so->so_snd);
+       if (needlock)
+               SOCKBUF_LOCK(&so->so_snd);
        kn->kn_data = sbspace(&so->so_snd);
        if (so->so_state & SS_CANTSENDMORE) {
                kn->kn_flags |= EV_EOF;
@@ -1891,6 +1934,8 @@
                result = (kn->kn_data >= kn->kn_sdata);
        else
                result = (kn->kn_data >= so->so_snd.sb_lowat);
+       if (needlock)
+               SOCKBUF_UNLOCK(&so->so_snd);
        return (result);
 }

This stems directly from the fact that, in general, we wanted to avoid
recursion on socket buffer mutexes, but that in this one case, due to
existing implementation constraints, we don't know what state the lock
will be in.  That happens because filt_sowrite() is sometimes invoked
directly from the kqueue code to poll a filter, and sometimes from the
socket code during a wakeup that invokes KNOTE().

There are arguably at least three ways in which the current code might be
modified to try to address this:

(0) Use recursive mutexes so that this recursion is permitted (a rough
    sketch of what that might look like at a call site follows this
    list).

(1) Use separate APIs to enter the per-object filters so that the origin
    of the invocation (and hence the locking state) is clearer.

(2) Avoid holding existing locks over calls into KNOTE(), which goes off
    and does a lot of stuff.
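To make (0) and the proposed interface a bit more concrete, here is a
caller-side sketch of the idea.  This is an illustration only, not Pawel's
patch: the actual semantics of mtx_lock_recurse() and mtx_unlock_recurse()
(including the KASSERT() against mixed locking modes) are defined by the
patch above, and struct foo, foo_mtx, and the two functions below are
made-up names.

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

/* Hypothetical object protected by a non-recursable mutex. */
struct foo {
        struct mtx      foo_mtx;        /* initialized without MTX_RECURSE */
        int             foo_count;
};

/*
 * Today's workaround: the caller may or may not already hold foo_mtx, so
 * test ownership by hand and only lock/unlock when this call did the
 * locking.
 */
static void
foo_poke_conditional(struct foo *fp)
{
        int needlock;

        needlock = !mtx_owned(&fp->foo_mtx);
        if (needlock)
                mtx_lock(&fp->foo_mtx);
        fp->foo_count++;
        if (needlock)
                mtx_unlock(&fp->foo_mtx);
}

/*
 * With a selectively recursive interface, the call site instead states
 * explicitly that recursion is acceptable here, even though foo_mtx was
 * not created with MTX_RECURSE; pairing the _recurse variants is what the
 * patch's KASSERT() against mixed locking modes presumably enforces.
 */
static void
foo_poke_recurse(struct foo *fp)
{
        mtx_lock_recurse(&fp->foo_mtx);
        fp->foo_count++;
        mtx_unlock_recurse(&fp->foo_mtx);
}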
Neither (1) nor (2) is 100% desirable, and I guess I'd prefer (1) over
(2), since (2) conflicts with the use of locks as a form of reference
counting.  However, the easy answer, (0), is also undesirable, because the
socket code generally doesn't need recursion of the socket buffer mutex.
Given the ability to selectively say "OK, recursion here is fine," we
would change the above conditional locking into such a case.

Robert N M Watson             FreeBSD Core Team, TrustedBSD Projects
robert@fledge.watson.org      Senior Research Scientist, McAfee Research
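To make that last point concrete, here is roughly how the filt_sowrite()
hunks quoted above might read if a "recursion permitted here" acquisition
existed for socket buffer locks.  This is a sketch, not a tested change:
SOCKBUF_LOCK_RECURSE() and SOCKBUF_UNLOCK_RECURSE() are hypothetical
wrappers (presumably thin covers over mtx_lock_recurse() and
mtx_unlock_recurse()), not existing macros, and the middle of the function
not quoted above is elided.

 filt_sowrite(struct knote *kn, long hint)
 {
        struct socket *so = kn->kn_fp->f_data;
-       int needlock, result;
+       int result;

-       needlock = !SOCKBUF_OWNED(&so->so_snd);
-       if (needlock)
-               SOCKBUF_LOCK(&so->so_snd);
+       SOCKBUF_LOCK_RECURSE(&so->so_snd);      /* hypothetical macro */
        kn->kn_data = sbspace(&so->so_snd);
 [...]
                result = (kn->kn_data >= so->so_snd.sb_lowat);
-       if (needlock)
-               SOCKBUF_UNLOCK(&so->so_snd);
+       SOCKBUF_UNLOCK_RECURSE(&so->so_snd);    /* hypothetical macro */
        return (result);
 }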