Date:      Sat, 13 Nov 2010 16:10:36 +0100
From:      Jilles Tjoelker <jilles@stack.nl>
To:        David Xu <davidxu@FreeBSD.org>
Cc:        src-committers@freebsd.org, svn-src-user@freebsd.org
Subject:   Re: svn commit: r214915 - user/davidxu/libthr/lib/libthr/thread
Message-ID:  <20101113151035.GB79975@stack.nl>
In-Reply-To: <201011071349.oA7Dn8Po048543@svn.freebsd.org>
References:  <201011071349.oA7Dn8Po048543@svn.freebsd.org>

On Sun, Nov 07, 2010 at 01:49:08PM +0000, David Xu wrote:
> Author: davidxu
> Date: Sun Nov  7 13:49:08 2010
> New Revision: 214915
> URL: http://svn.freebsd.org/changeset/base/214915

> Log:
>   Implement robust mutexes: the pthread_mutex locking and
>   unlocking code is reworked to support robust mutexes and
>   other mutex types that must be locked and unlocked by the kernel.

The glibc+Linux implementation avoids the system call on each robust
mutex lock/unlock by maintaining the list in userland and registering a
pointer to it with the kernel. Although this is somewhat less reliable
if a process scribbles over the list, it performs better.
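
For illustration, here is a minimal sketch of that scheme using Linux's
set_robust_list(2) and the robust_list structures from <linux/futex.h>.
The mutex layout and function names below are made up; real glibc
differs in the details (TID bits in the futex word, PI handling, the
unlock path):

/*
 * Minimal sketch of the glibc/Linux scheme: the thread registers a
 * robust-list head once via set_robust_list(2); locking and unlocking
 * then only manipulate this userland list, so no system call is needed
 * on the fast path.  The mutex layout is made up for illustration; it
 * is not the real glibc pthread_mutex_t.
 */
#define _GNU_SOURCE
#include <linux/futex.h>	/* struct robust_list{,_head} */
#include <stddef.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

struct my_robust_mutex {
	int			futex;	/* owner TID, 0 when unowned */
	struct robust_list	link;	/* entry in the per-thread list */
};

static struct robust_list_head head;

static void
register_robust_list(void)
{
	head.list.next = &head.list;	/* empty circular list */
	/* offset from a list entry back to its futex word */
	head.futex_offset = (long)offsetof(struct my_robust_mutex, futex) -
	    (long)offsetof(struct my_robust_mutex, link);
	head.list_op_pending = NULL;
	syscall(SYS_set_robust_list, &head, sizeof(head));
}

static void
robust_lock(struct my_robust_mutex *m, int tid)
{
	/* announce the operation so the kernel can recover if we die here */
	head.list_op_pending = &m->link;
	m->futex = tid;		/* real code: atomic cmpxchg + futex wait */
	m->link.next = head.list.next;	/* enqueue, then clear the marker */
	head.list.next = &m->link;
	head.list_op_pending = NULL;
	/* unlock would do the reverse: unlink, then atomically store 0 */
}

int
main(void)
{
	struct my_robust_mutex m = { 0, { NULL } };

	register_robust_list();
	robust_lock(&m, (int)syscall(SYS_gettid));
	printf("owner tid: %d\n", m.futex);
	return (0);
}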

There are various ways this list could be maintained. The glibc
approach uses a per-thread "operation in progress" field and a linked
list threaded through a field in each pthread_mutex_t, so if we want
that we should make sure pthread_mutex_t has the space for it.
Alternatively, a simple per-thread array could be used if the number of
robust mutexes owned at any one time can be limited to a fairly low
value.
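
A rough sketch of the array alternative (MAX_OWNED_ROBUST and all names
here are hypothetical, not existing libthr or glibc code; a full
implementation would still need to tell the kernel where the table
lives):

/*
 * Hypothetical sketch of the per-thread array alternative: each thread
 * keeps a small fixed table of the robust mutexes it currently owns.
 * On lock, record the mutex; on unlock, forget it.  After a crash the
 * kernel (or a recovering process) would scan the table.
 */
#include <stddef.h>
#include <stdio.h>

#define MAX_OWNED_ROBUST 16	/* arbitrary limit on owned robust mutexes */

static _Thread_local void *owned_robust[MAX_OWNED_ROBUST]; /* NULL = free */

/* record ownership; returns 0 on success, -1 if the table is full */
static int
robust_record(void *mutex)
{
	for (size_t i = 0; i < MAX_OWNED_ROBUST; i++)
		if (owned_robust[i] == NULL) {
			owned_robust[i] = mutex;
			return (0);
		}
	return (-1);	/* caller must fall back, e.g. to a syscall */
}

/* forget ownership again on unlock */
static void
robust_forget(void *mutex)
{
	for (size_t i = 0; i < MAX_OWNED_ROBUST; i++)
		if (owned_robust[i] == mutex) {
			owned_robust[i] = NULL;
			return;
		}
}

int
main(void)
{
	int dummy;	/* stand-in for a pthread_mutex_t */

	if (robust_record(&dummy) == 0)
		printf("recorded mutex at %p\n", (void *)&dummy);
	robust_forget(&dummy);
	return (0);
}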

Solaris robust mutexes used to work by entering the kernel for every
lock/unlock, but they no longer do; see
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6296770
Someone complained about that implementation being too slow.

-- 
Jilles Tjoelker


