Date:      Thu, 21 Jan 2010 11:27:23 +0100
From:      Bernard van Gastel <bvgastel@bitpowder.com>
To:        Dan Nelson <dnelson@allantgroup.com>
Cc:        freebsd-hackers@freebsd.org
Subject:   Re: pthread_{mutex,cond} & fifo/starvation/scheduling policy
Message-ID:  <1B4E9B02-AA63-45BF-9BB7-3B0A2884CCB0@bitpowder.com>
In-Reply-To: <20100119184617.GB50360@dan.emsphone.com>
References:  <71A129DC-68A0-46C3-956D-C8AFF1BA29E1@bitpowder.com> <20100119184617.GB50360@dan.emsphone.com>



In a real-world application such a proposed queue would work almost always, but I'm primarily trying to exclude all starvation situations (speed is less relevant). And although such a worker can execute its work and be scheduled fairly, adding the work to the queue can still result in starvation: one of the threads trying to add to the queue could stall forever if the lock is heavily contended.

Is this possible with the POSIX threads primitives, or is the only option to use IPC such as message queues for this?

Regards,
	Bernard

On 19 Jan 2010, at 19:46, Dan Nelson wrote:

> In the last episode (Jan 19), Bernard van Gastel said:
>> I'm curious about the exact scheduling policy of POSIX threads in relation
>> to mutexes and condition variables.  If there are two threads (a & b), both
>> running the following code:
>>
>> while (1) {
>> 	pthread_mutex_lock(mutex);
>> 	...
>> 	pthread_mutex_unlock(mutex);
>> }
>>
>> What is the scheduling policy of the different thread libraries?  Do both
>> threads get an equal amount of time?  Are there no starvation issues
>> (are they executed in alternating turns)?  (A test program of mine
>> indicates that libpthread and libthr both have starvation issues, in
>> contrast to Mac OS X 10.6.)
>
> There's no guarantee of fairness when dealing with mutexes, AFAIK.  My guess
> is that if thread "a" unlocks the mutex and still has time left in its
> quantum, it'll be able to grab it again without even going to the kernel.
> From the POSIX docs on mutexes:
>
>   http://www.opengroup.org/onlinepubs/9699919799/functions/pthread_mutex_lock.html#tag_16_439_08
>
>   "Mutex objects are intended to serve as a low-level primitive from which
>    other thread synchronization functions can be built.  As such, the
>    implementation of mutexes should be as efficient as possible, and this
>    has ramifications on the features available at the interface.
>
>    The mutex functions and the particular default settings of the mutex
>    attributes have been motivated by the desire to not preclude fast,
>    inlined implementations of mutex locking and unlocking."
>
> The idea being that mutexes should be held for as little time as possible.
> Is there a way to write your code so that most of the CPU-consuming activity
> is done outside of the mutex?  Perhaps use a job queue of some sort, and
> only lock the mutex when pushing/popping elements.  Then worker processes
> can run without holding the mutex, and will be fairly scheduled by the
> kernel.
>
> --
> 	Dan Nelson
> 	dnelson@allantgroup.com




