From owner-freebsd-hackers@FreeBSD.ORG Thu Jan 21 10:27:36 2010
From: Bernard van Gastel <bvgastel@bitpowder.com>
To: Dan Nelson
Cc: freebsd-hackers@freebsd.org
Date: Thu, 21 Jan 2010 11:27:23 +0100
Subject: Re: pthread_{mutex,cond} & fifo/starvation/scheduling policy
Message-Id: <1B4E9B02-AA63-45BF-9BB7-3B0A2884CCB0@bitpowder.com>
In-Reply-To: <20100119184617.GB50360@dan.emsphone.com>
References: <71A129DC-68A0-46C3-956D-C8AFF1BA29E1@bitpowder.com> <20100119184617.GB50360@dan.emsphone.com>

In a real-world application such a proposed queue would work almost always,
but I am primarily trying to exclude all starvation situations (speed is
less relevant). And although such a worker can execute its work and be
scheduled fairly, the addition of the work to the queue can still result in
starvation: one of the threads trying to add to the queue could stall
forever if the lock is heavily contended.

Is this possible with the POSIX thread primitives? Or is the only option to
use IPC such as message queues for this? (One pthread-only candidate, a FIFO
"ticket lock" built from a mutex and a condition variable, is sketched below
the quoted reply.)

Regards,
	Bernard

On 19 Jan 2010, at 19:46, Dan Nelson wrote:

> In the last episode (Jan 19), Bernard van Gastel said:
>> I'm curious about the exact scheduling policy of POSIX threads in relation
>> to mutexes and condition variables. If there are two threads (a & b), both
>> with the following code:
>>
>> while (1) {
>>         pthread_mutex_lock(mutex);
>>         ...
>>         pthread_mutex_unlock(mutex);
>> }
>>
>> What is the scheduling policy of the different thread libraries? Do both
>> threads get an equal amount of time? Are there no starvation issues (are
>> they executed in alternating turns)? (A test program of mine indicates
>> that libpthread and libthr both have starvation issues, in contrast to
>> Mac OS X 10.6.)
>
> There's no guarantee of fairness when dealing with mutexes afaik.
> My guess is that if thread "a" unlocks the mutex and still has time left in
> its quantum, it'll be able to grab it again without even going to the
> kernel.  From the POSIX docs on mutexes:
>
> http://www.opengroup.org/onlinepubs/9699919799/functions/pthread_mutex_lock.html#tag_16_439_08
>
> "Mutex objects are intended to serve as a low-level primitive from which
> other thread synchronization functions can be built. As such, the
> implementation of mutexes should be as efficient as possible, and this
> has ramifications on the features available at the interface.
>
> The mutex functions and the particular default settings of the mutex
> attributes have been motivated by the desire to not preclude fast,
> inlined implementations of mutex locking and unlocking."
>
> The idea being that mutexes should be held for as little time as possible.
> Is there a way to write your code so that most of the CPU-consuming
> activity is done outside of the mutex?  Perhaps use a job queue of some
> sort, and only lock the mutex when pushing/popping elements.  Then worker
> processes can run without holding the mutex, and will be fairly scheduled
> by the kernel.
>
> --
> Dan Nelson
> dnelson@allantgroup.com
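
As a rough illustration of the job-queue pattern Dan suggests above, here is
a minimal sketch in C: the mutex is held only long enough to link or unlink
a node, and the actual work runs unlocked. The names (struct job, queue_push,
queue_pop, process, worker) are invented for this example, and error handling
is omitted.

	/*
	 * Minimal sketch of a mutex/condvar-protected job queue.
	 * The lock covers only the push/pop of nodes; the heavy work
	 * runs outside it.
	 */
	#include <pthread.h>
	#include <stdlib.h>

	struct job {
		struct job *next;
		void	   *payload;
	};

	static struct job	*head, *tail;
	static pthread_mutex_t	 qlock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t	 qcond = PTHREAD_COND_INITIALIZER;

	static void
	queue_push(struct job *j)
	{
		pthread_mutex_lock(&qlock);	/* held only while linking the node */
		j->next = NULL;
		if (tail != NULL)
			tail->next = j;
		else
			head = j;
		tail = j;
		pthread_cond_signal(&qcond);	/* wake one sleeping worker */
		pthread_mutex_unlock(&qlock);
	}

	static struct job *
	queue_pop(void)
	{
		struct job *j;

		pthread_mutex_lock(&qlock);	/* held only while unlinking the node */
		while (head == NULL)
			pthread_cond_wait(&qcond, &qlock);
		j = head;
		head = j->next;
		if (head == NULL)
			tail = NULL;
		pthread_mutex_unlock(&qlock);
		return (j);
	}

	static void
	process(struct job *j)
	{
		/* placeholder for the real, CPU-heavy work */
		free(j);
	}

	static void *
	worker(void *arg)
	{
		(void)arg;
		for (;;)
			process(queue_pop());	/* all heavy work runs unlocked */
	}

Because each worker spends nearly all of its time outside the critical
section, which thread runs next is decided by the kernel scheduler rather
than by who wins the race for the queue mutex.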
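
Bernard's remaining concern is the enqueue side: with a default mutex, a
producer could in principle stall indefinitely on a heavily contended queue
lock. One way to get FIFO hand-off from the pthread primitives alone is a
"ticket lock" built from a mutex and a condition variable. The sketch below
is only an illustration, not something proposed in the thread: ticket_lock_t
and the function names are invented, counter wrap-around and error checking
are ignored, and it assumes the short internal critical section does not
itself starve in practice.

	/*
	 * Sketch of a FIFO "ticket lock" built from a pthread mutex and
	 * condition variable.  Callers are served strictly in the order
	 * they took a ticket.
	 */
	#include <pthread.h>

	typedef struct {
		pthread_mutex_t m;
		pthread_cond_t  cv;
		unsigned long   next_ticket;	/* next ticket number to hand out */
		unsigned long   now_serving;	/* ticket currently allowed to proceed */
	} ticket_lock_t;

	#define TICKET_LOCK_INITIALIZER \
		{ PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 }

	static void
	ticket_lock(ticket_lock_t *t)
	{
		unsigned long my_ticket;

		pthread_mutex_lock(&t->m);
		my_ticket = t->next_ticket++;
		while (t->now_serving != my_ticket)
			pthread_cond_wait(&t->cv, &t->m);	/* sleep until our turn */
		pthread_mutex_unlock(&t->m);
	}

	static void
	ticket_unlock(ticket_lock_t *t)
	{
		pthread_mutex_lock(&t->m);
		t->now_serving++;
		pthread_cond_broadcast(&t->cv);	/* wake waiters; only the next ticket proceeds */
		pthread_mutex_unlock(&t->m);
	}

Each caller takes a ticket while holding the internal mutex and sleeps until
its number comes up, so the lock is granted in ticket order and no waiter can
be bypassed indefinitely. The cost is a broadcast wake-up on every unlock,
which fits Bernard's stated trade-off of fairness over speed.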