From owner-freebsd-stable@FreeBSD.ORG Thu May 15 17:54:12 2008
Date: Thu, 15 May 2008 13:54:09 -0400 (EDT)
From: Daniel Eischen <deischen@freebsd.org>
To: Andriy Gapon
Cc: freebsd-stable@freebsd.org, David Xu, Brent Casavant,
    freebsd-threads@freebsd.org
Subject: Re: thread scheduling at mutex unlock

On Thu, 15 May 2008, Daniel Eischen wrote:

> On Thu, 15 May 2008, Andriy Gapon wrote:
>
>> Or, even more realistic: suppose there is a feeder thread that puts
>> things on the queue.  It would never be able to enqueue new items
>> until the queue becomes empty if the worker thread's code looks like
>> the following:
>>
>> while(1)
>> {
>>     pthread_mutex_lock(&work_mutex);
>>     while(queue.is_empty())
>>         pthread_cond_wait(...);
>>     // dequeue item
>>     ...
>>     pthread_mutex_unlock(&work_mutex);
>>     // perform some short and non-blocking processing of the item
>>     ...
>> }
>>
>> Because the worker thread (while the queue is not empty) would never
>> enter cond_wait, and would always re-lock the mutex shortly after
>> unlocking it.
>
> Well, in theory the kernel scheduler will let both threads run fairly
> with regard to their CPU usage, so this should even out the enqueueing
> and dequeueing threads.
>
> You could also optimize the above a little bit by dequeueing everything
> in the queue instead of one item at a time.
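As an aside, here is a minimal sketch of that batch-dequeue idea,
assuming a hypothetical singly linked queue_t protected by its own
mutex and condition variable; the types and the process() helper are
made up for illustration, and it is untested:

#include <pthread.h>
#include <stddef.h>

/* Hypothetical types, made up just for this sketch. */
typedef struct item {
    struct item *next;
    /* payload ... */
} item_t;

typedef struct queue {
    pthread_mutex_t mutex;
    pthread_cond_t  cv;
    item_t          *head;
    item_t          *tail;
} queue_t;

void process(item_t *item);     /* short, non-blocking per-item work */

void
drain_worker(queue_t *q)
{
    item_t *batch, *it;

    for (;;) {
        pthread_mutex_lock(&q->mutex);
        while (q->head == NULL)
            pthread_cond_wait(&q->cv, &q->mutex);

        /* Detach the whole list so the feeder can refill it at once. */
        batch = q->head;
        q->head = q->tail = NULL;
        pthread_mutex_unlock(&q->mutex);

        /* Work through the detached items outside the lock. */
        while (batch != NULL) {
            it = batch;
            batch = batch->next;
            process(it);
        }
    }
}

Detaching the whole list means the worker takes the lock once per
batch instead of once per item, so the feeder gets more chances to
run while the batch is being processed.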
I suppose you could also enforce your own scheduling with something
like the following:

pthread_cond_t writer_cv;
pthread_cond_t reader_cv;
pthread_mutex_t q_mutex;
...
thingy_q_t thingy_q;
int writers_waiting = 0;
int readers_waiting = 0;
...

void
enqueue(thingy_t *thingy)
{
    pthread_mutex_lock(&q_mutex);

    /* Insert into thingy q */
    ...

    if (readers_waiting > 0) {
        pthread_cond_broadcast(&reader_cv);
        readers_waiting = 0;
    }

    while (thingy_q.size > ENQUEUE_THRESHOLD_HIGH) {
        writers_waiting++;
        pthread_cond_wait(&writer_cv, &q_mutex);
    }
    pthread_mutex_unlock(&q_mutex);
}

thingy_t *
dequeue(void)
{
    thingy_t *thingy;

    pthread_mutex_lock(&q_mutex);
    while (thingy_q.size == 0) {
        readers_waiting++;
        pthread_cond_wait(&reader_cv, &q_mutex);
    }

    /* Dequeue thingy */
    ...

    if ((writers_waiting > 0) &&
        (thingy_q.size < ENQUEUE_THRESHOLD_LOW)) {
        /* Wake up the writers. */
        pthread_cond_broadcast(&writer_cv);
        writers_waiting = 0;
    }
    pthread_mutex_unlock(&q_mutex);
    return (thingy);
}

The above is completely untested and probably contains some bugs ;-)

You probably shouldn't need anything like that if the kernel scheduler
is scheduling your threads fairly.

-- 
DE
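For what it's worth, a hypothetical usage sketch of the above (not
from the original mail): the watermark values, the produce_thingy()
and process_thingy() helpers, and the main() wiring are made up, and
it is just as untested.  The feeder blocks once the queue grows past
the high watermark and is woken again only after the worker drains it
below the low one.

#include <pthread.h>

#define ENQUEUE_THRESHOLD_HIGH  128     /* feeder stops above this */
#define ENQUEUE_THRESHOLD_LOW    16     /* feeder is woken below this */

thingy_t *produce_thingy(void);         /* hypothetical producer */
void process_thingy(thingy_t *thingy);  /* short, non-blocking work */

void *
feeder_thread(void *arg)
{
    for (;;)
        enqueue(produce_thingy());      /* blocks above the high watermark */
    /* NOTREACHED */
    return (NULL);
}

void *
worker_thread(void *arg)
{
    for (;;)
        process_thingy(dequeue());      /* blocks while the queue is empty */
    /* NOTREACHED */
    return (NULL);
}

int
main(void)
{
    pthread_t feeder, worker;

    /*
     * q_mutex, the condition variables and thingy_q would be
     * initialized here (e.g. with PTHREAD_MUTEX_INITIALIZER).
     */
    pthread_create(&feeder, NULL, feeder_thread, NULL);
    pthread_create(&worker, NULL, worker_thread, NULL);
    pthread_join(feeder, NULL);
    pthread_join(worker, NULL);
    return (0);
}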