Date:        Fri, 16 May 2008 00:02:43 +0300
From:        Andriy Gapon <avg@icyb.net.ua>
To:          Brent Casavant <b.j.casavant@ieee.org>
Cc:          freebsd-stable@freebsd.org, David Xu <davidxu@freebsd.org>, freebsd-threads@freebsd.org
Subject:     Re: thread scheduling at mutex unlock
Message-ID:  <482CA4F3.6090501@icyb.net.ua>
In-Reply-To: <alpine.BSF.1.10.0805151345110.62691@pkunk.americas.sgi.com>
References:  <482B0297.2050300@icyb.net.ua> <482BBA77.8000704@freebsd.org> <482BF5EA.5010806@icyb.net.ua> <alpine.BSF.1.10.0805151345110.62691@pkunk.americas.sgi.com>
on 15/05/2008 22:51 Brent Casavant said the following:
> On Thu, 15 May 2008, Andriy Gapon wrote:
>
>> With current libthr behavior the GUI thread would never have a chance to
>> get the mutex, as the worker thread would always be the winner (as my
>> small program shows).
>
> The example you gave indicates an incorrect mechanism being used for the
> GUI to communicate with this worker thread.  For the behavior you desire,
> you need a common condition that lets both the GUI and the work item
> generator indicate that there is something for the worker to do, *and*
> you need separate mechanisms for the GUI and work item generator to add
> to their respective queues.

Brent,

that was just an example, and probably quite a bad one. Let me limit myself
to the program that I sent: I repeat that the result it produces is not what
I would call reasonably expected. And I repeat that I understand that the
behavior is not prohibited by the standards (well, never letting other
threads run is probably not prohibited either).

> Something like this (could be made even better with a little effort):
>
> struct worker_queue_s {
>         pthread_mutex_t         work_mutex;
>         struct work_queue_s     work_queue;
>
>         pthread_mutex_t         gui_mutex;
>         struct gui_queue_s      gui_queue;
>
>         pthread_mutex_t         stuff_mutex;
>         int                     stuff_todo;
>         pthread_cond_t          stuff_cond;
> };
> struct worker_queue_s wq;
>
> int
> main(int argc, char *argv[]) {
>         // blah blah
>         init_worker_queue(&wq);
>         // blah blah
> }
>
> void
> gui_callback(...) {
>         // blah blah
>
>         // Set up GUI message
>
>         pthread_mutex_lock(&wq.gui_mutex);
>         // Add GUI message to queue
>         pthread_mutex_unlock(&wq.gui_mutex);
>
>         pthread_mutex_lock(&wq.stuff_mutex);
>         wq.stuff_todo++;
>         pthread_cond_signal(&wq.stuff_cond);
>         pthread_mutex_unlock(&wq.stuff_mutex);
>
>         // blah blah
> }
>
> void*
> work_generator_thread(void *arg) {
>         // blah blah
>
>         while (1) {
>                 // Set up work to do
>
>                 pthread_mutex_lock(&wq.work_mutex);
>                 // Add work item to queue
>                 pthread_mutex_unlock(&wq.work_mutex);
>
>                 pthread_mutex_lock(&wq.stuff_mutex);
>                 wq.stuff_todo++;
>                 pthread_cond_signal(&wq.stuff_cond);
>                 pthread_mutex_unlock(&wq.stuff_mutex);
>         }
>
>         // blah blah
> }
>
> void*
> worker_thread(void *arg) {
>         // blah blah
>
>         while (1) {
>                 // Wait for there to be something to do
>                 pthread_mutex_lock(&wq.stuff_mutex);
>                 while (wq.stuff_todo < 1) {
>                         pthread_cond_wait(&wq.stuff_cond,
>                                           &wq.stuff_mutex);
>                 }
>                 pthread_mutex_unlock(&wq.stuff_mutex);
>
>                 // Handle GUI messages
>                 pthread_mutex_lock(&wq.gui_mutex);
>                 while (!gui_queue_empty(&wq.gui_queue)) {
>                         // dequeue and process GUI messages
>                         pthread_mutex_lock(&wq.stuff_mutex);
>                         wq.stuff_todo--;
>                         pthread_mutex_unlock(&wq.stuff_mutex);
>                 }
>                 pthread_mutex_unlock(&wq.gui_mutex);
>
>                 // Handle work items
>                 pthread_mutex_lock(&wq.work_mutex);
>                 while (!work_queue_empty(&wq.work_queue)) {
>                         // dequeue and process work item
>                         pthread_mutex_lock(&wq.stuff_mutex);
>                         wq.stuff_todo--;
>                         pthread_mutex_unlock(&wq.stuff_mutex);
>                 }
>                 pthread_mutex_unlock(&wq.work_mutex);
>         }
>
>         // blah blah
> }
>
> This should accomplish what you desire.  Caution that I haven't
> compiled, run, or tested it, but I'm pretty sure it's a solid
> solution.
>
> The key here is unifying the two input sources (the GUI and work queues)
> without blocking on either one of them individually.  The value of
> (wq.stuff_todo < 1) becomes a proxy for the value of
> (gui_queue_empty(...) && work_queue_empty(...)).
>
> I hope that helps,
> Brent
>

-- 
Andriy Gapon
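
The test program referred to above is not included in this message. A minimal
sketch of the kind of scenario discussed in the thread could look like the
following: one thread repeatedly unlocks a mutex and immediately re-locks it,
while a second thread tries to take the same mutex. The names, timings and
counters here are made up for illustration only, and whether the second thread
actually starves depends on the threads library and scheduler in use.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical sketch -- not the test program from this thread. */

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static volatile int done;
    static unsigned long worker_acquisitions;
    static unsigned long other_acquisitions;

    /* Holds the mutex almost all the time and re-locks it immediately
     * after unlocking, with no pause outside the critical section. */
    static void *
    worker(void *arg)
    {
            while (!done) {
                    pthread_mutex_lock(&m);
                    worker_acquisitions++;
                    usleep(1000);           /* "work" done under the lock */
                    pthread_mutex_unlock(&m);
            }
            return (NULL);
    }

    /* Occasionally needs the same mutex, like a GUI thread would. */
    static void *
    other(void *arg)
    {
            while (!done) {
                    pthread_mutex_lock(&m);
                    other_acquisitions++;
                    pthread_mutex_unlock(&m);
                    usleep(1000);
            }
            return (NULL);
    }

    int
    main(void)
    {
            pthread_t t1, t2;

            pthread_create(&t1, NULL, worker, NULL);
            pthread_create(&t2, NULL, other, NULL);
            sleep(5);
            done = 1;
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            printf("worker: %lu, other: %lu\n",
                worker_acquisitions, other_acquisitions);
            return (0);
    }

With a mutex unlock that hands the lock directly to a blocked waiter, the two
counters should end up in the same ballpark; with the unlock behavior described
in this thread, where the unlocking thread can win the race to re-acquire, the
second counter may stay at or near zero.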