Date: Thu, 15 May 2008 17:23:45 -0500 (CDT)
From: Brent Casavant <b.j.casavant@ieee.org>
To: Andriy Gapon
Cc: freebsd-stable@freebsd.org, freebsd-threads@freebsd.org
Subject: Re: thread scheduling at mutex unlock
In-Reply-To: <482CA4F3.6090501@icyb.net.ua>
References: <482B0297.2050300@icyb.net.ua> <482BBA77.8000704@freebsd.org>
 <482BF5EA.5010806@icyb.net.ua> <482CA4F3.6090501@icyb.net.ua>

On Fri, 16 May 2008, Andriy Gapon wrote:

> that was just an example. Probably a quite bad example.
> I should only limit myself to the program that I sent and I should
> repeat that the result that it produces is not what I would call
> reasonably expected. And I will repeat that I understand that the
> behavior is not prohibited by standards (well, never letting other
> threads to run is probably not prohibited either).

Well, I don't know what to tell you at this point.  I believe I understand
the nature of the problem you're encountering, and I believe there are
perfectly workable mechanisms in Pthreads that allow you to accomplish what
you desire without depending on implementation-specific details.  Yes, it's
more work on your part, but if done well it's one-time work.

The behavior you desire is useful only in limited situations, and it can be
implemented at the application level with Pthreads primitives.  If Pthreads
behaved as you apparently expect, it would be impossible to implement the
current behavior at the application level.

Queueing mutexes are inappropriate in the majority of code designs.  I'll
take your word that they are appropriate in your particular case, but that
does not make them appropriate for more typical designs.  Several solutions
have been presented, including one from me.  If you choose not to implement
such a solution, then best of luck to you.

OK, I'm a sucker for punishment.  So use this instead of Pthreads mutexes.
This should work on both FreeBSD and Linux (FreeBSD has some convenience
routines in the sys/queue.h package that Linux doesn't):

#include <pthread.h>
#include <sys/queue.h>
#include <errno.h>

struct thread_queue_entry_s {
	TAILQ_ENTRY(thread_queue_entry_s) tqe_list;
	pthread_cond_t tqe_cond;
	pthread_mutex_t tqe_mutex;
	int tqe_wakeup;
};
TAILQ_HEAD(thread_queue_s, thread_queue_entry_s);

typedef struct {
	struct thread_queue_s qm_queue;
	pthread_mutex_t qm_queue_lock;
	unsigned int qm_users;
} queued_mutex_t;

int queued_mutex_init(queued_mutex_t *qm)
{
	TAILQ_INIT(&qm->qm_queue);
	qm->qm_users = 0;
	return pthread_mutex_init(&qm->qm_queue_lock, NULL);
}

int queued_mutex_lock(queued_mutex_t *qm)
{
	struct thread_queue_entry_s waiter;

	pthread_mutex_lock(&qm->qm_queue_lock);
	qm->qm_users++;
	if (1 == qm->qm_users) {
		/* Nobody was waiting for mutex, we own it.
		 * Fast path out.
		 */
		pthread_mutex_unlock(&qm->qm_queue_lock);
		return 0;
	}

	/* There are others waiting for the mutex.  Slow path. */

	/* Initialize this thread's wait structure */
	pthread_cond_init(&waiter.tqe_cond, NULL);
	pthread_mutex_init(&waiter.tqe_mutex, NULL);
	pthread_mutex_lock(&waiter.tqe_mutex);
	waiter.tqe_wakeup = 0;

	/* Add this thread's wait structure to queue */
	TAILQ_INSERT_TAIL(&qm->qm_queue, &waiter, tqe_list);
	pthread_mutex_unlock(&qm->qm_queue_lock);

	/* Wait for somebody to hand the mutex to us */
	while (!waiter.tqe_wakeup) {
		pthread_cond_wait(&waiter.tqe_cond, &waiter.tqe_mutex);
	}

	/* Destroy this thread's wait structure */
	pthread_mutex_unlock(&waiter.tqe_mutex);
	pthread_mutex_destroy(&waiter.tqe_mutex);
	pthread_cond_destroy(&waiter.tqe_cond);

	/* We own the queued mutex (handed to us by unlock) */
	return 0;
}

int queued_mutex_unlock(queued_mutex_t *qm)
{
	struct thread_queue_entry_s *waiter;

	pthread_mutex_lock(&qm->qm_queue_lock);
	qm->qm_users--;
	if (0 == qm->qm_users) {
		/* No waiters to wake up.  Fast path out. */
		pthread_mutex_unlock(&qm->qm_queue_lock);
		return 0;
	}

	/* Wake up first waiter.  Slow path. */

	/* Remove the first waiting thread. */
	waiter = qm->qm_queue.tqh_first;
	TAILQ_REMOVE(&qm->qm_queue, waiter, tqe_list);
	pthread_mutex_unlock(&qm->qm_queue_lock);

	/* Wake up the thread. */
	pthread_mutex_lock(&waiter->tqe_mutex);
	waiter->tqe_wakeup = 1;
	pthread_cond_signal(&waiter->tqe_cond);
	pthread_mutex_unlock(&waiter->tqe_mutex);

	return 0;
}

int queued_mutex_destroy(queued_mutex_t *qm)
{
	pthread_mutex_lock(&qm->qm_queue_lock);
	if (qm->qm_users > 1) {
		pthread_mutex_unlock(&qm->qm_queue_lock);
		return EBUSY;
	}
	/* Must drop the queue lock before destroying it. */
	pthread_mutex_unlock(&qm->qm_queue_lock);
	return pthread_mutex_destroy(&qm->qm_queue_lock);
}

These queued_mutex_t mutexes should have the behavior you're looking for,
and will be portable to any platform with Pthreads and sys/queue.h.  Be
warned that I haven't compiled, run, or debugged this, but the code should
be pretty solid (typos aside).  Of course, in production code I'd check a
bunch of return values, but those would just get in the way of this
illustration.

So use something like this, or change the application's threading model
(as my previous post showed).  There's no use complaining about the
Pthreads implementation in this regard, because your application's use of
mutexes is the exception, not the rule.  The fact that Linux behaves as
you expect is irrelevant: POSIX doesn't speak to this facet of the
implementation, so both Linux and BSD are correct.  Relying on this
behavior on Linux is ill-advised, as it is non-portable and likely to
break in future releases.

Brent

-- 
Brent Casavant                  Dance like everybody should be watching.
www.angeltread.org
KD5EMB, EN34lv
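
P.S.  In case it helps, here is a rough sketch of how a few threads might
use the routines above.  Same caveats as before: it's untested, it assumes
the queued_mutex_* code is in the same file, and the worker() function and
the thread count are just placeholders for illustration.

static queued_mutex_t qm;	/* one queued mutex shared by all workers */

static void *worker(void *arg)
{
	(void)arg;

	queued_mutex_lock(&qm);
	/* Critical section.  Threads get here in the order they called
	 * queued_mutex_lock(), because unlock hands the mutex directly
	 * to the oldest waiter on the queue.
	 */
	queued_mutex_unlock(&qm);

	return NULL;
}

int main(void)
{
	pthread_t tid[4];
	int i;

	queued_mutex_init(&qm);
	for (i = 0; i < 4; i++)
		pthread_create(&tid[i], NULL, worker, NULL);
	for (i = 0; i < 4; i++)
		pthread_join(tid[i], NULL);
	queued_mutex_destroy(&qm);

	return 0;
}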