Date: Mon, 11 Oct 2004 14:40:08 -0400
From: John Baldwin <jhb@FreeBSD.org>
To: Robert Watson <rwatson@FreeBSD.org>
Cc: cvs-all@FreeBSD.org
Subject: Re: cvs commit: src/sys/dev/random randomdev_soft.c
Message-ID: <200410111440.08998.jhb@FreeBSD.org>
In-Reply-To: <Pine.NEB.3.96L.1041009180630.14314A-100000@fledge.watson.org>
References: <Pine.NEB.3.96L.1041009180630.14314A-100000@fledge.watson.org>
On Saturday 09 October 2004 06:07 pm, Robert Watson wrote:
> On Sat, 9 Oct 2004, Robert Watson wrote:
> > - When reaping harvested entries from the queue, move all entries from
> >   the queue at once, and when done with them, insert them all into a
> >   thread-local queue for processing; then insert them all into the
> >   empty fifo at once.  This reduces O(4n) mutex operations to O(2)
> >   mutex operations per wakeup.
>
> Right now we do a slightly harmful O(2N) walk of the records to move them
> to a thread-local queue.  It would be very nice to have an O(1) tailq (and
> other list) "move them all" operation to move all entries to a local head.
> This would also be useful when dealing with other work queues in worker
> threads, such as bio queues, etc.

Like TAILQ_CONCAT? :)  I use that in the turnstile code to do such an O(1)
move:

/*
 * Put all blocked threads on the pending list.  This must be called with
 * the turnstile chain locked.
 */
void
turnstile_broadcast(struct turnstile *ts)
{
	...
	/*
	 * Transfer the blocked list to the pending list.
	 */
	mtx_lock_spin(&td_contested_lock);
	TAILQ_CONCAT(&ts->ts_pending, &ts->ts_blocked, td_lockq);
	mtx_unlock_spin(&td_contested_lock);
	...
}

/*
 * Wakeup all threads on the pending list and adjust the priority of the
 * current thread appropriately.  This must be called with the turnstile
 * chain locked.
 */
void
turnstile_unpend(struct turnstile *ts)
{
	TAILQ_HEAD(, thread) pending_threads;
	...
	/*
	 * Move the list of pending threads out of the turnstile and
	 * into a local variable.
	 */
	TAILQ_INIT(&pending_threads);
	TAILQ_CONCAT(&pending_threads, &ts->ts_pending, td_lockq);
	...
	/*
	 * Wake up all the pending threads.  If a thread is not blocked
	 * on a lock, then it is currently executing on another CPU in
	 * turnstile_wait() or sitting on a run queue waiting to resume
	 * in turnstile_wait().  Set a flag to force it to try to acquire
	 * the lock again instead of blocking.
	 */
	while (!TAILQ_EMPTY(&pending_threads)) {
		td = TAILQ_FIRST(&pending_threads);
		TAILQ_REMOVE(&pending_threads, td, td_lockq);
		MPASS(td->td_proc->p_magic == P_MAGIC);
		if (TD_ON_LOCK(td)) {
			td->td_blocked = NULL;
			td->td_lockname = NULL;
			TD_CLR_LOCK(td);
			MPASS(TD_CAN_RUN(td));
			setrunqueue(td, SRQ_BORING);
		} else {
			td->td_flags |= TDF_TSNOBLOCK;
			MPASS(TD_IS_RUNNING(td) || TD_ON_RUNQ(td));
		}
	}
	...
}

-- 
John Baldwin <jhb@FreeBSD.org>  <><  http://www.FreeBSD.org/~jhb/
"Power Users Use the Power to Serve"  =  http://www.FreeBSD.org