Date: Wed, 9 Jul 2003 14:36:41 -0700 (PDT)
From: Julian Elischer <julian@elischer.org>
To: Petri Helenius <pete@he.iki.fi>
Cc: freebsd-threads@freebsd.org
Subject: Re: thread scheduling priority with libkse
Message-ID: <Pine.BSF.4.21.0307091436210.22588-100000@InterJet.elischer.org>
In-Reply-To: <3F0A7425.9080300@he.iki.fi>
On Tue, 8 Jul 2003, Petri Helenius wrote:

> Daniel Eischen wrote:
>
> >The current thread. As I said before, if there are idle KSEs, then
> >one is woken to run the newly runnable thread.
> >
> I'm seeing about 200 microsecond latency when scheduling the thread on
> the other KSE. Which translates to a maximum of 2500 "spins" of the
> contested loop a second.

As I said in other mail... try setting machdep.cpu_idle_hlt to 0

> Same code runs about 500000 spins a second when no locking is involved.
>
> This is on an otherwise idle dual 2.4 GHz Xeon.
>
> If the mutex performance sounds about right, I need to redesign my
> locking to go from one contested lock to many uncontested ones, which
> sounds like a good idea anyway.
>
> >It waits until either you hit a blocking condition or the
> >quantum expires. The library is not (yet) smart enough
> >to switch out the current thread after the unlock if the
> >new owner has a higher priority. We could do that, but
> >if there are other KSEs that can run the new thread, then
> >they should get it.
> >
> But the library is smart enough to extend the quantum on a higher
> priority thread if SCHED_RR is in effect? I'm seeing multiples
> of 20ms being allocated with two runnable threads of priorities
> 15 and 20 competing for the CPU.
>
> Pete
>
> _______________________________________________
> freebsd-threads@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-threads
> To unsubscribe, send any mail to "freebsd-threads-unsubscribe@freebsd.org"
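
A minimal sketch of the knob Julian is suggesting, assuming root on a
FreeBSD box that exposes machdep.cpu_idle_hlt: read the current value and
clear it with sysctlbyname(3). The interactive equivalent is simply
"sysctl machdep.cpu_idle_hlt=0". With the knob cleared, idle CPUs spin in
the idle loop instead of executing HLT, so they do not have to be woken by
an interrupt before they can pick up a newly runnable thread, which is why
it helps the hand-off latency discussed above.

    /*
     * Sketch only: clear machdep.cpu_idle_hlt from C on FreeBSD.
     * Requires root; the sysctl must exist on this machine/kernel.
     */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <err.h>
    #include <stdio.h>

    int
    main(void)
    {
        int oldval, newval = 0;
        size_t oldlen = sizeof(oldval);

        if (sysctlbyname("machdep.cpu_idle_hlt", &oldval, &oldlen,
            &newval, sizeof(newval)) == -1)
            err(1, "sysctlbyname(machdep.cpu_idle_hlt)");
        printf("machdep.cpu_idle_hlt: %d -> 0\n", oldval);
        return (0);
    }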
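
A minimal sketch of the redesign Pete describes, going from one contested
lock to many mostly uncontested ones, assuming the shared state can be
partitioned; the bucket layout and names below are made up for
illustration, not code from either poster. Each worker mostly touches its
own bucket, so its mutex is almost never contested, and the single hot
lock disappears; only the occasional aggregate read takes every lock.

    /* Sketch: replace one global mutex with per-bucket mutexes. */
    #include <pthread.h>

    #define NBUCKETS 16     /* e.g. at least the number of worker threads */

    struct bucket {
        pthread_mutex_t lock;
        unsigned long   count;
    };

    static struct bucket buckets[NBUCKETS];

    /* Call once before starting the workers. */
    void
    buckets_init(void)
    {
        for (int i = 0; i < NBUCKETS; i++)
            pthread_mutex_init(&buckets[i].lock, NULL);
    }

    /* Each worker hashes to its own bucket, so this lock is rarely contested. */
    void
    bump(unsigned int tid)
    {
        struct bucket *b = &buckets[tid % NBUCKETS];

        pthread_mutex_lock(&b->lock);
        b->count++;
        pthread_mutex_unlock(&b->lock);
    }

    /* Aggregate reads take every lock, but are assumed to be infrequent. */
    unsigned long
    total(void)
    {
        unsigned long sum = 0;

        for (int i = 0; i < NBUCKETS; i++) {
            pthread_mutex_lock(&buckets[i].lock);
            sum += buckets[i].count;
            pthread_mutex_unlock(&buckets[i].lock);
        }
        return (sum);
    }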
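
For the SCHED_RR part of the question, a minimal sketch of how two threads
at priorities 15 and 20 might be created with the standard POSIX attribute
calls; start_rr_thread and spin_loop are hypothetical names chosen here to
mirror the numbers in the mail, not anything from libkse or the posters'
code.

    /* Sketch: start a thread with SCHED_RR policy and an explicit priority. */
    #include <pthread.h>
    #include <sched.h>

    static pthread_t
    start_rr_thread(void *(*fn)(void *), void *arg, int prio)
    {
        pthread_t td;
        pthread_attr_t attr;
        struct sched_param sp = { .sched_priority = prio };

        pthread_attr_init(&attr);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_RR);
        pthread_attr_setschedparam(&attr, &sp);
        pthread_create(&td, &attr, fn, arg);
        pthread_attr_destroy(&attr);
        return (td);
    }

    /*
     * e.g. start_rr_thread(spin_loop, NULL, 15) and
     *      start_rr_thread(spin_loop, NULL, 20),
     * where spin_loop is a hypothetical worker function.
     */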
