Date:      Tue, 14 Dec 1999 00:16:12 +0000 (GMT)
From:      Terry Lambert <tlambert@primenet.com>
To:        adsharma@sharmas.dhs.org (Arun Sharma)
Cc:        chuckr@picnic.mat.net, nate@mt.sri.com, freebsd-arch@freebsd.org
Subject:   Re: Thread scheduling
Message-ID:  <199912140016.RAA26390@usr08.primenet.com>
In-Reply-To: <19991210201522.A4535@sharmas.dhs.org> from "Arun Sharma" at Dec 10, 99 08:15:22 pm

> > I wasn't suggesting a *single* thread across multiple processors (as I
> > think Arun asked).  Yes, that would be silly.  Is what I asked also silly,
> > as a scheduling bias, not a guarantee or a requirement?  Or would it make
> > no real difference?
> 
> This is also called gang scheduling in the OS literature. From what I
> remember, there are two advantages to this - both of them mainly
> applicable to timesharing systems:
> 
> 	- Minimize the average wait time
> 
> 		This assumes that the related threads compete for some
> 		resources and when thread A releases a resource, it makes
> 		sense to schedule thread B, which was waiting on the
> 		resource to resume immediately, without further delay

This is a thread group affinity issue, which has less to do with
thread relations than with your scheduler.  One of the primary
reasons you want to use a user space threads scheduler as part of
your architecture, even if you have "pure" kernel threads, as Linux
does, is that it allows you to sidestep the affinity question, and
the starvation and deadly-embrace deadlocks that come with trying
to put thread group affinity into your kernel scheduler.  Solving
this in a kernel scheduler is NP-hard.

It also allows you to gracefully resolve otherwise obstinate priority
inversion issues by doing priority lending in user space, which is
the natural place for it, given that resource contention is most
likely to be inter-thread and intra-process for a threaded
application.  In order to solve this in the kernel, you'd need
significant information that wouldn't be available, unless you turned
all of your mutex and semaphore operations into system calls (very
expensive).


> 	- When your memory cache is warm with the data from the swap
> 
> 		If your system is swapping under load, it makes sense
> 		to run the threads together to completion, rather than
> 		swapping data in and out repeatedly.

The value of doing this is significantly reduced by the L2 cache
being shared and much (clock multiples) slower than the L1 cache.


> However, such systems are not very common these days. Any performance
> engineer will tell you that a system that swaps will not perform. And
> no one cares about statistics like average wait time. You can buy a
> fast CPU for < $100.

Yes, swapping is your biggest cost.  I have a friend who refers to
swappable main memory as "L3 cache".

As a general note on memory, check the specifications for AltaVista,
some time 8-).


> So I don't see any particular advantage to doing this on a system which
> will most probably be used as a database server or a web server.

I don't see any advantage to it at all, unless you were to implement
a deadline-based scheduler for hard real time, and you knew
beforehand that your threads didn't share any resource contention
domains.  A very tall order.


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.



