Date:      Sun, 25 Jun 2000 17:36:57 +0000 (GMT)
From:      Terry Lambert <tlambert@primenet.com>
To:        eischen@vigrid.com (Daniel Eischen)
Cc:        jasone@canonware.com (Jason Evans), smp@FreeBSD.ORG
Subject:   Re: SMP meeting summary
Message-ID:  <200006251736.KAA09884@usr02.primenet.com>
In-Reply-To: <Pine.SUN.3.91.1000625091445.2784A-100000@pcnet1.pcnet.com> from "Daniel Eischen" at Jun 25, 2000 09:58:27 AM

> All high-level interrupts (levels 11-15, mostly PIO serial interrupts)
> in Solaris use spin mutexes and don't use an interrupt thread.  They
> execute in the context of the thread that was currently running.  All
> other interrupts below level 11 (clock, network, disk, etc) use interrupt
> threads.
> 
> A Solaris (non-spinning) mutex will only spin while the owning thread is 
> running.  Since BSDi mutexes have owners (correct me if I'm wrong), this
> seems to be better than arbitrarily spinning.

We need to learn from Dynix (Sequent's UNIX).

The main issue blocking concurrency is access to shared resources.

Critical sectioning is actually better than mutex protection of
structures for maximizing concurrency, but few people appear to be
willing to go down this road, since it requires flattening the call
graph for much of the kernel to ensure that locks are acquired and
released at the same call level, so that stack unwinding is not
needed to permit preemption.
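
As a rough sketch of the difference (the primitives critical_enter(),
critical_exit(), mtx_lock(), mtx_unlock(), and curcpu() below are
placeholders for illustration, not any particular kernel's interfaces):

    #define MAXCPU  32

    struct mtx { volatile int m_lock; };    /* placeholder lock type */

    extern void mtx_lock(struct mtx *);
    extern void mtx_unlock(struct mtx *);
    extern void critical_enter(void);       /* disable preemption */
    extern void critical_exit(void);        /* reenable preemption */
    extern int  curcpu(void);               /* index of current CPU */

    struct stats {
            struct mtx      s_mtx;
            long            s_count;
    };

    /*
     * Data protection: the mutex travels with the structure, may be
     * taken anywhere in the call graph, and may still be held when
     * the thread calls into another subsystem, so preempting the
     * thread means dealing with its lock state.
     */
    void
    stats_bump_locked(struct stats *sp)
    {
            mtx_lock(&sp->s_mtx);
            sp->s_count++;
            mtx_unlock(&sp->s_mtx);
    }

    /*
     * Critical sectioning over per-CPU data: nothing is shared
     * between CPUs, the section never blocks, and it is exited at
     * the same call level where it was entered, so preemption can
     * always occur between sections with no lock state to unwind.
     */
    long pcpu_count[MAXCPU];

    void
    stats_bump_critical(void)
    {
            critical_enter();
            pcpu_count[curcpu()]++;
            critical_exit();
    }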

Dynix had no problem with 32 processors.  Most SVR4 variants, and
I will include Solaris in this, use mutex protection of structures,
and start to fall down drastically beyond 4 processors.

The main reason Dynix did not have this scaling issue is that it
dealt with the shared resource problem by placing most objects into
per-processor allocation/deallocation pools.  These pools were
filled from, and drained back to, system-wide pools.  Locking was
only needed when a pool had to be filled or drained, or when an
object was being migrated between CPUs.
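
A rough sketch of that arrangement, reusing the placeholder mtx
primitives from the sketch above (the names here are invented for
illustration, not Dynix or FreeBSD source):

    #define POOL_BATCH      32              /* objects moved per refill */

    struct obj {
            struct obj      *o_next;
    };

    struct pcpu_pool {                      /* private to one CPU */
            struct obj      *p_free;        /* touched with no lock held */
            int              p_count;
    };

    struct global_pool {                    /* shared by all CPUs */
            struct mtx       g_mtx;
            struct obj      *g_free;
    };

    struct obj *
    pcpu_alloc(struct pcpu_pool *pp, struct global_pool *gp)
    {
            struct obj *op;

            if (pp->p_free == NULL) {
                    /*
                     * The only point where the shared pool, and hence
                     * its lock, is touched: refill the private pool in
                     * a batch so the next POOL_BATCH allocations are
                     * contention free.
                     */
                    mtx_lock(&gp->g_mtx);
                    while (pp->p_count < POOL_BATCH && gp->g_free != NULL) {
                            op = gp->g_free;
                            gp->g_free = op->o_next;
                            op->o_next = pp->p_free;
                            pp->p_free = op;
                            pp->p_count++;
                    }
                    mtx_unlock(&gp->g_mtx);
            }
            op = pp->p_free;
            if (op != NULL) {
                    pp->p_free = op->o_next;
                    pp->p_count--;
            }
            return (op);
    }

Freeing is symmetric: objects go back onto the private pool, and the
global pool (and its lock) is only touched when a batch is drained
back to it.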


Similarly, one can consider the idea of CPU reentrancy into the
kernel to be identical to the idea of kernel preemption in all but
inter-CPU synchronization.

It would perhaps be a good idea, from this standpoint, to adopt the
realtime code recently donated to the OpenBSD project, since the
issues involved in making a kernel realtime-capable are similar to
those of ensuring SMP kernel reentrancy without blocking on resource
contention.

> Mutexes are only used in Solaris when they will be held for very small
> amounts of time.  Read/write locks and semaphores are used for all
> other instances.  While we are modifying the kernel to add mutexes,
> it would probably be worthwhile to comment those sections of code
> that could hold mutexes for something other than a very short period
> of time.  Or even use a different naming convention for those mutexes.

Anything that can hold a mutex for other than a very short time will
need to go away.  This is one of the problems with data protection
rather than critical sectioning.

Reader/writer locks are an obvious optimization, if one is to use
mutex protection of data.  Another similar optimization is intention
mode locking.  The Soft Updates dependency flooding problem, which
arises when an update is being committed to the update clock list
while someone else needs to access the list (the poor ZD Labs
benchmark results were traced in part to this), is one place where
intention mode locks would be useful in increasing concurrency.

Search AltaVista for "+intention +lock +SIX" to find the relevant
literature.
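
For reference, the compatibility relation described in that
literature looks roughly like this (a sketch only, not code from any
BSD kernel):

    /*
     * Multi-granularity lock modes: IS/IX = intention shared/exclusive
     * on a container, S/X = shared/exclusive, SIX = shared plus
     * intention exclusive (read the whole container while updating a
     * few of its members).
     */
    enum lockmode { LK_IS, LK_IX, LK_S, LK_SIX, LK_X };

    static const int compat[5][5] = {
            /*             IS  IX   S  SIX   X */
            /* IS  */    {  1,  1,  1,   1,  0 },
            /* IX  */    {  1,  1,  0,   0,  0 },
            /* S   */    {  1,  0,  1,   0,  0 },
            /* SIX */    {  1,  0,  0,   0,  0 },
            /* X   */    {  0,  0,  0,   0,  0 },
    };

    int
    lock_compatible(enum lockmode held, enum lockmode requested)
    {
            return (compat[held][requested]);
    }

A SIX holder can scan the whole container while other threads that
only read individual entries continue under IS, which is roughly the
kind of concurrency the update clock list case above is after.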


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.

