Date:      Sun, 20 May 2007 16:07:53 -0700 (PDT)
From:      Jeff Roberson <jroberson@chesapeake.net>
To:        arch@freebsd.org
Subject:   sched_lock && thread_lock()
Message-ID:  <20070520155103.K632@10.0.0.1>

Attilio and I have been working on addressing the increasing problem of 
sched_lock contention on -CURRENT.  Attilio has been taking the parts of 
the kernel which do not need to fall under the scheduler lock and moving 
them under separate locks: for example, the ldt/gdt lock and clock lock 
committed earlier, as well as converting the vmcnt structure to atomics.

I have been working on an approach that uses per-thread locks rather than 
a global scheduler lock.  The design is similar to Solaris's container 
locks, although the details differ.  The basic idea is to have a pointer 
in the thread structure that points at a spinlock protecting the thread.  
This spinlock may be the scheduler lock, a turnstile lock, or a sleep 
queue lock.  As the thread changes state from running to blocked on a 
lock or sleeping, the lock changes with it.
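To illustrate the idea, here is a minimal userspace sketch in C11.  The names (thread_lock_set, td_lock) and the atomics-based spinlock are hypothetical simplifications for illustration; the actual patch uses the kernel's own spinlock primitives and structures.  The key property is that the new container lock is acquired before the pointer is switched, so the thread is never left unprotected during the migration:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical, simplified spinlock built on C11 atomics. */
struct spinlock {
	atomic_bool locked;
};

static void
spin_lock(struct spinlock *sl)
{
	bool expected = false;

	/* Spin until we flip the flag from false to true. */
	while (!atomic_compare_exchange_weak_explicit(&sl->locked,
	    &expected, true, memory_order_acquire, memory_order_relaxed))
		expected = false;
}

static void
spin_unlock(struct spinlock *sl)
{
	atomic_store_explicit(&sl->locked, false, memory_order_release);
}

/* Each thread points at whichever container lock protects it right now. */
struct thread {
	_Atomic(struct spinlock *) td_lock;	/* sched, turnstile, or sleepq */
};

/*
 * Migrate the thread's protecting lock, e.g. from the scheduler lock to
 * a turnstile lock as the thread blocks.  The caller holds the old lock;
 * the new lock is taken before the pointer is switched, then the old
 * lock is released.
 */
static void
thread_lock_set(struct thread *td, struct spinlock *new_lock)
{
	struct spinlock *old_lock = atomic_load(&td->td_lock);

	spin_lock(new_lock);
	atomic_store_explicit(&td->td_lock, new_lock, memory_order_release);
	spin_unlock(old_lock);
}
```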

This has several advantages.  The majority of the kernel simply calls 
thread_lock(), which figures out the details.  The kernel then knows 
nothing of the particulars of the scheduler locks, and the schedulers are 
free to implement them in any way that they like.  Furthermore, in some 
cases the locking is reduced, because locking the thread has the side 
effect of locking its container.
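A sketch of how a generic thread_lock() can work without knowing which container currently owns the thread (again hypothetical userspace C11, not the patch's actual code): acquire the lock the pointer names, then re-check the pointer, because the thread may have migrated to a different container while we were spinning.

```c
#include <stdatomic.h>
#include <stdbool.h>

struct spinlock {
	atomic_bool locked;
};

static void
spin_lock(struct spinlock *sl)
{
	bool expected = false;

	while (!atomic_compare_exchange_weak_explicit(&sl->locked,
	    &expected, true, memory_order_acquire, memory_order_relaxed))
		expected = false;
}

static void
spin_unlock(struct spinlock *sl)
{
	atomic_store_explicit(&sl->locked, false, memory_order_release);
}

struct thread {
	_Atomic(struct spinlock *) td_lock;
};

/*
 * Lock whichever spinlock currently protects the thread.  The pointer
 * can change while we spin, so re-check it once the lock is held and
 * retry if the thread has moved to another container.
 */
static void
thread_lock(struct thread *td)
{
	struct spinlock *sl;

	for (;;) {
		sl = atomic_load_explicit(&td->td_lock,
		    memory_order_acquire);
		spin_lock(sl);
		if (sl == atomic_load_explicit(&td->td_lock,
		    memory_order_acquire))
			return;		/* still the right lock */
		spin_unlock(sl);	/* thread migrated; try again */
	}
}

static void
thread_unlock(struct thread *td)
{
	/* Safe: the holder of td_lock prevents further migration. */
	spin_unlock(atomic_load(&td->td_lock));
}
```

The retry loop is what lets the rest of the kernel stay ignorant of which container owns the thread: callers pay at most a few extra spins when a migration races with the acquire.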

This patch does not implement per-cpu scheduler locks.  It just changes 
the kernel to support this model.  I have a fork of ULE in development 
that runs with per-cpu locks, but it is not ready yet.  This means that 
there should be very little change in system performance until the 
scheduler catches up.  In fact, on a 2-cpu system the difference is 
immeasurable, or nearly so, on every workload I have tested.  On an 8-way 
Opteron system the results vary between +10% on some reasonable workloads 
and -15% on super-smack, which has some inherent problems of its own; I 
do not believe it is exposing a real performance problem with this patch.

This has also been tested extensively by Kris and me on a variety of 
machines, and I believe it to be fairly solid.  The only thing remaining 
to do is fix rusage so that it does not rely on a global scheduler lock.

I am posting the patch here in case anyone with specific knowledge of 
turnstiles, sleepqueues, or signals would like to review it, and as a 
general heads up to people interested in where the kernel is headed.

This will apply to -CURRENT just prior to my kern_clock.c commits.  I 
will re-merge and update again in the next few days, probably after we 
sort out rusage.

http://people.freebsd.org/~jeff/threadlock.diff

Thanks,
Jeff


