Date:      Thu, 17 Apr 2008 09:29:56 -0400
From:      John Baldwin <jhb@freebsd.org>
To:        freebsd-hackers@freebsd.org
Cc:        "Murty, Ravi" <ravi.murty@intel.com>
Subject:   Re: md_spinlock_count?
Message-ID:  <200804170929.57192.jhb@freebsd.org>
In-Reply-To: <AEBCFC23C0E40949B10BA2C224FC61B006F81861@orsmsx416.amr.corp.intel.com>
References:  <AEBCFC23C0E40949B10BA2C224FC61B006F81861@orsmsx416.amr.corp.intel.com>

On Wednesday 16 April 2008 06:32:20 pm Murty, Ravi wrote:
> Hello All,
>
>
>
> I was looking at the code that creates a new process (fork) with a
> single thread coming out on the other side. I can't figure out a couple
> of things.
>
>
>
> 1.	Why is the md_spinlock_count for the new thread set to 1 and not
> to 0. This happens in cpu_fork and cpu_set_upcall under the amd64 tree.

Threads begin life during a context switch, and during a context switch you 
always own a spin lock that is explicitly handed off to you in mi_switch().  
New threads actually start life in fork_trampoline(), which calls 
fork_exit().  fork_exit() starts off by dropping that spin lock, so the new 
thread has to be set up as if it were already holding one.

> 2.	If this was the "per-cpu" idle thread and the AP was booting up
> (running init_secondary) why does it grab sched_lock and call
> spinlock_exit. It would seem simpler to set the count of the idle thread
> to 0 and not have to call spinlock_exit. The only answer I can come up
> with is the fact that a non-zero spinlock count prevents interrupts from
> getting disabled/re-enabled to some unknown value?

First, you need the lock to enter the scheduler (when it calls cpu_throw() or 
sched_throw()) so you can start executing tasks.  We also don't want to 
enable interrupts until we have entered the scheduler and are fully up and 
running.  The code in HEAD and other branches has a big comment explaining 
the extra spinlock_exit().  It ties back to md_spinlock_count being 1 for new 
threads as explained above: APs don't start up in fork_trampoline() but use a 
different code path, so they have to account for the md_spinlock_count = 1 
differently.  Here is the comment in sched_throw() in sched_4bsd.c on HEAD:

	/*
	 * Correct spinlock nesting.  The idle thread context that we are
	 * borrowing was created so that it would start out with a single
	 * spin lock (sched_lock) held in fork_trampoline().  Since we've
	 * explicitly acquired locks in this function, the nesting count
	 * is now 2 rather than 1.  Since we are nested, calling
	 * spinlock_exit() will simply adjust the counts without allowing
	 * spin lock using code to interrupt us.
	 */

-- 
John Baldwin


