From: John Baldwin <jhb@freebsd.org>
To: freebsd-hackers@freebsd.org
Cc: "Murty, Ravi"
Date: Thu, 17 Apr 2008 09:29:56 -0400
Subject: Re: md_spinlock_count?
Message-Id: <200804170929.57192.jhb@freebsd.org>

On Wednesday 16 April 2008 06:32:20 pm Murty, Ravi wrote:
> Hello All,
>
> I was looking at the code that creates a new process (fork) with a
> single thread coming out on the other side. I can't figure out a couple
> of things.
>
> 1. Why is the md_spinlock_count for the new thread set to 1 and not
> to 0? This happens in cpu_fork and cpu_set_upcall under the amd64 tree.

Threads begin life during a context switch, and during a context switch you
always own a spin lock that is explicitly handed off to you during
mi_switch(). New threads actually start life in fork_trampoline(), which
calls fork_exit(). fork_exit() starts off by dropping that spin lock, so we
need to set up the new thread as if it were already holding a spin lock.

> 2. If this was the "per-cpu" idle thread and the AP was booting up
> (running init_secondary), why does it grab sched_lock and call
> spinlock_exit? It would seem simpler to set the count of the idle thread
> to 0 and not have to call spinlock_exit. The only answer I can come up
> with is that a non-zero spinlock_count prevents interrupts from getting
> disabled/re-enabled to some unknown value?

First, you need the lock to enter the scheduler (when it calls cpu_throw()
or sched_throw()) so you can start executing tasks. We also don't want to
enable interrupts until we have entered the scheduler and are fully up and
running. The code in HEAD and other branches has a big comment explaining
the extra spinlock_exit(). It is related to md_spinlock_count being 1 for
new threads, as explained above: CPUs being brought online don't start up
in fork_trampoline() but use a different code path, so they need to account
for md_spinlock_count = 1 differently. Here is the comment in sched_throw()
in sched_4bsd.c on HEAD:

	/*
	 * Correct spinlock nesting.  The idle thread context that we are
	 * borrowing was created so that it would start out with a single
	 * spin lock (sched_lock) held in fork_trampoline().  Since we've
	 * explicitly acquired locks in this function, the nesting count
	 * is now 2 rather than 1.  Since we are nested, calling
	 * spinlock_exit() will simply adjust the counts without allowing
	 * spin lock using code to interrupt us.
	 */

-- 
John Baldwin
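
To make the accounting behind md_spinlock_count concrete, here is a minimal
sketch of the nesting logic, modeled on the amd64 spinlock_enter() and
spinlock_exit() routines; treat it as an illustration of the idea rather
than verbatim kernel source, since the exact code varies by branch:

	/*
	 * Sketch of per-thread spin lock nesting (illustrative, not
	 * verbatim FreeBSD source).
	 */
	void
	spinlock_enter(void)
	{
		struct thread *td = curthread;
		register_t flags;

		if (td->td_md.md_spinlock_count == 0) {
			/* Outermost spin lock: disable interrupts and
			 * remember the previous interrupt state. */
			flags = intr_disable();
			td->td_md.md_spinlock_count = 1;
			td->td_md.md_saved_flags = flags;
		} else
			td->td_md.md_spinlock_count++;
		critical_enter();
	}

	void
	spinlock_exit(void)
	{
		struct thread *td = curthread;

		critical_exit();
		td->td_md.md_spinlock_count--;
		/* Only the outermost exit restores the saved interrupt
		 * state; nested exits just decrement the count. */
		if (td->td_md.md_spinlock_count == 0)
			intr_restore(td->td_md.md_saved_flags);
	}

A thread born with md_spinlock_count = 1 therefore looks exactly like a
thread that called spinlock_enter() once: interrupts stay off until the
count drops to 0, and the unlock at the start of fork_exit() is what
finally re-enables them.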
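
Read against that sketch, the AP bring-up path described in the answer
becomes a simple count trace (again a reconstruction; exact call sites vary
by branch):

	/*
	 * Borrowed idle thread on a booting AP (illustrative trace):
	 *
	 *   thread created with count 1      -> interrupts disabled
	 *   mtx_lock_spin(&sched_lock)       -> count 2, still disabled
	 *   spinlock_exit() in sched_throw() -> count 1, still disabled
	 *   scheduler hand-off releases the
	 *   last spin lock                   -> count 0, interrupts enabled
	 */

The count starts at 1 because that is the invariant every new thread
context is created with for fork_trampoline(); the AP path simply
compensates with the nested spinlock_exit() rather than special-casing
thread creation, and interrupts cannot be re-enabled early because the
exit is nested.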