From owner-freebsd-arch@FreeBSD.ORG Fri Sep 13 04:10:05 2013
Message-ID: <52329012.2050408@freebsd.org>
Date: Fri, 13 Sep 2013 12:09:54 +0800
From: Julian Elischer <julian@freebsd.org>
To: Dheeraj Kandula
Subject: Re: Why do we need to acquire the current thread's lock before context switching?
References: <201309120824.52916.jhb@freebsd.org>
Cc: "freebsd-arch@freebsd.org"

On 9/13/13 4:44 AM, Dheeraj Kandula wrote:
> # svn diff
> Index: sys/sys/proc.h
> ===================================================================
> --- sys/sys/proc.h	(revision 255488)
> +++ sys/sys/proc.h	(working copy)
> @@ -197,12 +197,44 @@
>  };
>  
>  /*
> + * Comments by: Svatopluk Kraus & John Baldwin
> + *
> + * Svatopluk Kraus' comment:
> + * Think of td_lock as something lent by the thread's current owner. If a
> + * thread is running, it is owned by the scheduler and td_lock points to
> + * the scheduler lock. If a thread is sleeping, it is owned by a sleep
> + * queue and td_lock points to the sleep queue lock. If a thread is
> + * contested, it is owned by a turnstile queue and td_lock points to the
> + * turnstile queue lock. And so on. This way an owner can work with the
> + * threads it owns safely without a giant lock. The td_lock pointer is
> + * changed atomically, so this is safe.
> + *
> + * John Baldwin's comment:
> + * For example: take a thread that is asleep on a sleep queue. td_lock
> + * points to the relevant SC_LOCK() for the sleep queue chain in that
> + * case, so any other thread that wants to examine that thread's state
> + * ends up locking the sleep queue while it examines that thread. In
> + * particular, the thread that is doing a wakeup() can resume all of the
> + * sleeping threads for a wait channel by holding the one SC_LOCK() for
> + * that wait channel, since that will be td_lock for all those threads.
> + *
> + * In general mutexes are only unlocked by the thread that locks them,
> + * and the td_lock of the old thread is unlocked during sched_switch().
> + * However, the old thread has to grab td_lock of the new thread during
> + * sched_switch() and then hand it off to the new thread when it resumes.
> + * This is why sched_throw() and sched_switch() in ULE directly assign
> + * 'mtx_lock' of the run queue lock before calling cpu_throw() or
> + * cpu_switch(). That gives the effect that the new thread resumes while
> + * holding the lock pointed to by its td_lock.
> + */
> +/*
>   * Kernel runnable context (thread).
>   * This is what is put to sleep and reactivated.
>   * Thread context.  Processes may have multiple threads.
>   */
>  struct thread {
> -	struct mtx	*volatile td_lock; /* replaces sched lock */
> +	struct mtx	*volatile td_lock; /* replaces sched lock. See the
> +					    * comment above for details. */
>  	struct proc	*td_proc;	/* (*) Associated process. */
>  	TAILQ_ENTRY(thread) td_plist;	/* (*) All threads in this proc. */
>  	TAILQ_ENTRY(thread) td_runq;	/* (t) Run queue. */
>
>
> On Thu, Sep 12, 2013 at 4:21 PM, Alfred Perlstein wrote:
>
>> Both these explanations are so great. Is there any way we can add this to
>> proc.h, or maybe document it somewhere and then link to it from proc.h?
>>
>> Sent from my iPhone
>>
>> On Sep 12, 2013, at 5:24 AM, John Baldwin wrote:
>>
>>> On Thursday, September 12, 2013 7:16:20 am Dheeraj Kandula wrote:
>>>> Thanks a lot Svatopluk for the clarification. Right after I replied to
>>>> Alfred's mail, I realized that it can't be a thread-specific lock, as
>>>> it should also protect the scheduler variables. So if I understand it
>>>> right, even though it is a mutex, it can be unlocked by another thread,
>>>> which is usually not the case with regular mutexes, as the thread that
>>>> locks one must unlock it, unlike a binary semaphore. Isn't it?
>>> It's less complicated than that.
>>> :) It is a mutex, but to expand on what Svatopluk said with an example:
>>> take a thread that is asleep on a sleep queue. td_lock points to the
>>> relevant SC_LOCK() for the sleep queue chain in that case, so any other
>>> thread that wants to examine that thread's state ends up locking the
>>> sleep queue while it examines that thread. In particular, the thread
>>> that is doing a wakeup() can resume all of the sleeping threads for a
>>> wait channel by holding the one SC_LOCK() for that wait channel since
>>> that will be td_lock for all those threads.
>>>
>>> In general mutexes are only unlocked by the thread that locks them,
>>> and the td_lock of the old thread is unlocked during sched_switch().
>>> However, the old thread has to grab td_lock of the new thread during
>>> sched_switch() and then hand it off to the new thread when it resumes.
>>> This is why sched_throw() and sched_switch() in ULE directly assign
>>> 'mtx_lock' of the run queue lock before calling cpu_throw() or
>>> cpu_switch(). That gives the effect that the new thread resumes while
>>> holding the lock pinted to by its td_lock.

^^ typo.. fix before commit

>>>
>>>> Dheeraj
>>>>
>>>>
>>>> On Thu, Sep 12, 2013 at 7:04 AM, Svatopluk Kraus wrote:
>>>>> Think about td_lock like something that is lent by the current thread
>>>>> owner. If a thread is running, it's owned by the scheduler and td_lock
>>>>> points to the scheduler lock. If a thread is sleeping, it's owned by
>>>>> the sleep queue and td_lock points to the sleep queue lock. If a
>>>>> thread is contested, it's owned by the turnstile queue and td_lock
>>>>> points to the turnstile queue lock. And so on. This way an owner can
>>>>> work with the threads it owns safely without a giant lock. The td_lock
>>>>> pointer is changed atomically, so it's safe.
>>>>>
>>>>> Svatopluk Kraus
>>>>>
>>>>> On Thu, Sep 12, 2013 at 12:48 PM, Dheeraj Kandula wrote:
>>>>>> Thanks a lot Alfred for the clarification. So is td_lock granular, i.e.
>>>>>> one separate lock for each thread (but also used for protecting the
>>>>>> scheduler variables), or is it just one lock used by all threads and
>>>>>> the scheduler as well? I will anyway go through the code that you
>>>>>> suggested, but I just wanted to have a deeper understanding before I
>>>>>> go about hunting in the code.
>>>>>>
>>>>>> Dheeraj
>>>>>>
>>>>>>
>>>>>> On Thu, Sep 12, 2013 at 3:10 AM, Alfred Perlstein wrote:
>>>>>>> On 9/11/13 2:39 PM, Dheeraj Kandula wrote:
>>>>>>>
>>>>>>>> Hey All,
>>>>>>>>
>>>>>>>> When the current thread is being context switched with a newly
>>>>>>>> selected thread, why is the current thread's lock acquired before
>>>>>>>> the context switch, i.e. why is mi_switch() invoked only after
>>>>>>>> thread_lock(td) is called? A thread at any time runs on only one of
>>>>>>>> the cores of a CPU. Hence, when it is being context switched, it is
>>>>>>>> added to the real-time runq, the timeshare runq, or the idle runq
>>>>>>>> with the lock still held, or it is added to the sleep queue or the
>>>>>>>> blocked queue. So this happens atomically even without the lock,
>>>>>>>> doesn't it? Am I missing something here? I don't see any contention
>>>>>>>> for the thread that would demand a lock to protect the contents of
>>>>>>>> its thread structure.
>>>>>>>>
>>>>>>>> Dheeraj
>>>>>>> The thread lock also happens to protect various scheduler variables:
>>>>>>>
>>>>>>>     struct mtx *volatile td_lock; /* replaces sched lock */
>>>>>>>
>>>>>>> See sys/kern/sched_ule.c for how the thread lock td_lock is changed
>>>>>>> depending on what the thread is doing.
>>>>>>>
>>>>>>> --
>>>>>>> Alfred Perlstein
>>>>>> _______________________________________________
>>>>>> freebsd-arch@freebsd.org mailing list
>>>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-arch
>>>>>> To unsubscribe, send any mail to "freebsd-arch-unsubscribe@freebsd.org"
>>> --
>>> John Baldwin