Date:      Mon, 10 Mar 2008 11:17:12 -1000 (HST)
From:      Jeff Roberson <jroberson@chesapeake.net>
To:        Julian Elischer <julian@ironport.com>
Cc:        Stephan Uphoff <ups@freebsd.org>, FreeBSD Current <current@freebsd.org>
Subject:   Re: critical_exit()
Message-ID:  <20080310111256.N1091@desktop>
In-Reply-To: <47D5727A.7000504@ironport.com>
References:  <47D4D534.9050902@ironport.com> <47D543C5.9050008@freebsd.org> <47D5727A.7000504@ironport.com>

On Mon, 10 Mar 2008, Julian Elischer wrote:

> Stephan Uphoff wrote:
>> Julian Elischer wrote:
>>> 
>>> Why would the following:
>>> void
>>> critical_exit(void)
>>> {
>>>        struct thread *td;
>>>
>>>        td = curthread;
>>>        KASSERT(td->td_critnest != 0,
>>>            ("critical_exit: td_critnest == 0"));
>>>
>>>        if (td->td_critnest == 1) {
>>>                td->td_critnest = 0;
>>>                if (td->td_owepreempt) {
>>>                        td->td_critnest = 1;
>>>                        thread_lock(td);
>>>                        td->td_critnest--;
>>>                        SCHED_STAT_INC(switch_owepreempt);
>>>                        mi_switch(SW_INVOL|SW_PREEMPT, NULL);
>>>                        thread_unlock(td);
>>>                }
>>>        } else
>>>                td->td_critnest--;
>>>
>>>        CTR4(KTR_CRITICAL, "critical_exit by thread %p (%ld, %s) to %d", 
>>> td,
>>>            (long)td->td_proc->p_pid, td->td_name, td->td_critnest);
>>> }
>>> 
>>> 
>>> not be expressed:
>>> 
>>> void
>>> critical_exit(void)
>>> {
>>>        struct thread *td;
>>>
>>>        td = curthread;
>>>        KASSERT(td->td_critnest != 0,
>>>            ("critical_exit: td_critnest == 0"));
>>>
>>>        if (td->td_critnest == 1) {
>>>                if (td->td_owepreempt) {
>>>                        thread_lock(td);
>>>                        td->td_critnest = 0;
>>>                        SCHED_STAT_INC(switch_owepreempt);
>>>                        mi_switch(SW_INVOL|SW_PREEMPT, NULL);
>>>                        thread_unlock(td);
>>>                } else {
>> XXXXX If preemption happens here td_owepreempt will be set to preempt the 
>> current thread
>> XXXXX since td_critnest != 0 . However  td_owepreempt is not checked again 
>> so we will not
>> XXXXX preempt on td_critnest =  0;
>
> jeff's comment was that it could be expressed as:
>
> if (--(td->td_critnest) == 0) {
>             if (td->td_owepreempt) {
>                      thread_lock(td);
>                      td->td_critnest = 0;
>                      SCHED_STAT_INC(switch_owepreempt);
>                      mi_switch(SW_INVOL|SW_PREEMPT, NULL);
>                      thread_unlock(td);
>             }
> }

if (--(td->td_critnest) == 0) {
        if (td->td_owepreempt) {
                thread_lock(td);
                if (td->td_owepreempt) {
                        SCHED_STAT_INC(switch_owepreempt);
                        mi_switch(SW_INVOL|SW_PREEMPT, NULL);
                }
                thread_unlock(td);
        }
}

Wouldn't that do just fine?  If a preemption occurred before you disabled 
interrupts in thread_lock, you'd skip the switch after acquiring it.

I don't see a way that we could miss owepreempt with the above code.  I'd 
also like to make critical_enter/critical_exit inlines, with a 
_critical_exit() that does the switch.  With thread_lock we now do a lot 
more nested spinlocking, etc.  All of these non-contiguous instruction 
pointers add up.

>
> This has the same race... but as you say, it probably doesn't matter.
> In fact the race is probably required to ensure that pre-emption does 
> occur one way or another.
>
>
>
>
>>>                        td->td_critnest = 0;
>>>                }
>>>        } else
>>>                td->td_critnest--;
>>>
>>>        CTR4(KTR_CRITICAL, "critical_exit by thread %p (%ld, %s) to %d", 
>>> td,
>>>            (long)td->td_proc->p_pid, td->td_name, td->td_critnest);
>>> }
>>> 
>>> It seems to me there is a race in the current version, where the
>>> critical count is temporarily 0, where the thread could be pre-empted
>>> when it shouldn't be..
>> 
>> Yes - there is a race where the thread could be preempted twice.
>> However this is fairly harmless in comparison to not being preempted at 
>> all.
>> This being said it may be worthwhile to see if that race can be fixed  now 
>> after
>> the thread lock changes.
>> 
>>> 
>>> (prompted by a comment by jeffr that made me go look at this code)..
>>> 
>> 
>> Stephan
>
>


