Date:      Sun, 8 Aug 2010 16:43:52 +0200
From:      Attilio Rao <attilio@freebsd.org>
To:        mdf@freebsd.org
Cc:        freebsd-hackers@freebsd.org
Subject:   Re: sched_pin() versus PCPU_GET
Message-ID:  <AANLkTimHfXm3ap-MAMiQZ4i5F0XWdHKLqBz5RPUWoyt9@mail.gmail.com>
In-Reply-To: <AANLkTinp7278ZD1L8s616seQET=OQBx1RZ4eHx=e+pD5@mail.gmail.com>
References:  <AANLkTikY20TxyeyqO5zP3zC-azb748kV-MdevPfm+8cq@mail.gmail.com> <201007301008.22501.jhb@freebsd.org> <201007301031.34266.jhb@freebsd.org> <AANLkTinp7278ZD1L8s616seQET=OQBx1RZ4eHx=e+pD5@mail.gmail.com>

2010/8/4  <mdf@freebsd.org>:
> On Fri, Jul 30, 2010 at 2:31 PM, John Baldwin <jhb@freebsd.org> wrote:
>> On Friday, July 30, 2010 10:08:22 am John Baldwin wrote:
>>> On Thursday, July 29, 2010 7:39:02 pm mdf@freebsd.org wrote:
>>> > We've seen a few instances at work where witness_warn() in ast()
>>> > indicates the sched lock is still held, but the place it claims it was
>>> > held by is in fact sometimes not possible to keep the lock, like:
>>> >
>>> >     thread_lock(td);
>>> >     td->td_flags &= ~TDF_SELECT;
>>> >     thread_unlock(td);
>>> >
>>> > What I was wondering is, even though the assembly I see in objdump -S
>>> > for witness_warn has the increment of td_pinned before the PCPU_GET:
>>> >
>>> > ffffffff802db210:   65 48 8b 1c 25 00 00    mov    %gs:0x0,%rbx
>>> > ffffffff802db217:   00 00
>>> > ffffffff802db219:   ff 83 04 01 00 00       incl   0x104(%rbx)
>>> >      * Pin the thread in order to avoid problems with thread migration.
>>> >      * Once that all verifies are passed about spinlocks ownership,
>>> >      * the thread is in a safe path and it can be unpinned.
>>> >      */
>>> >     sched_pin();
>>> >     lock_list = PCPU_GET(spinlocks);
>>> > ffffffff802db21f:   65 48 8b 04 25 48 00    mov    %gs:0x48,%rax
>>> > ffffffff802db226:   00 00
>>> >     if (lock_list != NULL && lock_list->ll_count != 0) {
>>> > ffffffff802db228:   48 85 c0                test   %rax,%rax
>>> >      * Pin the thread in order to avoid problems with thread migration.
>>> >      * Once that all verifies are passed about spinlocks ownership,
>>> >      * the thread is in a safe path and it can be unpinned.
>>> >      */
>>> >     sched_pin();
>>> >     lock_list = PCPU_GET(spinlocks);
>>> > ffffffff802db22b:   48 89 85 f0 fe ff ff    mov    %rax,-0x110(%rbp)
>>> > ffffffff802db232:   48 89 85 f8 fe ff ff    mov    %rax,-0x108(%rbp)
>>> >     if (lock_list != NULL && lock_list->ll_count != 0) {
>>> > ffffffff802db239:   0f 84 ff 00 00 00       je     ffffffff802db33e <witness_warn+0x30e>
>>> > ffffffff802db23f:   44 8b 60 50             mov    0x50(%rax),%r12d
>>> >
>>> > is it possible for the hardware to do any re-ordering here?
>>> >
>>> > The reason I'm suspicious is not just that the code doesn't have a
>>> > lock leak at the indicated point, but in one instance I can see in the
>>> > dump that the lock_list local from witness_warn is from the pcpu
>>> > structure for CPU 0 (and I was warned about sched lock 0), but the
>>> > thread id in panic_cpu is 2.  So clearly the thread was being migrated
>>> > right around panic time.
>>> >
>>> > This is the amd64 kernel on stable/7.  I'm not sure exactly what kind
>>> > of hardware; it's a 4-way Intel chip from about 3 or 4 years ago IIRC.
>>> >
>>> > So... do we need some kind of barrier in the code for sched_pin() for
>>> > it to really do what it claims?  Could the hardware have re-ordered
>>> > the "mov    %gs:0x48,%rax" PCPU_GET to before the sched_pin()
>>> > increment?
>>>
>>> Hmmm, I think it might be able to because they refer to different locations.
>>>
>>> Note this rule in section 8.2.2 of Volume 3A:
>>>
>>>   • Reads may be reordered with older writes to different locations but not
>>>     with older writes to the same location.
>>>
>>> It is certainly true that sparc64 could reorder with RMO.  I believe ia64
>>> could reorder as well.  Since sched_pin/unpin are frequently used to provide
>>> this sort of synchronization, we could use memory barriers in pin/unpin
>>> like so:
>>>
>>> sched_pin()
>>> {
>>>       td->td_pinned = atomic_load_acq_int(&td->td_pinned) + 1;
>>> }
>>>
>>> sched_unpin()
>>> {
>>>       atomic_store_rel_int(&td->td_pinned, td->td_pinned - 1);
>>> }
>>>
>>> We could also just use atomic_add_acq_int() and atomic_sub_rel_int(), but they
>>> are slightly more heavyweight, though it would be more clear what is happening
>>> I think.
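
For reference, a rough sketch of that heavier variant using the atomic(9)
names (I read atomic_sub_rel_int above as atomic_subtract_rel_int).  This is
only a sketch of the idea, not committed code, and it glosses over the
int/u_int cast the same way the snippet above does:

sched_pin()
{
        /* Acquire: later loads (e.g. PCPU_GET(spinlocks)) cannot be
         * satisfied before the increment is performed. */
        atomic_add_acq_int(&td->td_pinned, 1);
}

sched_unpin()
{
        /* Release: earlier memory accesses complete before the
         * decrement becomes visible. */
        atomic_subtract_rel_int(&td->td_pinned, 1);
}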
>>
>> However, to actually get a race you'd have to have an interrupt fire and
>> migrate you so that the speculative read was from the other CPU.  However, I
>> don't think the speculative read would be preserved in that case.  The CPU
>> has to return to a specific PC when it returns from the interrupt and it has
>> no way of storing the state for what speculative reordering it might be
>> doing, so presumably it is thrown away?  I suppose it is possible that it
>> actually retires both instructions (but reordered) and then returns to the PC
>> value after the read of listlocks after the interrupt.  However, in that case
>> the scheduler would not migrate as it would see td_pinned != 0.  To get the
>> race you have to have the interrupt take effect prior to modifying td_pinned,
>> so I think the processor would have to discard the reordered read of
>> listlocks so it could safely resume execution at the 'incl' instruction.
>>
>> The other nit there on x86 at least is that the incl instruction is doing
>> both a read and a write and another rule in the section 8.2.2 is this:
>>
>>  • Reads are not reordered with other reads.
>>
>> That would seem to prevent the read of listlocks from passing the read of
>> td_pinned in the incl instruction on x86.
>
> I wonder how that's interpreted in the microcode, though?  I.e. if the
> incl instruction decodes to load, add, store, does the h/w allow the
> later reads to pass the final store?
>
> I added the following:
>
>        sched_pin();
>        lock_list = PCPU_GET(spinlocks);
>        if (lock_list != NULL && lock_list->ll_count != 0) {
> +              /* XXX debug for bug 67957 */
> +              mfence();
> +              lle = PCPU_GET(spinlocks);
> +              if (lle != lock_list) {
> +                      panic("Bug 67957: had lock list %p, now %p\n",
> +                          lock_list, lle);
> +              }
> +              /* XXX end debug */
>                sched_unpin();
>
>                /*
>
> ... and the panic triggered.  I think it's more likely that some
> barrier is needed in sched_pin() than that %gs is getting corrupted
> but can always be dereferenced.
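
If it really is the PCPU_GET() load passing the increment, the obvious shape
for a fix would be a barrier inside sched_pin() itself.  A purely hypothetical
amd64-only sketch (mfence is likely stronger than strictly needed, and nothing
like this has been committed):

static __inline void
sched_pin(void)
{

        curthread->td_pinned++;
        /* Compiler barrier plus a full fence, so the later
         * PCPU_GET(spinlocks) load cannot pass the increment. */
        __asm __volatile("mfence" : : : "memory");
}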

Are the 2 values just different, or is one of them NULL?

Thanks,
Attilio


-- 
Peace can only be achieved by understanding - A. Einstein


