Date:      Fri, 30 Jul 2010 12:44:13 +0300
From:      Kostik Belousov <kostikbel@gmail.com>
To:        mdf@freebsd.org
Cc:        freebsd-hackers@freebsd.org
Subject:   Re: sched_pin() versus PCPU_GET
Message-ID:  <20100730094413.GJ22295@deviant.kiev.zoral.com.ua>
In-Reply-To: <AANLkTimGjNATWmuGqTDMFQ0r3gHnsv0Bc69pBb6QYO9L@mail.gmail.com>
References:  <AANLkTikY20TxyeyqO5zP3zC-azb748kV-MdevPfm+8cq@mail.gmail.com> <AANLkTimGjNATWmuGqTDMFQ0r3gHnsv0Bc69pBb6QYO9L@mail.gmail.com>

On Thu, Jul 29, 2010 at 04:57:25PM -0700, mdf@freebsd.org wrote:
> On Thu, Jul 29, 2010 at 4:39 PM,  <mdf@freebsd.org> wrote:
> > We've seen a few instances at work where witness_warn() in ast()
> > indicates the sched lock is still held, but the place where it claims
> > the lock was acquired is sometimes one where it is not possible to
> > still hold the lock, like:
> >
> >         thread_lock(td);
> >         td->td_flags &= ~TDF_SELECT;
> >         thread_unlock(td);
> >
> > What I was wondering is, even though the assembly I see in objdump -S
> > for witness_warn has the increment of td_pinned before the PCPU_GET:
> >
> > ffffffff802db210:       65 48 8b 1c 25 00 00    mov    %gs:0x0,%rbx
> > ffffffff802db217:       00 00
> > ffffffff802db219:       ff 83 04 01 00 00       incl   0x104(%rbx)
> >          * Pin the thread in order to avoid problems with thread migration.
> >          * Once that all verifies are passed about spinlocks ownership,
> >          * the thread is in a safe path and it can be unpinned.
> >          */
> >         sched_pin();
> >         lock_list = PCPU_GET(spinlocks);
> > ffffffff802db21f:       65 48 8b 04 25 48 00    mov    %gs:0x48,%rax
> > ffffffff802db226:       00 00
> >         if (lock_list != NULL && lock_list->ll_count != 0) {
> > ffffffff802db228:       48 85 c0                test   %rax,%rax
> >          * Pin the thread in order to avoid problems with thread migration.
> >          * Once that all verifies are passed about spinlocks ownership,
> >          * the thread is in a safe path and it can be unpinned.
> >          */
> >         sched_pin();
> >         lock_list = PCPU_GET(spinlocks);
> > ffffffff802db22b:       48 89 85 f0 fe ff ff    mov    %rax,-0x110(%rbp)
> > ffffffff802db232:       48 89 85 f8 fe ff ff    mov    %rax,-0x108(%rbp)
> >         if (lock_list != NULL && lock_list->ll_count != 0) {
> > ffffffff802db239:       0f 84 ff 00 00 00       je     ffffffff802db33e <witness_warn+0x30e>
> > ffffffff802db23f:       44 8b 60 50             mov    0x50(%rax),%r12d
> >
> > is it possible for the hardware to do any re-ordering here?
> >
> > The reason I'm suspicious is not just that the code doesn't have a
> > lock leak at the indicated point, but in one instance I can see in the
> > dump that the lock_list local from witness_warn is from the pcpu
> > structure for CPU 0 (and I was warned about sched lock 0), but the
> > thread id in panic_cpu is 2.  So clearly the thread was being migrated
> > right around panic time.
> >
> > This is the amd64 kernel on stable/7.  I'm not sure exactly what kind
> > of hardware; it's a 4-way Intel chip from about 3 or 4 years ago IIRC.
> >
> > So... do we need some kind of barrier in the code for sched_pin() for
> > it to really do what it claims?  Could the hardware have re-ordered
> > the "mov    %gs:0x48,%rax" PCPU_GET to before the sched_pin()
> > increment?
>
> So after some research, the answer I'm getting is "maybe".  What I'm
> concerned about is whether the h/w reordered the read of PCPU_GET in
> front of the previous store to increment td_pinned.  While not an
> ultimate authority,
> http://en.wikipedia.org/wiki/Memory_ordering#In_SMP_microprocessor_systems
> implies that stores can be reordered after loads on both Intel and
> amd64 chips, which I believe would account for the behavior seen here.
>

Am I right that you suggest that in the sequence
	mov	%gs:0x0,%rbx      [1]
	incl	0x104(%rbx)       [2]
	mov	%gs:0x48,%rax     [3]
an interrupt and preemption happen between points [2] and [3], and that
the %rax value, after the thread was put back onto a (different) new
CPU and executed [3], was still from the old CPU's pcpu area?
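
Note that sched_pin() itself contains no barrier of any kind; paraphrasing
the stable/7 sys/sys/sched.h inlines (a sketch, not the verbatim source):

	/*
	 * curthread is the per-CPU current-thread pointer (the
	 * "mov %gs:0x0,%rbx" above); td_pinned is the plain counter
	 * behind the "incl 0x104(%rbx)".  No fence on either side.
	 */
	static __inline void
	sched_pin(void)
	{

		curthread->td_pinned++;
	}

	static __inline void
	sched_unpin(void)
	{

		curthread->td_pinned--;
	}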

I do not believe this scenario is possible. A CPU is always
self-consistent. A context switch away from the thread can only occur
on return from an interrupt handler, in critical_exit() or the like.
That code executes on the same processor and thus already sees the
effect of [2], which would prevent the thread from being migrated.
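
What enforces that is the migration check on the scheduler side. A
hypothetical condensation of it (the real logic is spread through
sched_switch() and the CPU-selection code in sys/kern/sched_ule.c; the
struct fields shown are real, but the function and the
pick_least_loaded_cpu() helper are invented stand-ins):

	/* Illustrative stand-ins for the kernel types involved. */
	struct thread {
		int	td_pinned;	/* nonzero: thread must not migrate */
		int	td_lastcpu;	/* CPU the thread last ran on */
	};

	static int pick_least_loaded_cpu(void);	/* hypothetical balancer */

	/*
	 * The switch code runs on the CPU that executed [2], so it
	 * sees td_pinned != 0 and keeps the thread where it is;
	 * [3] therefore runs on the same CPU.
	 */
	static int
	sched_pickcpu_sketch(struct thread *td)
	{

		if (td->td_pinned != 0)
			return (td->td_lastcpu);	/* pinned: no migration */
		return (pick_least_loaded_cpu());
	}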

If an interrupt happens between [1] and [2], the context-saving code
still sees a consistent view of the register file state, regardless
of the processor having issued a speculative read of *%gs:0x48. The
return from the interrupt is a serialization point due to iret,
causing the read in [3] to be reissued.
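
For completeness: the store-to-load reordering cited from the Wikipedia
page is real, but it is only visible to another CPU; a CPU always
observes its own stores in program order. A userland illustration with
C11 atomics (a sketch; the r1 == r2 == 0 outcome is timing-dependent
and may take many runs to show up):

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	/*
	 * Classic "store buffer" litmus test: x86 may satisfy each
	 * load before the other CPU sees the preceding store, so
	 * r1 == r2 == 0 is a legal outcome.
	 */
	static atomic_int x, y;
	static int r1, r2;

	static void *
	t0(void *arg)
	{

		atomic_store_explicit(&x, 1, memory_order_relaxed);
		r1 = atomic_load_explicit(&y, memory_order_relaxed);
		return (NULL);
	}

	static void *
	t1(void *arg)
	{

		atomic_store_explicit(&y, 1, memory_order_relaxed);
		r2 = atomic_load_explicit(&x, memory_order_relaxed);
		return (NULL);
	}

	int
	main(void)
	{
		pthread_t a, b;

		for (int i = 0; i < 100000; i++) {
			atomic_store(&x, 0);
			atomic_store(&y, 0);
			pthread_create(&a, NULL, t0, NULL);
			pthread_create(&b, NULL, t1, NULL);
			pthread_join(a, NULL);
			pthread_join(b, NULL);
			if (r1 == 0 && r2 == 0) {
				printf("store/load reordering seen, iteration %d\n", i);
				return (0);
			}
		}
		printf("no reordering observed this run\n");
		return (0);
	}

Build with cc -std=c11 -pthread; on amd64 the relaxed stores and loads
compile to plain mov instructions, the same situation as in the
sched_pin() path above.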

