Date:        Fri, 30 Jul 2010 06:44:00 -0700
From:        mdf@FreeBSD.org
To:          Kostik Belousov <kostikbel@gmail.com>
Cc:          freebsd-hackers@freebsd.org
Subject:     Re: sched_pin() versus PCPU_GET
Message-ID:  <AANLkTi=PFxARt8Jw0fq09gWEzZgAeeQxRyrBHKYa2PXq@mail.gmail.com>
In-Reply-To: <20100730094413.GJ22295@deviant.kiev.zoral.com.ua>
References:  <AANLkTikY20TxyeyqO5zP3zC-azb748kV-MdevPfm+8cq@mail.gmail.com>
             <AANLkTimGjNATWmuGqTDMFQ0r3gHnsv0Bc69pBb6QYO9L@mail.gmail.com>
             <20100730094413.GJ22295@deviant.kiev.zoral.com.ua>
2010/7/30 Kostik Belousov <kostikbel@gmail.com>:
> On Thu, Jul 29, 2010 at 04:57:25PM -0700, mdf@freebsd.org wrote:
>> On Thu, Jul 29, 2010 at 4:39 PM,  <mdf@freebsd.org> wrote:
>> > We've seen a few instances at work where witness_warn() in ast()
>> > indicates the sched lock is still held, but the place it claims it was
>> > held by is in fact sometimes not possible to keep the lock, like:
>> >
>> >         thread_lock(td);
>> >         td->td_flags &= ~TDF_SELECT;
>> >         thread_unlock(td);
>> >
>> > What I was wondering is, even though the assembly I see in objdump -S
>> > for witness_warn has the increment of td_pinned before the PCPU_GET:
>> >
>> > ffffffff802db210:       65 48 8b 1c 25 00 00    mov    %gs:0x0,%rbx
>> > ffffffff802db217:       00 00
>> > ffffffff802db219:       ff 83 04 01 00 00       incl   0x104(%rbx)
>> >          * Pin the thread in order to avoid problems with thread migration.
>> >          * Once that all verifies are passed about spinlocks ownership,
>> >          * the thread is in a safe path and it can be unpinned.
>> >          */
>> >         sched_pin();
>> >         lock_list = PCPU_GET(spinlocks);
>> > ffffffff802db21f:       65 48 8b 04 25 48 00    mov    %gs:0x48,%rax
>> > ffffffff802db226:       00 00
>> >         if (lock_list != NULL && lock_list->ll_count != 0) {
>> > ffffffff802db228:       48 85 c0                test   %rax,%rax
>> >          * Pin the thread in order to avoid problems with thread migration.
>> >          * Once that all verifies are passed about spinlocks ownership,
>> >          * the thread is in a safe path and it can be unpinned.
>> >          */
>> >         sched_pin();
>> >         lock_list = PCPU_GET(spinlocks);
>> > ffffffff802db22b:       48 89 85 f0 fe ff ff    mov    %rax,-0x110(%rbp)
>> > ffffffff802db232:       48 89 85 f8 fe ff ff    mov    %rax,-0x108(%rbp)
>> >         if (lock_list != NULL && lock_list->ll_count != 0) {
>> > ffffffff802db239:       0f 84 ff 00 00 00       je     ffffffff802db33e <witness_warn+0x30e>
>> > ffffffff802db23f:       44 8b 60 50             mov    0x50(%rax),%r12d
>> >
>> > is it possible for the hardware to do any re-ordering here?
>> >
>> > The reason I'm suspicious is not just that the code doesn't have a
>> > lock leak at the indicated point, but in one instance I can see in the
>> > dump that the lock_list local from witness_warn is from the pcpu
>> > structure for CPU 0 (and I was warned about sched lock 0), but the
>> > thread id in panic_cpu is 2.  So clearly the thread was being migrated
>> > right around panic time.
>> >
>> > This is the amd64 kernel on stable/7.  I'm not sure exactly what kind
>> > of hardware; it's a 4-way Intel chip from about 3 or 4 years ago IIRC.
>> >
>> > So... do we need some kind of barrier in the code for sched_pin() for
>> > it to really do what it claims?  Could the hardware have re-ordered
>> > the "mov    %gs:0x48,%rax" PCPU_GET to before the sched_pin()
>> > increment?
>>
>> So after some research, the answer I'm getting is "maybe".  What I'm
>> concerned about is whether the h/w reordered the read of PCPU_GET in
>> front of the previous store to increment td_pinned.  While not an
>> ultimate authority,
>> http://en.wikipedia.org/wiki/Memory_ordering#In_SMP_microprocessor_systems
>> implies that stores can be reordered after loads for both Intel and
>> amd64 chips, which would I believe account for the behavior seen here.
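For context, sched_pin() on FreeBSD of this vintage amounts to a plain
increment of curthread->td_pinned, with no compiler or CPU barrier of any
kind -- which is the lone "incl 0x104(%rbx)" visible in the dump above.
Below is a minimal userland sketch of that shape; struct thread, curthread
and main() here are illustrative stand-ins, not the real definitions from
sys/sched.h.

/*
 * Minimal sketch: sched_pin()/sched_unpin() as an unprotected
 * increment/decrement of td_pinned, with nothing ordering the
 * increment against a later per-CPU load.
 */
#include <stdio.h>

struct thread {
	int	td_pinned;	/* non-zero: do not migrate this thread */
};

static struct thread thread0;
static struct thread *curthread = &thread0;

static inline void
sched_pin(void)
{
	curthread->td_pinned++;	/* the "incl 0x104(%rbx)" in the dump */
}

static inline void
sched_unpin(void)
{
	curthread->td_pinned--;
}

int
main(void)
{
	sched_pin();
	/* ... a PCPU_GET(spinlocks)-style per-CPU load would go here ... */
	printf("td_pinned = %d\n", curthread->td_pinned);
	sched_unpin();
	return (0);
}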
> Am I right that you suggest that in the sequence
>         mov     %gs:0x0,%rbx      [1]
>         incl    0x104(%rbx)       [2]
>         mov     %gs:0x48,%rax     [3]
> interrupt and preemption happen between points [2] and [3] ?
> And the %rax value after the thread was put back onto the (different) new
> CPU and executed [3] was still from the old cpu' pcpu area ?

Right, but I'm also asking if it's possible the hardware executed the
instructions as:

        mov     %gs:0x0,%rbx      [1]
        mov     %gs:0x48,%rax     [3]
        incl    0x104(%rbx)       [2]

On PowerPC this is definitely possible and I'd use an isync to prevent
the re-ordering.  I haven't been able to confirm that Intel/AMD
present such a strict ordering that no barrier is needed.

It's admittedly a very tight window, and we've only seen it twice, but
I have no other way to explain the symptom.  Unfortunately in the dump
gdb shows both %rax and %gs as 0, so I can't confirm that they had a
value I'd expect from another CPU.  The only thing I do have is
panic_cpu being different than the CPU at the time of
PCPU_GET(spinlock), but of course there's definitely a window there.

> I do not believe this is possible. CPU is always self-consistent. Context
> switch from the thread can only occur on the return from interrupt
> handler, in critical_exit() or such. This code is executing on the
> same processor, and thus should already see the effect of [2], that
> would prevent context switch.

Right, but if the hardware allowed reads to pass writes, then %rax
would have an incorrect value which would be saved at interrupt time,
and restored on another processor.

I can add a few sanity asserts to try to prove this one way or
another and hope they don't mess with the timing; this has only shown
up when testing with a hugely multi-threaded CIFS server.

The only reason I'm hammering at OOO execution being the explanation
is that it seems like the only way to explain the symptoms... unless I
prefer to believe that PCPU_GET is completely busted, which seems less
likely.

Thanks,
matthew
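The relaxation being debated here (a later load completing before an
earlier store to a different location becomes visible) is the one
reordering x86/amd64 does permit, and it can be observed from userland
with the classic store-buffer litmus test.  The sketch below is
illustrative only, not from the original thread, and it exercises
cross-CPU visibility rather than the single-CPU interrupt window
discussed above.

/*
 * Store-buffer litmus test (illustrative).  With relaxed ordering, the
 * compiler and/or the CPU may let each thread's load complete before its
 * earlier store is globally visible, so both r0 and r1 can be seen as 0.
 * Build with a C11 libc (threads.h) and -pthread.
 */
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

static atomic_int x, y;
static int r0, r1;

static int
thread_a(void *arg)
{
	(void)arg;
	atomic_store_explicit(&x, 1, memory_order_relaxed);	/* store */
	r0 = atomic_load_explicit(&y, memory_order_relaxed);	/* later load */
	return (0);
}

static int
thread_b(void *arg)
{
	(void)arg;
	atomic_store_explicit(&y, 1, memory_order_relaxed);
	r1 = atomic_load_explicit(&x, memory_order_relaxed);
	return (0);
}

int
main(void)
{
	thrd_t a, b;
	int i, hits = 0;

	for (i = 0; i < 1000000; i++) {
		atomic_store(&x, 0);
		atomic_store(&y, 0);
		thrd_create(&a, thread_a, NULL);
		thrd_create(&b, thread_b, NULL);
		thrd_join(a, NULL);
		thrd_join(b, NULL);
		if (r0 == 0 && r1 == 0)
			hits++;		/* each load passed the other's store */
	}
	printf("store/load reordering observed %d times\n", hits);
	return (0);
}

Making all four accesses memory_order_seq_cst (which emits a locked
instruction or mfence on amd64) rules out the r0 == r1 == 0 outcome.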