From: John Baldwin <jhb@freebsd.org>
To: mdf@freebsd.org
Cc: freebsd-hackers@freebsd.org
Subject: Re: sched_pin() versus PCPU_GET
Date: Wed, 4 Aug 2010 10:26:17 -0400
Message-Id: <201008041026.17553.jhb@freebsd.org>

On Tuesday, August 03, 2010 9:46:16 pm mdf@freebsd.org wrote:
> On Fri, Jul 30, 2010 at 2:31 PM, John Baldwin wrote:
> > On Friday, July 30, 2010 10:08:22 am John Baldwin wrote:
> >> On Thursday, July 29, 2010 7:39:02 pm mdf@freebsd.org wrote:
> >> > We've seen a few instances at work where witness_warn() in ast()
> >> > indicates the sched lock is still held, but the place where it claims
> >> > the lock was acquired is sometimes one where it is impossible to
> >> > still hold the lock, like:
> >> >
> >> >     thread_lock(td);
> >> >     td->td_flags &= ~TDF_SELECT;
> >> >     thread_unlock(td);
> >> >
> >> > What I was wondering is, even though the assembly I see in objdump -S
> >> > for witness_warn has the increment of td_pinned before the PCPU_GET:
> >> >
> >> > ffffffff802db210:  65 48 8b 1c 25 00 00  mov    %gs:0x0,%rbx
> >> > ffffffff802db217:  00 00
> >> > ffffffff802db219:  ff 83 04 01 00 00     incl   0x104(%rbx)
> >> >          * Pin the thread in order to avoid problems with thread migration.
> >> >          * Once that all verifies are passed about spinlocks ownership,
> >> >          * the thread is in a safe path and it can be unpinned.
> >> >          */
> >> >         sched_pin();
> >> >         lock_list = PCPU_GET(spinlocks);
> >> > ffffffff802db21f:  65 48 8b 04 25 48 00  mov    %gs:0x48,%rax
> >> > ffffffff802db226:  00 00
> >> >         if (lock_list != NULL && lock_list->ll_count != 0) {
> >> > ffffffff802db228:  48 85 c0              test   %rax,%rax
> >> >          * Pin the thread in order to avoid problems with thread migration.
> >> >          * Once that all verifies are passed about spinlocks ownership,
> >> >          * the thread is in a safe path and it can be unpinned.
> >> >          */
> >> >         sched_pin();
> >> >         lock_list = PCPU_GET(spinlocks);
> >> > ffffffff802db22b:  48 89 85 f0 fe ff ff  mov    %rax,-0x110(%rbp)
> >> > ffffffff802db232:  48 89 85 f8 fe ff ff  mov    %rax,-0x108(%rbp)
> >> >         if (lock_list != NULL && lock_list->ll_count != 0) {
> >> > ffffffff802db239:  0f 84 ff 00 00 00     je     ffffffff802db33e
> >> > ffffffff802db23f:  44 8b 60 50           mov    0x50(%rax),%r12d
> >> >
> >> > is it possible for the hardware to do any re-ordering here?
> >> >
> >> > The reason I'm suspicious is not just that the code doesn't have a
> >> > lock leak at the indicated point, but that in one instance I can see
> >> > in the dump that the lock_list local in witness_warn came from the
> >> > pcpu structure for CPU 0 (and I was warned about sched lock 0), while
> >> > the cpu id in panic_cpu is 2.  So clearly the thread was being
> >> > migrated right around panic time.
> >> >
> >> > This is the amd64 kernel on stable/7.  I'm not sure exactly what kind
> >> > of hardware; it's a 4-way Intel chip from about 3 or 4 years ago, IIRC.
> >> >
> >> > So... do we need some kind of barrier in sched_pin() for it to really
> >> > do what it claims?  Could the hardware have re-ordered the
> >> > "mov %gs:0x48,%rax" PCPU_GET to before the sched_pin() increment?
> >>
> >> Hmmm, I think it might be able to, because they refer to different
> >> locations.
> >>
> >> Note this rule in section 8.2.2 of Volume 3A:
> >>
> >>   • Reads may be reordered with older writes to different locations but
> >>     not with older writes to the same location.
> >>
> >> It is certainly true that sparc64 could reorder with RMO.  I believe
> >> ia64 could reorder as well.  Since sched_pin/unpin are frequently used
> >> to provide this sort of synchronization, we could use memory barriers
> >> in pin/unpin like so:
> >>
> >> sched_pin()
> >> {
> >>         td->td_pinned = atomic_load_acq_int(&td->td_pinned) + 1;
> >> }
> >>
> >> sched_unpin()
> >> {
> >>         atomic_store_rel_int(&td->td_pinned, td->td_pinned - 1);
> >> }
> >>
> >> We could also just use atomic_add_acq_int() and atomic_sub_rel_int(),
> >> but they are slightly more heavyweight, though it would be clearer what
> >> is happening, I think.
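
(For context: as things stand today, sched_pin() and sched_unpin() are
nothing more than plain increments and decrements of td_pinned, with no
ordering implied at all.  Roughly the following; I am paraphrasing the
sys/sched.h inlines from memory, so treat this as a sketch rather than
verbatim source:)

static __inline void
sched_pin(void)
{
        /* Plain increment: no compiler or CPU memory ordering implied. */
        curthread->td_pinned++;
}

static __inline void
sched_unpin(void)
{
        curthread->td_pinned--;
}
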
> > However, to actually get a race you'd have to have an interrupt fire and
> > migrate you so that the speculative read was from the other CPU.
> > However, I don't think the speculative read would be preserved in that
> > case.  The CPU has to return to a specific PC when it returns from the
> > interrupt, and it has no way of storing the state for whatever
> > speculative reordering it might have been doing, so presumably that is
> > thrown away?  I suppose it is possible that it actually retires both
> > instructions (but reordered) and then returns to the PC value after the
> > read of lock_list after the interrupt.  However, in that case the
> > scheduler would not migrate the thread, as it would see td_pinned != 0.
> > To get the race you have to have the interrupt take effect prior to the
> > modification of td_pinned, so I think the processor would have to
> > discard the reordered read of lock_list so it could safely resume
> > execution at the 'incl' instruction.
> >
> > The other nit there, on x86 at least, is that the incl instruction does
> > both a read and a write, and another rule in section 8.2.2 is this:
> >
> >   • Reads are not reordered with other reads.
> >
> > That would seem to prevent the read of lock_list from passing the read
> > of td_pinned in the incl instruction on x86.

> I wonder how that's interpreted in the microcode, though?  I.e. if the
> incl instruction decodes to load, add, store, does the h/w allow the
> later reads to pass the final store?

Well, the architecture is defined in terms of the ISA, not the microcode,
per se, so I think it would have to treat the read for the incl as being
an earlier read than the read of 'spinlocks'.
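
The suspected reordering is the classic "store buffer" case: a load being
satisfied before an older store to a different location becomes globally
visible, which is essentially the only reordering the x86 model permits
for ordinary write-back memory.  The userland sketch below (my own
illustration, not kernel code; it assumes C11 <stdatomic.h> and POSIX
barriers, and builds with something like "cc -O2 sb.c -lpthread") will
occasionally observe r0 == r1 == 0 on x86, which is only possible if each
CPU's load passed its own older store.  Uncommenting the fences (an mfence
on x86) makes that outcome disappear, which is exactly the effect a
barrier in sched_pin() would be after:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define ROUNDS  1000000

static atomic_int x, y;
static int r0, r1;
static pthread_barrier_t bar;

static void *
thread_a(void *arg)
{
        for (int i = 0; i < ROUNDS; i++) {
                pthread_barrier_wait(&bar);     /* round start */
                atomic_store_explicit(&x, 1, memory_order_relaxed);
                /* atomic_thread_fence(memory_order_seq_cst); <- mfence */
                r0 = atomic_load_explicit(&y, memory_order_relaxed);
                pthread_barrier_wait(&bar);     /* round end */
                pthread_barrier_wait(&bar);     /* wait for check/reset */
        }
        return (NULL);
}

static void *
thread_b(void *arg)
{
        int seen = 0;

        for (int i = 0; i < ROUNDS; i++) {
                pthread_barrier_wait(&bar);
                atomic_store_explicit(&y, 1, memory_order_relaxed);
                /* atomic_thread_fence(memory_order_seq_cst); <- mfence */
                r1 = atomic_load_explicit(&x, memory_order_relaxed);
                pthread_barrier_wait(&bar);
                /*
                 * r0 == r1 == 0 means both loads passed both older
                 * stores; forbidden if loads could not be reordered
                 * with older stores to different locations.
                 */
                if (r0 == 0 && r1 == 0)
                        seen++;
                atomic_store_explicit(&x, 0, memory_order_relaxed);
                atomic_store_explicit(&y, 0, memory_order_relaxed);
                pthread_barrier_wait(&bar);
        }
        printf("r0 == r1 == 0 seen %d times out of %d\n", seen, ROUNDS);
        return (NULL);
}

int
main(void)
{
        pthread_t a, b;

        pthread_barrier_init(&bar, NULL, 2);
        pthread_create(&a, NULL, thread_a, NULL);
        pthread_create(&b, NULL, thread_b, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return (0);
}
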
> I added the following:
>
>         sched_pin();
>         lock_list = PCPU_GET(spinlocks);
>         if (lock_list != NULL && lock_list->ll_count != 0) {
> +               /* XXX debug for bug 67957 */
> +               mfence();
> +               lle = PCPU_GET(spinlocks);
> +               if (lle != lock_list) {
> +                       panic("Bug 67957: had lock list %p, now %p\n",
> +                           lock_list, lle);
> +               }
> +               /* XXX end debug */
>                 sched_unpin();
>
>                 /*
>
> ... and the panic triggered.  I think it's more likely that some
> barrier is needed in sched_pin() than that %gs is getting corrupted
> but can always be dereferenced.

Actually, I would beg to differ in that case.  If PCPU_GET(spinlocks)
returns non-NULL, then it means that you hold a spin lock, which means
that interrupts are disabled and have been disabled for "a while" (from
when the spin lock was acquired, prior to trying to acquire the lock that
you hold now).  I think that means that the only way you can have a
problem is if you get an NMI.  Do you have custom logic in your NMI
handler?  Perhaps it isn't handling swapgs correctly (either calling it
when it shouldn't, or not calling it when it should).  I know that
Joseph Koshy spent a good bit of time getting the %gs handling in the
NMI handler correct to handle NMIs firing at "bad" times (such as during
the very end of a syscall return).

It is worth noting that when the spin lock was acquired "ages" ago, it
did a critical_enter(), which would have forbidden any context switches,
so the thread would not have migrated to another CPU.  I think it is
almost certainly a corrupt %gs.

> An mfence() at the end of sched_pin() would be sufficient, but it
> seems like overkill, since all we really need is to prevent instruction
> re-ordering.  As I said above, on PowerPC this would be isync; what is
> the equivalent on x86?  I can try it out and see if this panic goes
> away.

mfence() is the closest thing x86 has.  It does not have an equivalent
to isync, as x86 is still defined to complete instructions in "program
order".  It does not reorder instructions, merely the scheduling of
memory transactions, from what I can tell.  Mostly I think all this
exists to let the CPU hold writes in its store buffer, but a CPU will
snoop its own store buffer to satisfy reads IIRC, so by and large a CPU
is consistent with itself.

-- 
John Baldwin