From owner-freebsd-smp Thu Dec  7 21:24:11 2000
From owner-freebsd-smp@FreeBSD.ORG Thu Dec  7 21:24:09 2000
Return-Path: 
Delivered-To: freebsd-smp@freebsd.org
Received: from mass.osd.bsdi.com (adsl-63-202-176-64.dsl.snfc21.pacbell.net [63.202.176.64])
	by hub.freebsd.org (Postfix) with ESMTP id F202537B400
	for ; Thu, 7 Dec 2000 21:24:07 -0800 (PST)
Received: from mass.osd.bsdi.com (localhost [127.0.0.1])
	by mass.osd.bsdi.com (8.11.0/8.11.1) with ESMTP id eB85XRN00458;
	Thu, 7 Dec 2000 21:33:27 -0800 (PST)
	(envelope-from msmith@mass.osd.bsdi.com)
Message-Id: <200012080533.eB85XRN00458@mass.osd.bsdi.com>
X-Mailer: exmh version 2.1.1 10/15/1999
To: Terry Lambert
Cc: smp@FreeBSD.ORG
Subject: Re: Netgraph and SMP
In-reply-to: Your message of "Fri, 08 Dec 2000 03:50:04 GMT."
	<200012080350.UAA03298@usr08.primenet.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Date: Thu, 07 Dec 2000 21:33:27 -0800
From: Mike Smith
Sender: msmith@mass.osd.bsdi.com
Sender: owner-freebsd-smp@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.org

> > > In Solaris, the entry into the driver would hold a reference,
> > > which would result in the reference count being incremented.
> > > Only modules with a 0 reference count can be unloaded.  This
> > > same mechanism is used for vnodes, and for modules on which
> > > other modules depend.  It works well, and is very light weight.
> > 
> > The whole problem is that it *isn't* very light weight.
> > 
> > The reference count has to be atomic, which means that it ping-pongs
> > around from CPU to CPU, causing a lot of extra cache traffic.
> > 
> > OTOH, there's not much we can do about this short of going looking for
> > better multi-CPU reference count implementations once we have time to
> > worry about performance.
> 
> Actually, you can just put it in non-cacheable memory, and the
> penalty will only be paid by the CPU(s) doing the referencing.

Yes.  And you'll pay the penalty *all* the time.
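The pattern under discussion can be sketched with C11 atomics (an illustrative sketch only, not the Solaris or FreeBSD implementation; the `module_ref`, `mod_hold`, and `mod_rele` names are invented for this example):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical per-module reference count.  Each entry into the
 * driver takes a reference; the module may be unloaded only when
 * the count is zero. */
struct module_ref {
    /* The atomic word itself.  Every atomic read-modify-write must
     * pull its cache line into the modifying CPU's cache in exclusive
     * state, invalidating other CPUs' copies -- this is the line that
     * "ping-pongs" between CPUs under contention. */
    atomic_int refcount;
};

static void mod_hold(struct module_ref *m)
{
    /* Atomic increment: an exclusive cache-line acquisition. */
    atomic_fetch_add_explicit(&m->refcount, 1, memory_order_acquire);
}

static void mod_rele(struct module_ref *m)
{
    atomic_fetch_sub_explicit(&m->refcount, 1, memory_order_release);
}

static bool mod_can_unload(struct module_ref *m)
{
    return atomic_load_explicit(&m->refcount, memory_order_acquire) == 0;
}
```

With the counter in ordinary cacheable memory, an uncontended CPU at least gets cache hits on repeated references; marking the line uncacheable forces every single increment and decrement out to memory, which is the "pay the penalty *all* the time" objection above.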
At least when the ping-pong is going on, there will be times when
you'll hit the counter valid in your own cache.  Marking it uncacheable
(or even write-back cacheable) is worse.

> Still, for a very large number of CPUs, this would work fine
> for all but frequently contended objects.

Er.  We're talking about an object which is susceptible to being *very*
frequently contended.

> I think that it is making more and more sense to lock interrupts
> to a single CPU.

No, it's not.  Stop this nonsense.  It's not even practical on some of
the platforms we're looking at.

> What happens if you write to a page that's marked non-cachable
> on the CPU on which you are running, but cacheable on another
> CPU?  Does it do the right thing, and update the cache on the
> caching CPU?

Er, what are you smoking, Terry?  You never 'update' the cache on
another processor; the other processor snoops your cache/memory
activity and invalidates its own cache based on your broadcasts.

> If so, locking the interrupt processing for each
> card to a particular CPU could be very worthwhile, since you
> would never take the hit, unless you were doing something
> extraordinary.

With the way our I/O structure is currently laid out, this blows
because you end up serialising everything.

-- 
... every activity meets with opposition, everyone who acts has his
rivals and unfortunately opponents also.  But not because people want
to be opponents, rather because the tasks and relationships force
people to take different points of view.  [Dr. Fritz Todt]
           V I C T O R Y   N O T   V E N G E A N C E

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-smp" in the body of the message