Date:      Fri, 8 Dec 2000 14:04:54 -0800
From:      Alfred Perlstein <bright@wintelcom.net>
To:        Terry Lambert <tlambert@primenet.com>
Cc:        Poul-Henning Kamp <phk@critter.freebsd.dk>, Chuck Paterson <cp@bsdi.com>, Mike Smith <msmith@FreeBSD.ORG>, smp@FreeBSD.ORG
Subject:   Re: Netgraph and SMP
Message-ID:  <20001208140454.S16205@fw.wintelcom.net>
In-Reply-To: <200012082151.OAA24586@usr01.primenet.com>; from tlambert@primenet.com on Fri, Dec 08, 2000 at 09:51:41PM +0000
References:  <80663.976310686@critter> <200012082151.OAA24586@usr01.primenet.com>

* Terry Lambert <tlambert@primenet.com> [001208 13:52] wrote:
> > >For uses such as barriers for loading and unloading it is 
> > >possible to have the counters and entry barriers all PCPU. You can then
> > >use more complex mechanisms to set the low level barrier and interrogate
> > >the counters. Terry ->>may<<- view this as another way of doing
> > >what he is suggesting.
> > 
> > The thing that has me worried here is that using locking (as opposed
> > to atomic ops) in netgraph means that it will expose netgraph paths to
> > heavy-duty locking synchronization, since TCP, UDP, IP, and Mbuf will
> > also use a (separate) locking domain.
> 
> It ought to take a reference on entry to netgraph; this will let it
> avoid locks for everything but adjusting the reference count,
> and won't damage reentrancy.
> 
> It would mean stalling all of netgraph when loading or unloading
> a module, or when connecting or disconnecting nodes, but this
> might be OK.
> 
> Sort of a BGL with multiple users/single manipulator for Netgraph.
> 
> 
> I think it's overly complicated, but you could also support the
> idea of per CPU connectedness, which would build the graph out
> on each CPU, making the relationship between nodes a per CPU
> thing.  This would mean that you would not need to contend to
> build a graph per CPU, but that you would have to duplicate the
> build on each CPU.  This would reduce to dealing with the lock
> only on load and unload (again).  Each graph would need to set
> pointers to a shared state data object, though, since they are
> effectively the same objects with different function pointer
> linkages.
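
For the per-CPU connectedness idea, I'd picture each CPU's copy of a
node carrying its own edge pointers but pointing back at one shared
state blob, roughly like this (ng_pcpu_node and ng_shared_state are
made-up names, just to sketch the shared-pointer part):

/*
 * Shared, CPU-independent node data: allocated once per node and
 * referenced by every CPU's copy of the graph.
 */
struct ng_shared_state {
	void	*ss_private;	/* the node's real per-type data */
	int	ss_refs;	/* how many per-CPU copies point here */
};

/*
 * Per-CPU view of a node: the linkage (who is connected to whom) is
 * duplicated on every CPU, so the per-packet path never contends.
 */
struct ng_pcpu_node {
	struct ng_pcpu_node	**pn_hooks;	/* this CPU's copy of the edges */
	int			  pn_numhooks;
	struct ng_shared_state	 *pn_shared;	/* points at the common data */
};

/* One graph root per CPU; only load/unload/connect walks them all. */
struct ng_pcpu_node *ng_graph[NCPU];

Load/unload/connect would have to walk all NCPU copies, but the
per-packet path only ever touches its own CPU's graph.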

*hack* *hack* *hack* *hack* *hack* *hack* *hack* *hack* 

(me not you. :))

mutex_t	netgraph_locks[NCPU];	/* one entry lock per CPU */
int	netgraph_refs[NCPU];	/* one reference count per CPU */

int
unload(void)
{
	int i;

	for (i = 0; i < NCPU; i++) {
		/* Can't get CPU i's lock: someone is inside netgraph. */
		if (!mutex_try(&netgraph_locks[i]))
			goto error;
		/* Lock held, but CPU i still has references outstanding. */
		if (netgraph_refs[i] != 0) {
			i++;	/* drop this lock too on the way out */
			goto error;
		}
	}

	do_unload();	/* every CPU is stalled; locks still held */
	return (0);
error:
	/* Release only what we actually locked: locks i-1 down to 0. */
	while (i--)
		mutex_unlock(&netgraph_locks[i]);

	return (EBUSY);
}
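
The entry/exit half (Terry's "reference on entry") that keeps
netgraph_refs honest could be as dumb as this; curcpu stands in for
whatever gives you the current CPU's index, and it assumes the thread
doesn't hop CPUs between enter and exit:

/*
 * Wrappers around any call into netgraph.  Each CPU bumps its own
 * counter under its own lock, so once unload() above holds all NCPU
 * locks, the counts it reads can't change underneath it.
 */
void
netgraph_enter(void)
{
	mutex_lock(&netgraph_locks[curcpu]);
	netgraph_refs[curcpu]++;
	mutex_unlock(&netgraph_locks[curcpu]);
}

void
netgraph_exit(void)
{
	mutex_lock(&netgraph_locks[curcpu]);
	netgraph_refs[curcpu]--;
	mutex_unlock(&netgraph_locks[curcpu]);
}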

*hack* *hack* *hack* *hack* *hack* *hack* *hack* *hack* 

?

-- 
-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]
"I have the heart of a child; I keep it in a jar on my desk."

