Date: Tue, 15 Dec 1998 12:56:21 -0800 (PST)
From: Archie Cobbs <archie@whistle.com>
To: johan@granlund.nu (Johan Granlund)
Cc: phk@critter.freebsd.dk, jmb@FreeBSD.ORG, julian@whistle.com, lars@akerlings.t.se, current@FreeBSD.ORG, isdn@FreeBSD.ORG
Subject: Re: if_sppp is BROKEN!!!
Message-ID: <199812152056.MAA00665@bubba.whistle.com>
In-Reply-To: <Pine.BSF.4.05.9812151823070.11916-100000@phoenix.granlund.nu> from Johan Granlund at "Dec 15, 98 06:49:13 pm"
Johan Granlund writes:
> What I was thinking about was something more like low-bandwidth / high
> processing protocols. Whether the endpoint is a serial port, network
> interface or the network protocol stack (for tunneling) shouldn't be
> an issue if it's used right.
Here are some examples of what we use the netgraph stuff for on the
InterJet.
As quick background, netgraph nodes are in the kernel and represent
the atomic units of the netgraph system. Each node has 'hooks'
which can be connected to other netgraph nodes via their corresponding
hooks. If you're into graph theory, node == node and joined pair
of hooks == edge. Data travels in mbufs from node to node via
their connected hooks. There is also a synchronous command/response
message capability for configuration and other control operations.
Nodes normally run at splnet(), but can run at different spl's if
need be; we have routines to handle the required queueing.
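The node/hook/edge idea above can be sketched in a few lines of plain C. This is a toy model with made-up names, not the actual netgraph structures: a node owns hooks, connecting two hooks forms an edge, and data handed to a hook arrives at the peer hook's node.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy model of the netgraph idea.  Illustrative only -- the real
 * netgraph types and function names differ. */
struct hook;

struct node {
    const char *name;
    /* each node type supplies its own data-receive method */
    void      (*rcvdata)(struct node *self, const char *data, size_t len);
    char        last[64];           /* last payload seen (for the demo) */
};

struct hook {
    const char  *name;
    struct node *node;              /* node this hook belongs to */
    struct hook *peer;              /* hook at the other end of the edge */
};

/* Join two hooks: the graph-theory "edge". */
static void ng_connect(struct hook *a, struct hook *b)
{
    a->peer = b;
    b->peer = a;
}

/* Send data out a hook; it arrives at the peer hook's node. */
static void ng_send(struct hook *out, const char *data, size_t len)
{
    if (out->peer != NULL)
        out->peer->node->rcvdata(out->peer->node, data, len);
}

/* A trivial node type that just records what it receives. */
static void sink_rcvdata(struct node *self, const char *data, size_t len)
{
    if (len >= sizeof(self->last))
        len = sizeof(self->last) - 1;
    memcpy(self->last, data, len);
    self->last[len] = '\0';
}
```

In the real system the payload is an mbuf chain rather than a flat buffer, but the topology works the same way: a node never needs to know what kind of node sits on the other end of a hook.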
Examples of nodes we've actually written and use in production
(some of these are proprietary and can't be released just yet):
- Socket node
This is a netgraph node which is also a socket in the family
PF_NETGRAPH. Allows user mode programs to participate in the
netgraph system and act just like any other node. We've also
written a user level library to make communicating with netgraph
sockets easy.
- Async node
This is a netgraph node which is also a line discipline.
It does PPP async <-> sync conversion: it attaches to a serial
line and provides a hook for sending/receiving synchronous PPP
frames, which it converts to and from the async encoding on the line.
- Interface node
This is a netgraph node which is also a (point-to-point) interface.
It has hooks for each protocol family. Packets are forwarded
between the interface and the hooks.
- Cisco HDLC node
This node takes raw frames on one side and demultiplexes them
according to the Cisco HDLC protocol into IP, AppleTalk, etc.
Also handles keep-alives. Typically, you'd connect each protocol
hook to the corresponding protocol hook of an interface node.
- Frame relay node
Receives raw frame relay frames, and has hooks for each DLCI.
- Frame relay LMI
Hook this to DLCI 0 and DLCI 1023 to do auto-detection of
frame relay LMI type, and perform the appropriate LMI protocol.
- RFC 1490
Protocol demux'ing according to RFC 1490. Used on many frame
relay links.
- ISDN node
Is a device driver and a netgraph node. Performs the D channel
signalling and has a hook for each B channel. Accepts synchronous
commands for things like dialing, etc.
- Synchronous card node
We have a synchronous card that is also a node with a single
hook for the input/output of raw HDLC frames.
- Other nodes.. mostly for debugging...
Echo node - echo frames back on the hook whence they came
Hole node - consume and discard all frames received
Tee node - duplicate each frame that passes through it and
send the copy out via a different hook
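The debugging nodes in that last entry are simple enough to sketch as pure functions over a frame buffer. In the real system each would be a node type passing mbufs between hooks; here "hooks" are just output buffers and all names are illustrative:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Echo node: the frame goes back out the hook it arrived on. */
static size_t echo_node(const char *in, size_t len, char *same_hook_out)
{
    memcpy(same_hook_out, in, len);
    return len;
}

/* Hole node: every frame is consumed and discarded. */
static size_t hole_node(const char *in, size_t len)
{
    (void)in; (void)len;            /* swallow silently */
    return 0;                       /* nothing forwarded */
}

/* Tee node: the frame passes through unchanged, and a copy goes
 * out a second hook for snooping. */
static size_t tee_node(const char *in, size_t len,
                       char *through_out, char *snoop_out)
{
    memcpy(through_out, in, len);
    memcpy(snoop_out, in, len);
    return len;
}
```

Splicing a tee node into any edge of the graph gives you a packet tap without disturbing either neighbor, which is what makes these handy for debugging.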
This system has really worked out great. If we wanted to do something
wacky like run frame relay over the ISDN B channel, it would be
trivial to set up. If you've seen the InterJet synchronous port
configuration page, you can see how we support all the different
ways of configuring that port -- each configuration just represents
a different netgraph setup.
The more I work with it, the more I realize that a major benefit
is that it provides a clean and efficient way for user level programs
to communicate directly with low-level kernel drivers and stuff --
and in more interesting ways than a /dev entry allows. It's a step
up from the /dev/foo* and ioctl() method of communicating. The
simplicity of nodes means you can get the minimal kernel stuff done
first and develop the higher layer protocols in user space, where
debugging is easier. Then when it's all working, turn it into a
kernel netgraph node -- none of its neighbors will know the difference.
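The socket-node idea behind this can be modeled in a few lines. Here an ordinary socketpair stands in for the PF_NETGRAPH socket family (a simplification -- the real socket node speaks netgraph's own address format): the "kernel" half of a node delivers each frame into the socket, and the user process on the other end sees it as plain datagram I/O.

```c
#include <assert.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

/* Toy model of a socket node: frames handed to the kernel-side
 * endpoint pop out at the user-side endpoint as datagrams.
 * Names and the socketpair stand-in are illustrative only. */
static ssize_t pass_frame_to_user(int kern_end, int user_end,
                                  const char *frame, size_t len,
                                  char *buf, size_t buflen)
{
    /* "kernel" side of the node delivers the frame... */
    if (send(kern_end, frame, len, 0) < 0)
        return -1;
    /* ...and the "user" side receives it like any other datagram. */
    return recv(user_end, buf, buflen, 0);
}
```

A user-space PPP or signalling daemon built this way really does look like just another node to its neighbors, which is what makes the develop-in-userland-first approach work.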
Of course the other major benefit is modularity. Instead of having
if_ppp.c, if_sppp.c, if_foo1.c, if_foo2.c, where you are reimplementing
the interface behavior code over and over again, you just keep this
code in a single place: ng_iface.c, the interface node. Then anybody
who needs to export an interface can do so by connecting it to an
interface node (example: cisco hdlc node).
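The demux half of a node like the cisco hdlc one boils down to reading the protocol field of each frame and picking an output hook. A hedged sketch, assuming the standard Cisco HDLC header layout (address byte, control byte, then a 16-bit protocol field reusing Ethernet type codes); the hook names are made up for the example:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Standard Ethernet type codes as used in the Cisco HDLC
 * protocol field. */
#define CHDLC_TYPE_IP        0x0800
#define CHDLC_TYPE_APPLETALK 0x809B
#define CHDLC_TYPE_SLARP     0x8035  /* keep-alives travel as SLARP */

enum chdlc_hook { HOOK_IP, HOOK_ATALK, HOOK_KEEPALIVE, HOOK_NONE };

/* Inspect a raw frame and decide which hook it should go out;
 * each protocol hook would be wired to the matching hook of an
 * interface node. */
static enum chdlc_hook chdlc_demux(const uint8_t *frame, size_t len)
{
    if (len < 4)                    /* runt: address+control+proto */
        return HOOK_NONE;
    uint16_t proto = (uint16_t)((frame[2] << 8) | frame[3]);
    switch (proto) {
    case CHDLC_TYPE_IP:        return HOOK_IP;
    case CHDLC_TYPE_APPLETALK: return HOOK_ATALK;
    case CHDLC_TYPE_SLARP:     return HOOK_KEEPALIVE;
    default:                   return HOOK_NONE;
    }
}
```

Because the demux logic lives in its own node, the same interface node works unchanged behind Cisco HDLC, frame relay, or anything else.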
These little guys make great LKM/KLD modules too..
Anyway, one reason I'm hyping this a little bit is because we
(Julian and I) want to clean up, update, and unencumber the
netgraph code that was released a year or so ago, and check it in
so people can start playing with it more. It will take a little time
though, not to mention approval from the kernel gargoyles...
-Archie
___________________________________________________________________________
Archie Cobbs * Whistle Communications, Inc. * http://www.whistle.com
