Date:      Fri, 7 Nov 2003 14:41:22 -0800
From:      Jerry Toung <jtoung@arc.nasa.gov>
To:        Robert Watson <rwatson@freebsd.org>
Cc:        hackers <hackers@freebsd.org>
Subject:   Re: sending messages, user process <--> kernel module
Message-ID:  <200311071441.22114.jtoung@arc.nasa.gov>
In-Reply-To: <Pine.NEB.3.96L.1031107164220.21490I-100000@fledge.watson.org>
References:  <Pine.NEB.3.96L.1031107164220.21490I-100000@fledge.watson.org>

Thank you very much for the input.

On Friday 07 November 2003 01:53 pm, Robert Watson wrote:
> On Fri, 7 Nov 2003, Jerry Toung wrote:
> > I am trying to do asynchronous send/receive between a user process
> > that I am writing and a kernel module that I am also writing.  I
> > thought about implementing something similar to the Unix routing
> > socket, but I will have to define a new domain and protosw.  Besides
> > that idea, what else would you suggest?
>
> This is actually somewhat of a FAQ, since it comes up with relative
> frequency.  I should dig up my most recent answer and forward that to
> you, but the quick answers off the top of my head are:
>
> (1) One frequent answer is a pseudo-device -- for example, /dev/log
>     buffers kernel log output for syslogd to pick up asynchronously.
>     Arla and Coda both use pseudo-devices as a channel for local
>     procedure calls to/from userspace to support their respective file
>     systems using userspace cache managers.
>
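A minimal sketch of the pseudo-device approach, for concreteness: the
msgdev names are invented, and the cdevsw initialization below follows
the newer FreeBSD style, so details will vary with the release you
target.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/module.h>
#include <sys/conf.h>
#include <sys/uio.h>

static struct cdev *msgdev_dev;

/* read(2) on /dev/msgdev hands a canned message to userspace. */
static int
msgdev_read(struct cdev *dev, struct uio *uio, int ioflag)
{
	static const char msg[] = "hello from the kernel\n";
	size_t len = sizeof(msg) - 1;

	if (uio->uio_offset >= len)
		return (0);		/* EOF after one message */
	return (uiomove(__DECONST(char *, msg) + uio->uio_offset,
	    len - uio->uio_offset, uio));
}

static struct cdevsw msgdev_cdevsw = {
	.d_version =	D_VERSION,
	.d_read =	msgdev_read,
	.d_name =	"msgdev",
};

static int
msgdev_modevent(module_t mod, int type, void *data)
{
	switch (type) {
	case MOD_LOAD:
		msgdev_dev = make_dev(&msgdev_cdevsw, 0, UID_ROOT,
		    GID_WHEEL, 0600, "msgdev");
		return (0);
	case MOD_UNLOAD:
		destroy_dev(msgdev_dev);
		return (0);
	default:
		return (EOPNOTSUPP);
	}
}

DEV_MODULE(msgdev, msgdev_modevent, NULL);

A userspace client then simply open(2)s /dev/msgdev and read(2)s from
it; a real module would replace the canned string with a queue that
kernel producers append to and that read() drains.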
> (2) Have the kernel open a file system FIFO and have the process
>     listen on that FIFO.  The client-side NFS locking code uses
>     /var/run/lock to ship locking events to a userspace rpc.lockd.
>     However, responses from rpc.lockd are then delivered to the kernel
>     using a system call made synchronously from the user process,
>     rather than via a FIFO.
>
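The userland half of the FIFO approach is just blocking reads on the
FIFO; a sketch, with a made-up path and record layout:

#include <err.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical fixed-size event record written by the kernel. */
struct mod_event {
	uint32_t	type;
	uint32_t	arg;
};

int
main(void)
{
	struct mod_event ev;
	ssize_t n;
	int fd;

	/* Create the rendezvous first: mkfifo /var/run/mymodule */
	fd = open("/var/run/mymodule", O_RDONLY);
	if (fd == -1)
		err(1, "open");
	/* read(2) blocks until the kernel writes a record. */
	while ((n = read(fd, &ev, sizeof(ev))) == sizeof(ev))
		printf("event type %u, arg %u\n", ev.type, ev.arg);
	if (n == -1)
		err(1, "read");
	return (0);
}

This is also why testing is easy: anything that can write records into
the FIFO exercises the daemon with no kernel code loaded at all.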
> (3) The routing socket approach can work quite well, especially if you
>     need multicast semantics for messages, not to mention well-defined
>     APIs for managing buffer size, etc.  Another instance of this
>     approach is PF_KEY, used for IPsec key management.  As you point
>     out, it requires digging into other code and a fair amount of
>     implementation overhead.
>
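For a feel for the client-side semantics of (3), this reads messages
off the real routing socket; a new domain of your own would look much
the same to its consumers:

#include <sys/types.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/route.h>
#include <err.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	char buf[2048];
	struct rt_msghdr *rtm;
	ssize_t n;
	int s;

	s = socket(PF_ROUTE, SOCK_RAW, 0);
	if (s == -1)
		err(1, "socket");
	/* Every process with an open routing socket sees each message. */
	while ((n = read(s, buf, sizeof(buf))) > 0) {
		rtm = (struct rt_msghdr *)buf;
		printf("routing message type %d, %zd bytes\n",
		    rtm->rtm_type, n);
	}
	return (0);
}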
> (4) You can have kernel code create and listen on sockets in existing
>     domains, including UNIX domain sockets and TCP/IP sockets.  The NFS
>     client and server code both make use of sockets directly in the
>     kernel for RPCs.
>
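A rough sketch of the in-kernel side of (4), sending a datagram over a
socket created inside the kernel.  The function, port, and payload are
invented, and the exact signatures of socreate(), sosend(), and the
mbuf allocation flags have shifted between FreeBSD versions, so treat
this as the shape of the pattern rather than the code the NFS client
actually uses:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/mbuf.h>
#include <sys/proc.h>
#include <sys/socket.h>
#include <sys/socketvar.h>
#include <netinet/in.h>

static int
send_hello(struct thread *td)
{
	struct socket *so;
	struct sockaddr_in sin;
	struct mbuf *m;
	int error;

	error = socreate(AF_INET, &so, SOCK_DGRAM, IPPROTO_UDP,
	    td->td_ucred, td);
	if (error != 0)
		return (error);

	bzero(&sin, sizeof(sin));
	sin.sin_len = sizeof(sin);
	sin.sin_family = AF_INET;
	sin.sin_port = htons(9999);
	sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

	/* One small datagram; sosend() consumes the mbuf chain. */
	m = m_gethdr(M_WAITOK, MT_DATA);
	m->m_len = m->m_pkthdr.len = sizeof("hello") - 1;
	bcopy("hello", mtod(m, char *), m->m_len);

	error = sosend(so, (struct sockaddr *)&sin, NULL, m, NULL, 0, td);
	soclose(so);
	return (error);
}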
> One particularly nice benefit of (2) and (4) is that it's easy to
> implement userspace test code, since the FIFO/socket is just used as a
> rendezvous and doesn't care whether the other end is in the kernel or
> not.  Likewise, the blocking/buffering/... semantics are quite well
> defined, which means you won't be tracking down wakeups, select
> semantics, thread behavior and synchronization, etc., as you might in
> (1).
>
> Robert N M Watson             FreeBSD Core Team, TrustedBSD Projects
> robert@fledge.watson.org      Network Associates Laboratories


