Date: Fri, 7 Nov 2003 16:53:05 -0500 (EST)
From: Robert Watson <rwatson@freebsd.org>
To: Jerry Toung <jtoung@arc.nasa.gov>
Cc: hackers <hackers@freebsd.org>
Subject: Re: sending messages, user process <--> kernel module
Message-ID: <Pine.NEB.3.96L.1031107164220.21490I-100000@fledge.watson.org>
In-Reply-To: <200311071202.50770.jtoung@arc.nasa.gov>
On Fri, 7 Nov 2003, Jerry Toung wrote:

> I am trying to do asynchronous send/receive between a user process that
> I am writing and a kernel module that I am also writing. I thought
> about implementing something similar to the unix routing socket, but I
> will have to define a new domain and protosw. Besides that idea, what
> else would you suggest?

This is actually somewhat of a FAQ, since it comes up with relative
frequency. I should dig up my most recent answer and forward that to you,
but the quick answers off the top of my head are:

(1) One frequent answer is a pseudo-device -- for example, /dev/klog
    buffers kernel log output for syslogd to pick up asynchronously. Arla
    and Coda both use pseudo-devices as a channel for local procedure
    calls to/from userspace to support their respective file systems
    using userspace cache managers. (A minimal sketch of this approach
    appears after the signature below.)

(2) Have the kernel open a file system FIFO and have the process listen
    on that FIFO. The client-side NFS locking code uses /var/run/lock to
    ship locking events to a userspace rpc.lockd. However, responses from
    rpc.lockd are then delivered to the kernel using a system call
    synchronously from the user process, as opposed to via a FIFO. (A
    small userspace reader for this approach also appears below.)

(3) The routing socket approach can work quite well, especially if you
    need multicast semantics for messages, not to mention well-defined
    APIs for managing buffer size, etc. Another instance of this approach
    is PF_KEY, used for IPsec key management. As you point out, it
    requires digging into other code and a fair amount of implementation
    overhead.

(4) You can have kernel code create and listen on sockets in existing
    domains, including UNIX domain sockets and TCP/IP sockets. The NFS
    client and server code both make use of sockets directly in the
    kernel for RPCs.

One of the particularly nice benefits of (2) and (4) is that it's easy to
implement userspace test code, since the FIFO/socket is just used as a
rendezvous and doesn't care whether the other end is in the kernel or
not. Likewise, the blocking/buffering/... semantics are quite well
defined, which means you won't be tracking down wakeups, select
semantics, thread behavior and synchronization, etc, as you might in (1).

Robert N M Watson             FreeBSD Core Team, TrustedBSD Projects
robert@fledge.watson.org      Network Associates Laboratories
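
To make approach (1) concrete, here is a minimal pseudo-device sketch. It
is written against a newer FreeBSD cdevsw layout than the one current in
2003 (the 4.x-era struct had a different set of fields), and the device
name "mymsg" and the fixed message are made up for the example; a real
module would queue messages from elsewhere in the kernel and sleep when
the queue is empty.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/errno.h>
#include <sys/module.h>
#include <sys/kernel.h>
#include <sys/conf.h>
#include <sys/uio.h>

static d_read_t mymsg_read;

/* Hypothetical device name -- substitute your module's own. */
static struct cdevsw mymsg_cdevsw = {
	.d_version =	D_VERSION,
	.d_read =	mymsg_read,
	.d_name =	"mymsg",
};

static struct cdev *mymsg_dev;

/*
 * read(2) handler: copy data out to the reading user process.  A fixed
 * string keeps the sketch short; uiomove() handles the copyout and
 * offset/residual bookkeeping.
 */
static int
mymsg_read(struct cdev *dev, struct uio *uio, int ioflag)
{
	static const char msg[] = "message from the kernel\n";
	size_t len = sizeof(msg) - 1;
	size_t off = uio->uio_offset;

	if (off >= len)
		return (0);
	return (uiomove(__DECONST(char *, msg) + off,
	    MIN(len - off, (size_t)uio->uio_resid), uio));
}

/* Module event handler: create /dev/mymsg on load, remove it on unload. */
static int
mymsg_modevent(module_t mod, int what, void *arg)
{
	switch (what) {
	case MOD_LOAD:
		mymsg_dev = make_dev(&mymsg_cdevsw, 0, UID_ROOT, GID_WHEEL,
		    0600, "mymsg");
		return (0);
	case MOD_UNLOAD:
		destroy_dev(mymsg_dev);
		return (0);
	default:
		return (EOPNOTSUPP);
	}
}

DEV_MODULE(mymsg, mymsg_modevent, NULL);

The userspace side then simply open(2)s /dev/mymsg and read(2)s from it,
the same way syslogd reads /dev/klog.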
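The testability benefit of approach (2) is easiest to see from the
userspace side, which is nothing more than an ordinary reader on a FIFO.
A small sketch follows; the path /var/run/mymod.fifo is made up for the
example and should be whatever rendezvous path the kernel side opens.

#include <sys/types.h>

#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	char buf[512];
	ssize_t n;
	int fd;

	/* Hypothetical rendezvous path agreed on with the kernel side. */
	fd = open("/var/run/mymod.fifo", O_RDONLY);
	if (fd == -1)
		err(1, "open /var/run/mymod.fifo");

	/* Block until the writer (kernel module or a test stand-in) sends data. */
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		fwrite(buf, 1, (size_t)n, stdout);
	if (n == -1)
		err(1, "read");

	close(fd);
	return (0);
}

Because the daemon only ever sees a file descriptor, running
"mkfifo /var/run/mymod.fifo; echo test > /var/run/mymod.fifo" exercises
exactly the same code path with no kernel module loaded at all, which is
the point made above about (2) and (4).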