Date:      Mon, 11 Aug 1997 00:52:24 -0500
From:      Chris Csanady <ccsanady@friley01.res.iastate.edu>
To:        freebsd-hackers@freebsd.org
Subject:   DISCUSS: interface for raw network driver..
Message-ID:  <199708110552.AAA01853@friley01.res.iastate.edu>

I am writing a pseudo-device that basically sits on top of a network driver,
does memory management, and exports the buffers to user space.  For this to
work properly, the driver will have to support some extra functionality, but
it would be relatively simple.  The goal is to provide the highest possible
performance while maintaining basic system protection boundaries.  There are
basically two parts to it which I would like people to comment on.

First, a simple interface to network drivers that provides the lowest-level
send and receive functionality for use with such a device:

I would like to keep the interface between the two drivers as simple and
flexible as possible, so as to permit future implementation in other drivers.
Basically, I am planning on the following (a rough sketch in C follows below):

o A driver-specific send function that takes a kva and a length.

o An ack function in the pseudo-device to call when the send is complete.

o A buffer allocation function in the pseudo-device to be called from the
  driver, for allocating receive frame buffers.

o An input function in the pseudo-device to be called upon reception of
  a packet.

Is this a reasonable place to draw the line?
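
To make that concrete, here is what the two function tables the drivers
exchange might look like.  All of these names are invented; they are only
meant to illustrate the four entry points above:

#include <sys/types.h>

/*
 * Hypothetical sketch of the driver <-> pseudo-device interface.
 * Nothing here exists yet; the names just mark where the line is drawn.
 */

/* Supplied by the network driver at attach time. */
struct raw_if_driver {
        /* Queue a frame for transmission, given a kva and a length. */
        int     (*ri_send)(void *softc, caddr_t kva, size_t len);
        void    *ri_softc;              /* driver's private state */
};

/* Supplied by the pseudo-device at attach time. */
struct raw_if_client {
        /* Called by the driver when a send is complete. */
        void    (*rc_ack)(void *arg, caddr_t kva);
        /* Called by the driver to allocate a receive frame buffer. */
        caddr_t (*rc_getbuf)(void *arg, size_t len);
        /* Called by the driver upon reception of a packet. */
        void    (*rc_input)(void *arg, caddr_t kva, size_t len);
        void    *rc_arg;                /* pseudo-device's private state */
};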

One thing I am stuck on is how to "attach" the pseudo-device to the net
interface, and how the driver and the pseudo-device should export their
basic functions to each other.
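
One possibility, purely as a sketch: each cooperating driver registers an
attach hook under its interface name, and the pseudo-device calls through
it to exchange the two function tables above.  (struct rawnet_hook and
rawnet_attach() are invented names.)

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/errno.h>

/* Hypothetical registry that cooperating drivers add themselves to. */
struct rawnet_hook {
        struct rawnet_hook *rh_next;
        const char      *rh_ifname;     /* e.g. "de0" */
        int     (*rh_attach)(struct raw_if_client *,
                    struct raw_if_driver **);
};
static struct rawnet_hook *rawnet_hooks;

/* Called by the pseudo-device to bind itself to a driver by name. */
int
rawnet_attach(const char *ifname, struct raw_if_client *client,
    struct raw_if_driver **driverp)
{
        struct rawnet_hook *h;

        for (h = rawnet_hooks; h != NULL; h = h->rh_next)
                if (strcmp(h->rh_ifname, ifname) == 0)
                        return ((*h->rh_attach)(client, driverp));
        return (ENXIO);         /* no such cooperating driver */
}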

Second, the device itself:

The device would have associated with it a large chunk of memory,
preferably a multiple of the recently implemented 4MB pages.
(Thanks John!)  It would also manage a set of endpoints on which you
could do IO.  Once created, they would include foreign addresses,
ports, queues, max queue lengths, seq and ack numbers (maybe someday),
etc.  Each one will correspond to a VCI, and this is how the user
will differentiate between them.
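
As a rough sketch, an endpoint might carry something like the following
state (all field names invented for illustration):

#include <sys/types.h>
#include <sys/queue.h>
#include <netinet/in.h>

/* Hypothetical per-endpoint state; one of these per VCI. */
struct rawdev_endpoint {
        u_int           ep_vci;         /* how the user names this endpoint */
        struct in_addr  ep_faddr;       /* foreign address */
        u_short         ep_fport;       /* foreign port */
        u_int           ep_qlen;        /* current queue length */
        u_int           ep_qmax;        /* max queue length */
        /* seq and ack numbers could live here, someday */
        TAILQ_HEAD(, rawdev_buf) ep_rxq; /* queued receive buffers */
};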

A program will use it by opening the special file and mmapping the
whole chunk of device memory.  It will then create an endpoint,
specifying the foreign address, port number, queue length
restrictions, etc.  Now it can do IO by specifying an offset, size,
and VCI.  (Currently, I am using a series of ioctls for this.  They
include allocation, freeing, sending, etc. of endpoints and buffers;
a rough example follows.)
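
From user space, the flow would look roughly like this.  The device node,
ioctl names, and argument structures below are all invented for
illustration; only the open/mmap/ioctl pattern is the point:

#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <fcntl.h>
#include <stdlib.h>

/* Hypothetical ioctl arguments; the real ones belong to the driver. */
struct rawdev_ep_args {
        struct in_addr  ea_faddr;       /* foreign address */
        u_short         ea_fport;       /* foreign port */
        u_int           ea_qmax;        /* max queue length */
        u_int           ea_vci;         /* filled in by the driver */
};
struct rawdev_io_args {
        off_t   ia_off;                 /* offset into the mmapped chunk */
        size_t  ia_len;
        u_int   ia_vci;
};
#define RAWDEV_MEMSIZE  (4 * 1024 * 1024)       /* one 4MB page */
#define RAWDEVIOCCREATE _IOWR('R', 1, struct rawdev_ep_args)
#define RAWDEVIOCSEND   _IOW('R', 2, struct rawdev_io_args)

int
main(void)
{
        struct rawdev_ep_args ep;
        struct rawdev_io_args io;
        char *mem;
        int fd;

        fd = open("/dev/rawnet0", O_RDWR);      /* hypothetical node */
        if (fd < 0)
                exit(1);
        mem = mmap(NULL, RAWDEV_MEMSIZE, PROT_READ | PROT_WRITE,
            MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED)
                exit(1);

        /* Create an endpoint; the driver hands back a VCI. */
        ep.ea_faddr.s_addr = inet_addr("10.0.0.2");
        ep.ea_fport = htons(5001);
        ep.ea_qmax = 32;
        ioctl(fd, RAWDEVIOCCREATE, &ep);

        /* All IO names buffers by (offset, size, VCI) in the shared chunk. */
        io.ia_off = 0;
        io.ia_len = 1024;
        io.ia_vci = ep.ea_vci;
        ioctl(fd, RAWDEVIOCSEND, &io);
        return (0);
}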

The obvious drawback is that any processes using this interface will be
able to trash each other's buffers, but this seems a reasonable
compromise.  With only one page in use, there will be no TLB thrashing
and no overhead for vm mappings.  Ideally, it will be as close as
possible to a user-level network driver without compromising
overall system integrity.  Our initial use will be providing high
performance, low latency communication for our cluster.

Initially, this architecture will be used for gigabit and fast ethernet,
although if there are any glaring problems that would prevent its use on
other network architectures, I would like to know.  Even with ethernet,
however, it will allow the use of non-standard frame sizes on hardware
that supports them, which will be a huge win.

Thoughts?

Chris Csanady





