Date:      Fri, 11 Jul 2003 13:22:18 -0700
From:      John-Mark Gurney <gurney_j@efn.org>
To:        Alexander Leidinger <Alexander@Leidinger.net>
Cc:        multimedia@freebsd.org
Subject:   Re: BSD video capture emulation question
Message-ID:  <20030711202218.GK35337@funkthat.com>
In-Reply-To: <20030711220709.3cdac33e.Alexander@Leidinger.net>
References:  <4339238.1057932704927.JavaMail.nobody@kermit.psp.pas.earthlink.net> <20030711172828.GI35337@funkthat.com> <20030711220709.3cdac33e.Alexander@Leidinger.net>

Alexander Leidinger wrote this message on Fri, Jul 11, 2003 at 22:07 +0200:
> On Fri, 11 Jul 2003 10:28:28 -0700
> John-Mark Gurney <gurney_j@efn.org> wrote:
> 
> > For the most part, the kernel would just export many different device
> > nodes, one for each part of the card.  I've started work on a design
> > document.  It is VERY rough and incomplete, but I'll put it up.
> > 
> > http://people.FreeBSD.org/~jmg/videobsd.html
> 
> Did you look at v4l and v4l2 (I haven't)? And at gstreamer and NMM
> (http://www.networkmultimedia.org/)? I listed the former ones so we
> can see what they did wrong and make sure "our" API doesn't repeat
> the same mistakes, and the latter ones to see what's needed at the
> application level (what kinds of actions the API should allow).
> Maybe talking with some of the developers of those userland
> programs/libs will help; if it is of use, I can get in contact with
> the NMM people personally.

I need to look at them more, but when I was writing the driver for
the Zoran, there was a lot of code I was going to have to write that
would end up being Zoran specific even though it wouldn't be hard to
make generic.  That is the underlying reason for this effort: if we
make it easier to write drivers for hardware, we'll have more
hardware support.  If/when this interface is completed, a halfway
competent hacker should be able to pound out a new driver for a card
in a couple of weeks.

Then, with my recent work on USB and the realization that you can do
quite a bit from userland (see the vid port for OV511-based USB
cams), it makes even more sense to move more of the work to userland.

> > I'm still debating how much smarts should be put into the kernel.
> > Part of me wants to do a good portion of it there to prevent the
> > user from doing something stupid and damaging hardware (like
> > setting two sources to drive the clocks of the video bus at the
> > same time).  But the more I think about it, the more I want to do
> > most if not all of it in userland.  This would make it easier to
> > support USB webcams and FireWire devices w/o the user of the
> > library even knowing there was a difference.
> 
> We don't want a malicious program to destroy the hardware, do we?

Nor should you give untrusted users access to the device nodes.  If
you give someone permission to run the apps, then you're responsible
for those apps.

As far as hardware damage goes, I'm no expert on what happens
electrically when two drivers drive the same wire, but it could
result in a short that renders one or two chips broken, which of
course would make the card useless.

> > I'm not sure if I'll go as far as Windows does with being able to
> > stick n filters between the device and your output.  Adding this
> > support shouldn't be hard since it's a library, and we can add
> > additional functions at a later date.
> 
> Think about those cards where you can put a video stream in and get a
> transformed video stream out (MPEG encoder). One end can be represented
> as a video sink and the other one as a video source (so you don't need a
> "filter", just connect sources with sinks).
> 
> Ideally you want to, e.g., connect one video source (e.g. DV format
> from a camera connected via FireWire) to the video sink of an MPEG
> encoder, and the video source of the MPEG encoder should get written
> to disk and at the same time fed to the video sink of the TV-out of
> the DVB-S card (MPEG decoder).
> 
> The API should be able to do as much of this in the kernel as
> possible, e.g. if I connect the DV video source to the MPEG video
> sink and I don't have some kind of "tee" in between, then the data
> shouldn't leave the kernel.  And if I have a "tee" in between (as in
> the above example: writing to disk and to the DVB-S card), the data
> transfer between entities that the kernel can manage should happen
> in the kernel.

I haven't really thought much about this part of the interface, but
it shouldn't be hard to achieve within the framework.  Since most of
the good cards support DMA (if they don't, then why worry about
performance?), it's trivial to create a userland buffer that one card
DMAs into, pass the buffer to the decoder card, and have that card
DMA from system memory.
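
To make that concrete, here is a very rough sketch of the userland
side of such a hand-off.  The device paths and the MPEG_SET_BUF ioctl
are placeholders I made up for illustration; they are not part of any
existing driver or of the proposed API:

    /* Sketch only: device names and MPEG_SET_BUF are invented. */
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <fcntl.h>

    #define FRAME_SIZE   (768 * 576 * 2)         /* one PAL frame, 16bpp */
    #define MPEG_SET_BUF _IOW('M', 0, void *)    /* placeholder ioctl    */

    int   cap, enc;
    void *buf;

    void
    setup(void)
    {
            cap = open("/dev/video0", O_RDWR);   /* capture source */
            enc = open("/dev/mpegenc0", O_RDWR); /* encoder sink   */

            /* one buffer the capture card DMAs into ... */
            buf = mmap(NULL, FRAME_SIZE, PROT_READ | PROT_WRITE,
                       MAP_SHARED, cap, 0);

            /* ... and the encoder later DMAs back out of */
            ioctl(enc, MPEG_SET_BUF, &buf);
    }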

I was also going to write a contigmem device so that video drivers
don't need to do their own contigmalloc.  Instead you'd open this
device, grab a chunk of physically contiguous memory (via mmap), and
pass it to the driver.  This would let bt848s DMA directly into a
userland buffer w/o extra tricks in the driver.
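
Continuing the sketch above (again, /dev/contigmem and the
VIDEO_SET_DMA_BUF ioctl are made-up names), the bt848 case would only
change where the buffer comes from:

    #define VIDEO_SET_DMA_BUF _IOW('V', 0, void *)  /* placeholder ioctl */

    int   cm   = open("/dev/contigmem", O_RDWR);    /* contiguous pages */
    int   bktr = open("/dev/bktr0", O_RDWR);        /* bt848 capture    */

    /* a physically contiguous chunk the bt848 can DMA into directly */
    void *frame = mmap(NULL, FRAME_SIZE, PROT_READ | PROT_WRITE,
                       MAP_SHARED, cm, 0);

    ioctl(bktr, VIDEO_SET_DMA_BUF, &frame);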

So, in short, with a proper kernel interface it would not be hard to
have the data "not leave the kernel".  You will have to suffer a
context switch when the data arrives and userland notifies the kernel
that the next stage can start work on it.  It MAY be possible to do
something special inside the kernel to avoid even that, but I would
reserve that for a later date.
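
The per-frame loop in the sketch then reduces to one notification per
buffer (again placeholder ioctls); the frame data itself stays in the
shared buffer and is never copied by the CPU:

    #define VIDEO_WAIT_FRAME _IO('V', 1)    /* placeholder ioctls */
    #define MPEG_START_FRAME _IO('M', 1)

    for (;;) {
            /* the context switch: sleep until the capture DMA completes */
            ioctl(cap, VIDEO_WAIT_FRAME);
            /* kick the encoder; it DMAs from the buffer mapped above */
            ioctl(enc, MPEG_START_FRAME);
    }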

-- 
  John-Mark Gurney				Voice: +1 415 225 5579

     "All that I will do, has been done, All that I have, has not."


