Date: Thu, 22 Feb 1996 01:19:00 -0500 (EST)
From: "matthew c. mead" <mmead@Glock.COM>
To: james@miller.cs.uwm.edu (Jim Lowe)
Cc: hackers@freebsd.org, multimedia@star-gate.com
Subject: Re: frameserv and clients
Message-ID: <199602220619.BAA16573@Glock.COM>
In-Reply-To: <199602212226.QAA07493@miller.cs.uwm.edu> from "Jim Lowe" at Feb 21, 96 04:26:10 pm
Jim Lowe writes:

> > Hmm.  I have vic on my system, but have not been able to get its
> > quickcam patches to actually take pictures from my quickcam.
> > Unfortunately, Virginia Tech won't pipe MBONE out to the BEV Ethernet
> > routers, so I can't get MBONE without someone tunnelling, and the
> > closest person is, you guessed it, Virginia Tech.
>
> I would probably try and debug the qcam stuff to make it work.

The problem was the way it handles the white balance, brightness, and
contrast settings.  Sujal and I got it working tonight, sort of.  You have
to set the controls with sliders each time you use the camera.  There
should be a way to hardcode these in a config file if you know what
settings work best for your camera in your lighting situation.

Another note: we couldn't get vic to do two-way video.  We don't have
multicast between us, so we would run "vic glock.com/5000", at which point
I could see him but he couldn't see me.  So we ran two copies each, one
connecting to his host and one to mine.  That worked.  Eventually, for
some reason, one of the vics on each of our ends would crash, and then
the first vic worked bidirectionally.  Odd!

> You can run all the mbone tools vic, vat, etc... unicast as well as
> multicast.  You can also run them on your local network as multicast.
> You will have the same restrictions if you use vic or write your
> own application.

Right, I know.

> > Hmm.  Ok, so when vic runs it is actually two parts?  It's a grabber
> > program that knows how to snag frames, and a client program which
> > knows how to get them from the grabber program?
>
> Yes, you can look at it that way.  Vic grabs frames from video capture
> cards on various platforms.  It has various capabilities including hooks
> for hardware mpeg, jpeg, etc... encoding.  It can transmit these packets
> to the network with unicast or multicast.  It is also a receiver of
> this information.  Vic knows how to decode RTP packets and display them
> in an X11 window.
> I guess you could call it a full duplex video application program.
> The nice thing about vic is that it already uses IP and RTP so you
> don't have to reinvent the API.  You can use RTP as your video API...

Ack!  I guess I didn't make myself clear about why I want to build this
frameserver.  The idea is to make an intermediate interface to video
grabbing hardware that is fast and allows multiple clients to access
that hardware at the same time.  Sujal and I talked a little about this
tonight, and he thought that the kernel driver qcam0 could be made to
accept multiple readers to achieve the same thing.  Any ideas on this?
Should I quit development?

> Vic (as well as your frame grabber) will require exclusive access to
> the device to grab the frames.  You want to put the frames on the
> network so many things can read the same frames.  Vic does this.
> You can use multicast or unicast to do this.  I suppose you can also
> use the local loopback device.  Another method would be to use
> shared memory -- but then you would need a shared memory network
> extension for your machine (I think I saw one of these somewhere-mnfs?).

I understand.  The thing is, this frameserv can sit in the background and
not eat CPU until it's needed, and it's the only thing that will have
exclusive access to the device.  To get data from the device, other
programs connect to the frameserv, which knows how to handle multiple
clients! :-)

> The major problem with grabbing frames is the amount of bandwidth
> things consume.  If you have a quickcam (greyscale device) with a
> small frame size (160x120) it doesn't consume much bandwidth.
>
> 160x120, greyscale, 1 frame/second uses  19.2 kbytes/second.
> 320x240, greyscale, 1 frame/second uses  76.8 kbytes/second.
> 640x480, greyscale, 1 frame/second uses 307.2 kbytes/second.
>
> Multiply by 30 for real-time video (30fps), then by either
> 2 for yuv 4:2:2 encoding or 4 for true color (possibly 3).
> You will note that these numbers get real big real fast.  One will
> need some sort of compression algorithm to deal with this.  Vic
> already has h.261 and nv encoding and has been designed to deal with
> hardware compression.  You can easily add whatever encoding algorithm
> you wish to vic.  And it outputs something we all know about, namely
> RTP.

Right - but the best you'll get out of the quickcam is something like
15fps at a low resolution.  That amount of data is not enough to saturate
a unix domain socket connection.

> My only point, and feel free to ignore me, is that a network frame
> grabber is already available.  It has all the tools one needs to do
> everything you described and it does much more.  By developing RTP
> tools to work with it, you don't need to reinvent the wheel and there
> may be other uses for tools you invent other than the ones originally
> intended.

Your point is very well taken!  My point is this: for vic to become the
access method for frame grabbing of any sort on a computer, it's gotta
sit around all the time.  It's not geared for this.  It's an X
application with lots of bells and whistles.  What I'm interested in
doing is providing a minimalistic framework for accessing the quickcam
and providing its data to multiple clients on the same host via unix
domain sockets.  Then you've got a frameserver that sits around doing
nothing unless someone connects, and only then does it start taking
pictures...  Does this make sense?  Am I totally misunderstanding vic?
If it operates in modes I haven't used, I suppose I could be really
off! :-)


-matt

-- 
Matthew C. Mead  mmead@Glock.COM  http://www.Glock.COM/~mmead/