Date: Sat, 8 Nov 1997 04:36:38 +0000 (GMT)
From: Andrew Gordon <arg@arg1.demon.co.uk>
To: Luigi Rizzo <luigi@labinfo.iet.unipi.it>
Cc: multimedia@FreeBSD.ORG
Subject: Re: Teletext decoding with the Hauppauge...
Message-ID: <Pine.BSF.3.91.971108033931.16565C-100000@server.arg.sj.co.uk>
In-Reply-To: <199711071724.SAA27226@labinfo.iet.unipi.it>
On Fri, 7 Nov 1997, Luigi Rizzo wrote:

> > > The SAA5246 can capture up to 8 pages simultaneously,
> >
> > Actually, only 4 pages if you want the Fastext links and page CRC.
>
> Right. In fact, since you seem to know more than me on this, do
> you know exactly how big the RAM on the Hauppauge card is?  From
> the name, it really seems a 32K or 64K chip, and I was curious if
> the additional memory was used in some way.

My card is stuck in a machine with a lid on just now, but the SAA5249
can only address 8Kbyte, and from what I remember of the card, the RAM
is just connected to the SAA5249 in a straightforward manner.  Note
that 8Kbyte == 64Kbit, so this probably agrees with the markings you
saw.

The memory is used rather inefficiently if reception of the extension
packets is enabled - although there is typically only one row 27
transmitted these days (because TV manufacturers have decided on 4
coloured buttons for Fastext links - when I was first working on this
stuff back in 1982, the BBC had in mind alphabetic keyboards and 26
links labelled A to Z), the SAA5246 allocates space for four row 27s
and 14 row 26s, plus row 30, which isn't actually part of the page at
all.

> Probably it will be necessary to keep the raw data in (virtual)
> memory, doing only the minimum amount of work which is necessary
> to identify the pages, and do the conversion to GIF only on demand.

Certainly it would be very wasteful to convert all pages to GIF for
storage (the GIF is typically 10 to 20 times bigger than the raw page
data).

> Also it might be necessary to implement something like a
> log-structure for holding pages so as to avoid random accesses to
> the memory space..

At the moment, I am assuming you have enough real memory for all the
pages (about 3Mbyte per channel, I believe).  I mmap() the file
containing the page data and just write into it, with an index at the
front to locate pages.
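In case it helps, the mmap()ed-file-with-index arrangement could look
something like the sketch below.  This is not my actual code - the
structure names, the fixed 25x40 page size, and the one-slot-per-page
layout are all assumptions for illustration:

```c
/* Sketch of an mmap()ed page store with an index at the front.
 * Assumes a Level-1 teletext page of 25 rows x 40 columns and a
 * simple page-number -> slot index; subpages are ignored here. */
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define ROWS   25
#define COLS   40
#define NPAGES 0x800           /* page numbers 0x100..0x8FF */

struct tt_page {
    uint8_t row[ROWS][COLS];   /* raw page data, 1000 bytes */
};

struct tt_store {
    uint32_t slot[NPAGES];     /* index: page number -> slot, 0 = absent */
    struct tt_page page[NPAGES];
};

/* Map (creating if necessary) the store file; NULL on failure. */
static struct tt_store *store_open(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, sizeof(struct tt_store)) < 0) {
        close(fd);
        return NULL;
    }
    void *p = mmap(NULL, sizeof(struct tt_store),
                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                 /* the mapping stays valid after close */
    return p == MAP_FAILED ? NULL : (struct tt_store *)p;
}
```

With MAP_SHARED, writes into the mapping end up in the file without any
explicit write() calls, which is what makes "just write into it" cheap.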
One possibility would be to hold a checksum/signature for each page in
the index, so that you could detect pages that haven't changed and so
avoid touching the memory containing that page image.  This would
trade CPU cycles against memory usage (unless you consider the page
checksum good enough for this purpose - but it's only 16 bits, and I
don't think that gives enough uniqueness to be safe).

> I started thinking of this teletext problem as a simple thing,
> but it is really tricky if you want to do it with high efficiency!

Yes, and there are also some interesting issues with error handling:

If you have an old page with a good checksum and a newly-received page
with a bad checksum, which one do you keep?  Especially if the 'good'
one is very old?

What do you do if you get a row with uncorrectable errors in the MRAG?
Aborting all current page receptions is the 'safe' thing to do,
avoiding the behaviour I sometimes see on my TV, where the displayed
page contains some rows from one page and some rows from another (due
to row 0 having been corrupted by noise), but this would probably give
you very few pages at all if reception conditions were poor.  If you
just throw away the bad row, you run the risk of mixed/wrong pages
being displayed.  Alternatively, you could keep track of which rows
you have received, discard duplicate receptions of the same row, and
possibly piece together a good page from two separate receptions in
each of which some of the rows got corrupted.  But that is a lot of
processing to do, and only worth the bother if reception is very
poor...
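To make the signature idea concrete, here is one way it might look.
FNV-1a and the 32-bit width are my choices purely for illustration
(the message only says "checksum/signature", noting that the 16-bit
transmitted CRC is probably too weak); the function names are made up:

```c
/* Sketch: keep a hash of each page in the index and only touch the
 * stored page memory when a fresh reception actually differs. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PAGE_BYTES (25 * 40)   /* assumed raw page size: 25 rows x 40 cols */

/* 32-bit FNV-1a over the raw page bytes. */
static uint32_t page_sig(const uint8_t *data, size_t len)
{
    uint32_t h = 2166136261u;          /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 16777619u;                /* FNV prime */
    }
    return h;
}

/* Store a newly received page only if its content changed.
 * Returns 1 if the stored copy was updated, 0 if left untouched. */
static int page_update(uint8_t *stored, uint32_t *stored_sig,
                       const uint8_t *fresh)
{
    uint32_t sig = page_sig(fresh, PAGE_BYTES);
    if (sig == *stored_sig)
        return 0;                      /* same content: don't dirty memory */
    memcpy(stored, fresh, PAGE_BYTES);
    *stored_sig = sig;
    return 1;
}
```

The hash costs one pass over 1000 bytes per reception, but saves the
memcpy (and the resulting dirty pages in the mmap()ed file) for the
common case where a page is retransmitted unchanged.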