From owner-freebsd-current  Fri Feb  6 20:54:24 1998
Return-Path: <owner-freebsd-current@FreeBSD.ORG>
Received: (from majordom@localhost)
          by hub.freebsd.org (8.8.8/8.8.8) id UAA08503
          for current-outgoing; Fri, 6 Feb 1998 20:54:24 -0800 (PST)
          (envelope-from owner-freebsd-current@FreeBSD.ORG)
Received: from dingo.cdrom.com (dingo.cdrom.com [204.216.28.145])
          by hub.freebsd.org (8.8.8/8.8.8) with ESMTP id UAA08450;
          Fri, 6 Feb 1998 20:54:12 -0800 (PST)
          (envelope-from mike@dingo.cdrom.com)
Received: from dingo.cdrom.com (localhost [127.0.0.1])
          by dingo.cdrom.com (8.8.8/8.8.5) with ESMTP id UAA03722;
          Fri, 6 Feb 1998 20:54:00 -0800 (PST)
Message-Id: <199802070454.UAA03722@dingo.cdrom.com>
X-Mailer: exmh version 2.0zeta 7/24/97
To: Terry Lambert
cc: dyson@FreeBSD.ORG, mike@smith.net.au, abial@nask.pl,
    freebsd-current@FreeBSD.ORG, jkh@FreeBSD.ORG
Subject: Re: Custom init(8) (and some ideas)
In-reply-to: Your message of "Sat, 07 Feb 1998 04:34:09 GMT."
             <199802070434.VAA25379@usr05.primenet.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Date: Fri, 06 Feb 1998 20:53:59 -0800
From: Mike Smith
Sender: owner-freebsd-current@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.ORG
X-To-Unsubscribe: mail to majordomo@FreeBSD.org "unsubscribe current"

> What is the compression table flush boundary?  You *could* use the
> gzip'ped file as a swap store, *if* you created pages out of the
> file on the flush boundary, since you could re-decompress the needed
> pages by restarting from that point.

It's not quite that simple.  From /usr/src/lib/libz/algorithm.doc:

---8<---
     Literals or match lengths are compressed with one Huffman tree, and
match distances are compressed with another tree.  The trees are stored
in a compact form at the start of each block.  The blocks can have any
size (except that the compressed data for one block must fit in
available memory).  A block is terminated when deflate() determines that
it would be useful to start another block with fresh trees.  (This is
somewhat similar to the behavior of LZW-based _compress_.)
---8<---

You would have to scan the entire image to locate the compression
blocks, which would be a chore; a rough sketch of such a scan follows
below the signature.

> This would require (basically) a gzip-pager.  You would also need
> to make a map (probably an RLE 0/1 bitmap) to know how many full
> and partial pages each decompressed to, and handle the section
> boundaries (since they would probably not decompress to even page
> boundaries).

A table of region lengths would be more compact, but perhaps slower
to traverse.

This would be another one of those fun-but-distracting projects for a
relatively new kernel hacker.  8)

-- 
\\  Sometimes you're ahead,        \\  Mike Smith
\\  sometimes you're behind.       \\  mike@smith.net.au
\\  The race is long, and in the   \\  msmith@freebsd.org
\\  end it's only with yourself.   \\  msmith@cdrom.com
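[Editorial sketch of the "scan the entire image" step discussed above.
This is a minimal user-land illustration, not anyone's existing code: it
assumes a modern zlib that supports the Z_BLOCK flush value (the libz of
1998 did not expose it) and follows the approach of zlib's own zran.c
example; the names build_index() and access_point are hypothetical.
Knowing a block's offset is not enough by itself, because deflate
back-references can reach 32K into earlier output, so each restart point
would also need a copy of that window.]

/*
 * Sketch only: walk a gzip/zlib stream with a modern zlib and report
 * every deflate block boundary, after the fashion of zlib's zran.c
 * example.  build_index() and struct access_point are illustrative
 * names, not an existing FreeBSD or zlib interface.
 */
#include <sys/types.h>
#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define WINSIZE 32768U		/* deflate's back-reference window */
#define CHUNK	16384

/* What a real index entry would have to carry: restarting inflate at a
 * block boundary needs the prior 32K of output as well as the offset. */
struct access_point {
	off_t		out;		/* uncompressed offset */
	off_t		in;		/* compressed byte offset */
	int		bits;		/* unused bits in that last byte */
	unsigned char	window[WINSIZE];/* history for inflateSetDictionary */
};

static int
build_index(FILE *in)
{
	z_stream strm;
	unsigned char input[CHUNK], window[WINSIZE];
	off_t totin = 0, totout = 0;
	int ret;

	memset(&strm, 0, sizeof strm);
	ret = inflateInit2(&strm, 47);	/* auto-detect zlib or gzip header */
	if (ret != Z_OK)
		return (ret);

	do {
		strm.avail_in = fread(input, 1, CHUNK, in);
		if (ferror(in) || strm.avail_in == 0) {
			ret = Z_DATA_ERROR;
			break;
		}
		strm.next_in = input;
		do {
			/* Decompress into the sliding window so the last
			   32K of output is on hand at every boundary. */
			if (strm.avail_out == 0) {
				strm.avail_out = WINSIZE;
				strm.next_out = window;
			}
			totin += strm.avail_in;
			totout += strm.avail_out;
			ret = inflate(&strm, Z_BLOCK);	/* stop at block ends */
			totin -= strm.avail_in;
			totout -= strm.avail_out;
			if (ret == Z_NEED_DICT || ret == Z_MEM_ERROR ||
			    ret == Z_DATA_ERROR)
				goto done;
			if (ret == Z_STREAM_END)
				break;
			/* data_type bit 7 (128): inflate() stopped right
			   after an end-of-block code; bit 6 (64): we are
			   in the stream's final block; low 3 bits: unused
			   bits in the last input byte consumed. */
			if ((strm.data_type & 128) && !(strm.data_type & 64))
				printf("block end: out %lld  in %lld (bits %d)\n",
				    (long long)totout, (long long)totin,
				    strm.data_type & 7);
		} while (strm.avail_in != 0);
	} while (ret != Z_STREAM_END);
done:
	inflateEnd(&strm);
	return (ret == Z_STREAM_END ? Z_OK : ret);
}

[A "gzip-pager" built on this would save each reported point together
with its 32K window; on a fault it would seek to the nearest prior
point, restore state with inflatePrime() and inflateSetDictionary(), and
inflate forward to the wanted page.  If the image were instead written
with periodic Z_FULL_FLUSH calls, the resulting flush points need no
saved window at all, which is presumably the "flush boundary" the quoted
message has in mind.]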