Date:      Tue, 30 Apr 1996 12:09:13 -0700
From:      Mitchell Erblich <merblich@ossi.com>
To:        freebsd-fs@freebsd.org, pvh@leftside.its.uct.ac.za, merblich@ossi.com, DARREND@novell.com
Subject:   Re: Compressing filesystem: Technical issues - Reply
Message-ID:  <199604301909.MAA07811@guacamole.ossi.com>

Darren,

	I would almost agree with you, but the fs only compresses items after
a specified time frame, which means the fs objects must first be accessed in
uncompressed format and are compressed only after they have sat on the disk.
Then there is the question of increased seeks between file blocks: if the
extra seeks drop our transfer rate from 40MB/sec to 5MB/sec, we had better
have an 8 to 1 compression ratio just to stay even for your assumption to
hold.
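
A quick back-of-the-envelope check of that break-even point (the 40MB/sec
and 5MB/sec figures above are illustrative, not measured):

    /*
     * If extra seeks cut raw throughput from 40 MB/s to 5 MB/s, the
     * compression ratio must make up the same factor just to deliver
     * the same effective (uncompressed) data rate.
     */
    #include <stdio.h>

    int
    main(void)
    {
        double contiguous = 40.0;   /* MB/s, sequential, uncompressed */
        double seeky = 5.0;         /* MB/s, fragmented, compressed */

        /* effective rate = on-disk rate * compression ratio */
        printf("break-even ratio: %.0f to 1\n", contiguous / seeky);
        return (0);
    }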

However, what we really need to see is what happens under the worst type of
access, random, where no read-ahead or write-behind algorithm can help us.
Once the fs object is stored in compressed format and is demand-accessed off
the disk, I believe the latency to reach the fs block will outweigh any
perceived gain in bandwidth from transferring less data (because it is in
compressed format).
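
To put rough numbers on that latency claim (every figure below is an
assumption, roughly a mid-90s disk, not a measurement):

    /*
     * One random read = seek + rotation + transfer.  Halving the
     * transfer with 2:1 compression barely dents the total, because
     * the fixed latency dominates.
     */
    #include <stdio.h>

    int
    main(void)
    {
        double seek_ms = 10.0, rot_ms = 4.0;    /* fixed per access */
        double media = 5.0 * 1024.0;            /* media rate, KB/s */
        double kb[2] = { 8.0, 4.0 };            /* raw, compressed block */
        int i;

        for (i = 0; i < 2; i++)
            printf("%.0f KB block: %.2f ms total\n", kb[i],
                seek_ms + rot_ms + kb[i] / media * 1000.0);
        return (0);
    }

With those numbers the compressed block saves well under 1ms of a roughly
15ms access, which is the whole point: latency swamps the transfer.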

Another item to consider is that a compressed fs block on disk is no longer
the same size as the uncompressed block in physical memory. That mismatch
just leads to more complication.
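
A minimal sketch of the extra bookkeeping that mismatch forces, assuming a
hypothetical per-block map (the names and layout are invented for
illustration, not taken from any real fs):

    /*
     * Each logical (uncompressed) block now needs its own on-disk
     * location and compressed length, because compressed sizes vary
     * from block to block.  Hypothetical structure.
     */
    #include <sys/types.h>

    struct cblk_map {
        daddr_t cb_daddr;       /* disk address of compressed data */
        u_short cb_clen;        /* compressed length in bytes */
        u_short cb_flags;       /* e.g. block stored uncompressed */
    };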

So, back to my original philosophy of contiguous files with extremely large
block sizes. If a fs object has a significant block size, a triple indirect
access can be avoided even in very large files. That eliminates a fs access
and is a speed improvement. Sun Microsystems did not implement a triple
indirect on SunOS 4.1.X for this reason.
(I have no idea what they are doing for 64bit fs accesses on their Ultras.)
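
For reference, the arithmetic behind that claim, assuming the classic
ffs-style layout of 12 direct pointers and 4-byte block addresses (my
assumption about SunOS's ufs, not something I have checked):

    /*
     * Bytes reachable through direct + single + double indirect
     * pointers, i.e. the file size at which a triple indirect would
     * first be needed.  Assumes 12 direct pointers, 4-byte addresses.
     */
    #include <stdio.h>

    int
    main(void)
    {
        double nptr, gb;
        long bs;

        for (bs = 4096; bs <= 65536; bs *= 4) {
            nptr = bs / 4.0;            /* block addresses per block */
            gb = (12 + nptr + nptr * nptr) * bs /
                (1024.0 * 1024.0 * 1024.0);
            printf("%ld byte blocks: ~%.0f GB before triple indirect\n",
                bs, gb);
        }
        return (0);
    }

With 4K blocks the third level is needed past about 4GB; with 64K blocks it
is pushed out past 16TB, which is why a large enough block size makes the
triple indirect moot.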

Mitchell Erblich : merblich@ossi.com
Fujitsu Open Systems Solutions, Inc.
Senior Software Engineer
PS: I speak for myself and not my company.
---------------------------------------------------------------------
 

> From owner-freebsd-fs@freefall.freebsd.org Mon Apr 29 14:19 PDT 1996
> Date: Mon, 29 Apr 1996 14:27:38 -0600
> From: DARREND@novell.com (Darren Davis)
> To: freebsd-fs@freebsd.org, pvh@leftside.its.uct.ac.za, merblich@ossi.com
> Subject: Re: Compressing filesystem: Technical issues - Reply
> X-Loop: FreeBSD.org
> 
> >>> Mitchell Erblich <merblich@ossi.com>  4/29 12:51pm >>>
> >Peter et al,
> >
> >	I would take into consideration what typical type of file would
> >	be compressed and what the benefits vs. the tradeoffs are.  Disks
> >	are already too slow; doesn't the overhead of just uncompressing
> >	the blocks on demand in a random access pattern add a delay to
> >	the fs object?  However, I will proceed with the assumption that
> >	this approach may have some merit.
> 
> Actually, systems tend to be I/O bound more than compute bound.
> By compressing a file, you are potentially trading I/Os for CPU
> cycles (a good tradeoff, I believe).  Your I/Os will be smaller, but
> the CPU must expend cycles to uncompress the data.  On some systems
> with fast CPUs I have seen an increase in system performance due to
> the smaller I/Os involved with compressed files.
> 
> Darren R. Davis
> Senior Software Engineer
> Novell, Inc.
> 
> 
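
On Darren's point above about trading I/Os for CPU cycles, the same
back-of-the-envelope style applies; every number below is an assumption
(a CPU fast for 1996 decompressing at 20 MB/s against a 5 MB/s disk,
2 to 1 compression), not a measurement:

    /*
     * Reading half as much and then decompressing wins whenever the
     * CPU's decompress rate beats the disk's transfer rate.  All
     * figures are assumptions, for illustration only.
     */
    #include <stdio.h>

    int
    main(void)
    {
        double disk = 5.0;      /* MB/s off the platter */
        double cpu = 20.0;      /* MB/s of decompressed output */
        double mb = 1.0;        /* uncompressed file size, MB */
        double ratio = 2.0;     /* assumed compression ratio */

        printf("raw read:        %.0f ms\n", mb / disk * 1000.0);
        printf("compressed read: %.0f ms\n",
            (mb / ratio / disk + mb / cpu) * 1000.0);
        return (0);
    }

That win still stands or falls on sequential access; under the random,
latency-bound loads discussed above, the fixed seek cost erases most of
the gain.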


