Date:      Mon, 4 Aug 1997 18:17:47 +1000
From:      David Dawes <dawes@rf900.physics.usyd.edu.au>
To:        Michael Smith <msmith@atrad.adelaide.edu.au>
Cc:        garbanzo@hooked.net, current@FreeBSD.ORG
Subject:   Re: Some thoughts and ideas, and quirks
Message-ID:  <19970804181747.18415@rf900.physics.usyd.edu.au>
In-Reply-To: <199708040744.RAA20987@genesis.atrad.adelaide.edu.au>; from Michael Smith on Mon, Aug 04, 1997 at 05:14:57PM +0930
References:  <Pine.BSF.3.96.970804002308.1311C-100000@zippy.dyn.ml.org> <199708040744.RAA20987@genesis.atrad.adelaide.edu.au>

On Mon, Aug 04, 1997 at 05:14:57PM +0930, Michael Smith wrote:
>Alex stands accused of saying:

>> > > more descriptive.  On the other hand, I noticed that the X packages
>> > > installed quicker (they're in .tgz format, not split-up tarballs) off my
>> > > FAT partition than the huge tarballs did, 100+ kB/s faster on average.
>> > 
>> > The speed reported by the installer is the rate at which the source
>> > file is read; it has nothing to do with whether the data is chunked or not.
>> 
>> Well, it was off the same local FAT16 partition, so it seems to me that
>> the installer is doing less work.  Could just be me. Those 240k chunks
>> still bug me.  They remind me of Slackware Linux *shudder*.
>
>It could just be that the bindist is compressed more, and your output
>stream is the limiting factor.  If you're using an IDE disk and a
>moderately fast CPU, this is not unrealistic.

I don't think it has anything to do with the tarballs being split.
I think the limiting factor is the average size of the files being
created during the install.  A smaller average file size means more
files are created per kilobyte written, and the per-file creation
overhead is what slows things down.  Anyway, that's the correlation
I've noticed when installing packages.
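
Not from the original thread, but as a rough illustration of that
per-file overhead, here is a minimal C sketch that writes the same
total number of bytes first as many small files and then as a few
large ones, and reports the throughput of each.  The file counts and
sizes are arbitrary assumptions, picked only to make the effect visible.

/*
 * Minimal sketch: compare creating many small files vs. few large
 * files with the same total bytes, to show per-file creation overhead.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

static double elapsed(struct timeval a, struct timeval b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_usec - a.tv_usec) / 1e6;
}

/* Create 'count' files of 'size' bytes each, then remove them. */
static void run(const char *tag, int count, size_t size)
{
    char *buf = malloc(size);
    char name[64];
    struct timeval t0, t1;
    int i, fd;

    memset(buf, 'x', size);
    gettimeofday(&t0, NULL);
    for (i = 0; i < count; i++) {
        snprintf(name, sizeof(name), "tmp.%s.%d", tag, i);
        fd = open(name, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); exit(1); }
        if (write(fd, buf, size) != (ssize_t)size) { perror("write"); exit(1); }
        close(fd);
        unlink(name);
    }
    gettimeofday(&t1, NULL);
    printf("%s: %d files x %lu bytes in %.2fs (%.0f kB/s)\n",
           tag, count, (unsigned long)size, elapsed(t0, t1),
           count * size / 1024.0 / elapsed(t0, t1));
    free(buf);
}

int main(void)
{
    /* Same total bytes (16 MB), different file counts. */
    run("small", 4096, 4 * 1024);    /* many 4 kB files */
    run("large", 16, 1024 * 1024);   /* few 1 MB files  */
    return 0;
}

On most systems the "small" pass reports a noticeably lower kB/s figure,
which is the same pattern as installing a package full of small files.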

David


