Date:      Sat, 11 Sep 2004 00:02:11 +0200
From:      Willem Jan Withagen <wjw@withagen.nl>
To:        Geert Hendrickx <geert.hendrickx@ua.ac.be>
Cc:        oceanare@pacific.net.sg
Subject:   Re: spreading partitions over multiple drives
Message-ID:  <41422463.9090303@withagen.nl>
In-Reply-To: <20040910160051.GA24152@lori.mine.nu>
References:  <20040831133551.GA86660@lori.mine.nu> <4134B312.8030309@pacific.net.sg> <1093958674.680.2.camel@book> <20040831183908.GA87694@lori.mine.nu> <4141AA6A.2070802@withagen.nl> <20040910160051.GA24152@lori.mine.nu>

Geert Hendrickx wrote:

>On Fri, Sep 10, 2004 at 03:21:46PM +0200, Willem Jan Withagen wrote:
>
>>I would expect a bigger system to cache just about all file access
>>during 'make buildworld'.
>>Even when building with -j 64, I cannot get my dual-Opteron 1 GB
>>system to run out of free pages.
>>As such, most files will only be read once, and object output will be
>>"slowly" synced to the disks.
>>Disk I/O rarely becomes the bottleneck; most of the time I'm missing
>>raw CPU cycles.
>>And I have everything on one large 200 GB disk.
>
>Ok, so adding more RAM may be more useful than an extra hard disk?  Maybe
>I could even put /tmp or /usr/obj on a RAM disk?  A fully built /usr/obj
>is about 350 MB.
>
It would be wise to consider these things as rather different. Sizing
memory used to be black magic in the days of Sun file and/or work
servers, and I guess it has not changed all that much.

Since there is now a unified buffer cache, the distinction between the
two has sort of faded, but in general you need as much memory as your
concurrent applications require: sum the size of the working set of
each application. That way you prevent swapping and/or paging as much
as possible.
New in the equation is that the same memory can now also hold the file
cache. (This used to be a hard, configurable limit, set before or while
booting and not changed after that.)
So you'll need extra memory for that as well.
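
If you want to see where the memory actually goes, something roughly
like the following works on a 5.x box (treat it as a sketch; the exact
sysctl names may differ a bit between versions):

   # total physical memory, in bytes
   sysctl hw.physmem
   # memory currently held by the buffer cache, in bytes
   sysctl vfs.bufspace
   # the "Mem:" line shows Active/Inact/Wired/Cache/Buf/Free
   top -b | head

Watch those numbers during a 'make buildworld' and you can see how much
of the otherwise "free" memory is really being used as file cache.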

How much? Again, that depends on your concurrent applications and how
frequently they return to read/write certain files. Binaries used to
have something like a sticky bit, so that the OS would know that an
executable was in high demand, e.g. ls(1) on a multi-user server, and
would thus keep it a little longer in the file cache.

Memory is cheap; the other day I paid something like < 100 Euro for
512 MB. I chose to buy more memory rather than faster processors. The
next thing I would do is increase the number of spindles (i.e. disks),
especially on boxes that see lots of access from different clients for
very different things.
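
As for putting /tmp or /usr/obj on a RAM-disk: on 5.x you could use
mdmfs(8) for that, roughly like below. The 512m is just an illustration
for your ~350 MB /usr/obj, and note that every MB you give it comes out
of the same memory the buffer cache would otherwise use:

   # mount a swap-backed 512 MB memory filesystem on /usr/obj
   mdmfs -s 512m md /usr/obj
   # or via /etc/fstab (check mdmfs(8) for the exact syntax):
   # md   /usr/obj   mfs   rw,-s512m   0   0

Whether that really buys you much over simply letting the unified cache
do its work is another question; with enough RAM the effect is much the
same.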

--WjW


