Date:      Fri, 04 Aug 1995 14:40:19 -0700
From:      David Greenman <davidg@Root.COM>
To:        jiho@sierra.net
Cc:        freebsd-questions@freefall.cdrom.com
Subject:   Re: 2.0.5 Eager to go into swap 
Message-ID:  <199508042140.OAA01991@corbin.Root.COM>
In-Reply-To: Your message of "Fri, 04 Aug 95 13:27:44 -0800." <199508042124.AA04253@diamond.sierra.net> 

>>    We use the Berkeley malloc by default which causes power of 2 allocations
>> to allocate twice as much memory as is needed. It's a function of its design -
>> it takes a few bytes more than it needs for the allocation, and the allocation
>> buckets are power of 2. So a request for a power of 2 amount causes the
>> allocation to fall into the next bucket (which is twice as large).
>
>Now here's a point even I can understand.  Thank you for the 
>explanation.  And it's easy to fix--just use a different malloc().
>
>This seems to be another example of how BSD was optimized for the VAX 
>as a multiuser system.  About the only way to speed a VAX up was to 
>add RAM, so whenever Berkeley had a tradeoff to make of speed versus 
>space, they optimized for speed (however trivial it may seem now) at 
>the expense of RAM space.

   Actually, I think it is far more a matter of simply not considering that
many applications request power-of-two sizes. The "extra bytes" could just as
easily have been allocated separately.
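
   As a rough sketch of the effect (this is not the Berkeley malloc source;
the header size and smallest bucket below are assumed for illustration):

	#include <stdio.h>

	/* Rough model of the bucket choice: a small per-chunk header is
	 * added to the request and the sum is rounded up to the next
	 * power-of-two bucket.  Header size and minimum bucket are
	 * assumptions, not the real allocator's values. */
	static unsigned long
	bucket_size(unsigned long request)
	{
		unsigned long overhead = 8;	/* assumed per-chunk header */
		unsigned long bucket = 16;	/* assumed smallest bucket */

		while (bucket < request + overhead)
			bucket <<= 1;
		return (bucket);
	}

	int
	main(void)
	{
		/* 4096 + header no longer fits the 4096 bucket, so the
		 * allocation falls into the 8192 bucket - twice the
		 * requested size. */
		printf("malloc(4096) occupies a %lu-byte bucket\n",
		    bucket_size(4096));
		return (0);
	}

So a request for exactly 4096 bytes ends up consuming 8192.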

>>    It is essentially blind. The policy of which pages to reclaim is based on
>> frequency of page usage, not on how many people have it mapped.
>
>This I still don't quite understand.  I know what you mean here, once 
>the decision is made to swap the swapper doesn't care whether a given 
>page is shared or not, and there's no reason why it should.  That 
>makes perfect sense.
>
>I was thinking of the higher-level decision to swap at all, based on how
>many total pages are physically occupied.  How is this decision reached?

   It happens when the system runs out of free memory. Note that we use most
free pages for file data caching, and that file I/O can cause a very small
amount of paging - but this is necessary to get rid of infrequently used pages.
The goal is to reduce disk I/O as much as possible, and if this means caching
file data instead of VM pages, then that's what we do. The algorithm is heavily
weighted toward keeping VM pages, however, and reclaims them for file caching
only slowly. This prevents large amounts of file I/O from causing excessive
paging.
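
   As a toy model of that policy (invented names and thresholds, not the
actual pageout code):

	#include <stdio.h>

	/* Toy model of the reclaim policy described above - every name
	 * and number here is invented for illustration. */

	#define FREE_TARGET	128	/* assumed free-page target */
	#define VM_PAGE_WEIGHT	4	/* assumed bias toward VM pages */

	struct page {
		int recent_use;		/* frequency of recent use */
		int is_file_cache;	/* file-cache page vs. VM page */
	};

	/* The keep score is based on how often the page is used, not on
	 * how many processes map it.  VM pages are weighted so heavy
	 * file I/O reclaims cache pages long before program pages. */
	static int
	keep_score(const struct page *p)
	{
		int score = p->recent_use;

		if (!p->is_file_cache)
			score *= VM_PAGE_WEIGHT;
		return (score);
	}

	int
	main(void)
	{
		struct page file_page = { 2, 1 };  /* lightly used cache */
		struct page vm_page   = { 2, 0 };  /* equally used VM page */
		int free_pages = 64;		   /* pretend we're short */

		if (free_pages < FREE_TARGET)	/* scan only when short */
			printf("reclaim the %s first\n",
			    keep_score(&file_page) < keep_score(&vm_page) ?
			    "file cache page" : "VM page");
		return (0);
	}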

>The issue arose because in looking at the "run time set size" figures 

   RSS stands for "resident set size". The RSS of each process counts all of
its resident pages - shared or not - so a shared page is counted once in
every process that maps it. This makes the number useless for any sort of
precise estimation of memory usage.
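
   A small demonstration of why (assuming 4.4BSD-style getrusage() semantics,
where ru_maxrss is reported in kilobytes): after a fork() the two processes
share nearly all of their pages copy-on-write, yet each reports a full-sized
RSS, so summing the two figures double-counts almost everything:

	#include <sys/time.h>
	#include <sys/resource.h>
	#include <sys/wait.h>
	#include <stdio.h>
	#include <unistd.h>

	int
	main(void)
	{
		struct rusage ru;
		pid_t pid = fork();

		if (pid < 0)
			return (1);		/* fork failed */
		getrusage(RUSAGE_SELF, &ru);
		/* Both processes print a full-sized RSS even though
		 * nearly all of their pages are shared. */
		printf("%s RSS: %ld KB\n", pid ? "parent" : "child",
		    (long)ru.ru_maxrss);
		if (pid)
			wait(NULL);
		return (0);
	}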

-DG


