Date:      Tue, 27 Feb 2001 17:28:56 +0900
From:      "Daniel C. Sobral" <dcs@newsguy.com>
To:        Matt Dillon <dillon@earth.backplane.com>
Cc:        Archie Cobbs <archie@dellroad.org>, Warner Losh <imp@village.org>, Peter Seebach <seebs@plethora.net>, freebsd-hackers@FreeBSD.ORG
Subject:   Re: Setting memory allocators for library functions.
Message-ID:  <3A9B6548.E298F857@newsguy.com>
References:  <200102260529.f1Q5T8413011@curve.dellroad.org> <200102260628.f1Q6SYX29811@earth.backplane.com> <3A9A0A9A.E4D31F97@newsguy.com> <200102261755.f1QHtvr34064@earth.backplane.com> <3A9AAB02.793A197A@newsguy.com> <200102261940.f1QJeJi38115@earth.backplane.com>

Matt Dillon wrote:
> 
>     Said application was poorly written, then.  Even on solaris if you

The only reason the application was "poorly written" is the overcommit
architecture.

>     actually run the system out of memory you can blow up other unrelated
>     processes.  To depend on that sort of operation is just plain dumb.

Not at all. You can fill all memory on Solaris and it will work just
fine. Go ahead and try it, if you doubt me.
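
A minimal sketch of that experiment in C (the 1 MB chunk size and the
page-touching memset are my own arbitrary choices):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(void)
{
    size_t chunk = 1 << 20;     /* 1 MB per request; arbitrary */
    size_t total = 0;
    void *p;

    while ((p = malloc(chunk)) != NULL) {
        memset(p, 0xa5, chunk); /* touch every page: defeat lazy commit */
        total += chunk;         /* leak on purpose; we probe the limit */
    }
    printf("malloc failed cleanly after %lu MB\n",
        (unsigned long)(total >> 20));
    return (0);
}

On a strictly committing kernel the loop ends with malloc() returning
NULL and the printf runs; with overcommit, touching the pages can
instead get the process killed once swap is exhausted.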

> :I'll give you one more example. Protocol validation. It is often
> :impossible to test all possible permutations of a protocol's dialog, but
> :being able to go as deep as possible on execution tree and then, when
> :you go out of memory, giving up on that path, backing down and
> :continuing elsewhere let you get a partial validation, which is not
> :enough to prove a protocol is correct but might well be enough to prove
> :it is incorrect. This is a real application, and one in which an out of
> :memory condition is not only handled but even expected.
> 
>     This has nothing to do with memory overcommit.  Nothing at all.  What
>     is your definition of out-of-memory?  When swap runs out, or when the
>     system starts to thrash?  What is the point of running a scientific

When a memory request cannot be satisfied. Swap runs out, it would seem.

>     calculation if the machine turns into a sludge pile and would otherwise
>     cause the calculation to take years to complete instead of days?

It doesn't thrash. The memory is filled with backtracking information;
the memory in active use at any one time is rather small.
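
Sketched in C, the validator's inner loop is roughly the following;
struct state, expand() and violates_spec() are hypothetical stand-ins
for a real protocol model:

#include <stdlib.h>

struct state {
    int depth;                  /* stand-in for real protocol state */
};

/* Hypothetical model: each state has two successors up to depth 20. */
static int
expand(const struct state *s, struct state *kids, int max)
{
    int n = 0;

    if (s->depth < 20 && max >= 2) {
        kids[n].depth = s->depth + 1; n++;
        kids[n].depth = s->depth + 1; n++;
    }
    return (n);
}

static int
violates_spec(const struct state *s)
{
    (void)s;
    return (0);                 /* a real checker tests the spec here */
}

/*
 * Depth-first exploration.  Returns -1 if the protocol is proven
 * incorrect; 0 means no violation found on this (possibly truncated)
 * subtree.  An allocation failure is not an error: we give up on the
 * path, back out, and continue elsewhere.
 */
static int
explore(const struct state *s)
{
    struct state *kids;
    int i, n, rv = 0;

    if ((kids = malloc(64 * sizeof(*kids))) == NULL)
        return (0);             /* out of memory: abandon this path */
    n = expand(s, kids, 64);
    for (i = 0; i < n && rv == 0; i++)
        if (violates_spec(&kids[i]) || explore(&kids[i]) < 0)
            rv = -1;            /* counterexample found */
    free(kids);
    return (rv);
}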

>     You've got a whole lot more issues to deal with then simple memory
>     overcommit, and you are ignoring them completely.

Not at all. I'm giving you an example of an application which depends on
non-overcommitting memory and _works_ on such architectures.

> :And, of course, those whose infrastructure depends on a malloc()
> :returning NULL indicating the heap is full will not work on FreeBSD.
> :(<sarcasm>You do recall that many of these languages are written in C,
> :don't you?</sarcasm>)
> 
>     Bullshit.  If you care, a simple wrapper will do what you want.  Modern
>     systems tend to have huge amounts of swap.  Depending on malloc to

Huge amounts of swap are not a given. You are assuming a hardware setup
that fits your theory.
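
That said, the "simple wrapper" would look something like this sketch,
with a self-imposed budget standing in for the commit limit (the 64 MB
figure is arbitrary, and a real version would also need an xfree() that
credits the budget back, which means remembering each allocation's
size):

#include <stdlib.h>

static size_t budget = 64UL << 20; /* self-imposed limit; arbitrary */
static size_t used;

void *
xmalloc(size_t n)
{
    void *p;

    if (n > budget - used)
        return (NULL);          /* "heap full" by our own accounting */
    if ((p = malloc(n)) != NULL)
        used += n;
    return (p);
}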

>     fail with unbounded resources in an overcommit OR a non-overcommit case
>     is stupid, because the system will be thrashing heavily long before it
>     even gets to that point.

Allocating memory does not thrash the system. A rather large number of
pages in active use does, and that is not necessarily the case here at
all.

>     Depending on malloc() to fail by setting an appropriate datasize limit
>     resource is more reasonable, and malloc() does work as expected if you
>     do that.

I completely agree that setting a datasize limit is more reasonable, but
that does not prevent an application from being killed if the system
does run out of memory.

I think that if the system runs out of memory, you don't have enough
memory, and that datasize limits must be used to ensure the desired
behavior. But this is a _preference_. On Solaris, depending on memory
not being overcommitted is possible, and some do prefer it that way.
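
For completeness, the datasize-limit approach looks like this in code
(32 MB is an arbitrary figure; note that an allocator which serves
large requests through mmap() rather than brk() may not be bounded by
RLIMIT_DATA on every system):

#include <sys/types.h>
#include <sys/resource.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    struct rlimit rl;
    void *p;

    rl.rlim_cur = rl.rlim_max = 32UL << 20; /* 32 MB; arbitrary */
    if (setrlimit(RLIMIT_DATA, &rl) == -1) {
        perror("setrlimit");
        return (1);
    }
    p = malloc(64UL << 20);     /* deliberately bigger than the limit */
    printf("64 MB request %s\n", p == NULL ? "failed, as intended" :
        "succeeded (allocator went through mmap, not brk)");
    return (0);
}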

> :It has everything to do with overcommit. In this particular case, not
> :only there _is_ something to do when the out of memory condition arise,
> :but the very algorithm depends on it arising.
> 
>     It has nothing to do with overcommit.  You are confusing overcommit
>     with hard-datasize limits, which can be set with a simple 'limit'
>     command.

Unless I want it to grab all available memory.

> :
> :Garbage Collection: Algorithms for Automatic Memory Management, Richard
> :Jones and Rafael Lins. Bullshit is what you just said.
> 
>     None of which requires overcommit.  None of which would actually
>     work in a real-world situation with or without overcommit if you do
>     not hard-limit the memory resource for the program in the first place.

If you ever bother to check the reference, you'll see that many of these
algorithms were implemented and used in real-world systems.

>     You are again making the mistake of assuming that not having overcommit
>     will magically solve all your problems.  It doesn't even come close.

No, *YOU* keep insisting that we assume that. The assumption is rather
different: *with* overcommit these problems *cannot* be solved (except
by using a datasize limit, which I think is entirely reasonable, but
some don't).

>     You think these garbage collection algorithms work by running the
>     system out of VM and then backing off?  That's pure nonsense.

I don't "think" anything. I'm reporting facts. Many algorithms do work
that way, whether you think they are non-sense or not.
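
The pattern in question, as the Jones & Lins book presents it, reduces
to something like this sketch; collect() is a placeholder for a real
mark-and-sweep:

#include <stdlib.h>

static void
collect(void)
{
    /* Hypothetical hook: a real collector marks live cells and
     * sweeps the rest, returning memory to the allocator. */
}

void *
gc_alloc(size_t n)
{
    void *p;

    if ((p = malloc(n)) == NULL) { /* exhaustion triggers collection */
        collect();
        p = malloc(n);          /* retry once after collecting */
    }
    return (p);                 /* NULL only if truly out of memory */
}

Note that with overcommit the first malloc() may never return NULL: the
process gets killed before the trigger fires, which is exactly the
problem.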

-- 
Daniel C. Sobral			(8-DCS)
dcs@newsguy.com
dcs@freebsd.org
capo@kzinti.bsdconspiracy.net

	Art equestrianism is over. But the most-heard Brazilian excuse in
	Sydney is that there are no more dumb horses around.
