Date:      Thu, 21 May 2009 01:06:19 -0700
From:      perryh@pluto.rain.com
To:        yuri@rawbw.com, neldredge@math.ucsd.edu
Cc:        freebsd-hackers@freebsd.org
Subject:   Re: Why kernel kills processes that run out of memory instead of just failing memory allocation system calls?
Message-ID:  <4a150b7b.kwnuIl++HgdJdRWU%perryh@pluto.rain.com>
In-Reply-To: <Pine.GSO.4.64.0905202344420.1483@zeno.ucsd.edu>
References:  <4A14F58F.8000801@rawbw.com> <Pine.GSO.4.64.0905202344420.1483@zeno.ucsd.edu>

Nate Eldredge <neldredge@math.ucsd.edu> wrote:
> For instance, consider the following program.
<snip>
> this happens most of the time with fork() ...

It may be worthwhile to point out that one extremely common case is
the shell itself.  Even /bin/sh is large; csh (the default FreeBSD
shell) is quite a bit larger, and bash larger yet.  The case of "big
program forks, and the child process execs a small program" arises
almost every time a shell command (other than a built-in) is executed.
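
To make the shape of that case concrete, here is a minimal sketch
(illustrative only, not from Nate's message; /bin/ls stands in for
an arbitrary small program, and error handling is trimmed):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();        /* duplicates the (possibly huge)
                                    * parent address space, COW      */
        if (pid == -1) {
            perror("fork");        /* without overcommit, ENOMEM
                                    * would show up right here       */
            exit(1);
        }
        if (pid == 0) {
            /* child: the copied address space is discarded almost
             * immediately by the exec                               */
            execl("/bin/ls", "ls", (char *)NULL);
            _exit(127);            /* reached only if exec fails     */
        }
        waitpid(pid, NULL, 0);     /* parent waits, as a shell does  */
        return 0;
    }

The child never writes to the copied pages before the exec replaces
them, so reserving the parent's full size for the child up front is
almost always wasted accounting.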

> With overcommit, we pretend to give the child a writable private
> copy of the buffer, in hopes that it won't actually use more of it
> than we can fulfill with physical memory.

I am about 99% sure that the issue involves virtual memory, not
physical, at least in the fork/exec case.  The tell is that the
incidence of such failures under any particular system load can be
reduced or eliminated simply by adding swap space, which enlarges
the virtual memory pool without adding a single byte of RAM.
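
To make that concrete, here is a toy version of the case Nate
describes (sizes are arbitrary and the program is illustrative only;
on a machine with ample virtual memory it simply succeeds):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    #define BUFSZ (512UL * 1024 * 1024)  /* 512 MB, arbitrary */

    int main(void)
    {
        char *buf = malloc(BUFSZ);
        if (buf == NULL) { perror("malloc"); return 1; }
        memset(buf, 1, BUFSZ);     /* touch every page, so the parent
                                    * really owns this much memory   */

        pid_t pid = fork();        /* with overcommit this succeeds
                                    * even if VM for a second full
                                    * copy is not available          */
        if (pid == -1) {
            perror("fork");        /* without overcommit, ENOMEM is
                                    * reported here instead          */
            return 1;
        }
        if (pid == 0) {
            memset(buf, 2, BUFSZ); /* child writes: each page must
                                    * now actually be copied, and
                                    * this is where the kernel may
                                    * have to kill a process         */
            _exit(0);
        }
        waitpid(pid, NULL, 0);
        return 0;
    }

Adding swap enlarges the pool those copied pages are charged
against, which is why it reduces the incidence of the kills.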


