Date: Fri, 22 May 2009 00:33:59 -0700
From: Alfred Perlstein <alfred@freebsd.org>
To: Yuri <yuri@rawbw.com>
Cc: Nate Eldredge <neldredge@math.ucsd.edu>, freebsd-hackers@freebsd.org
Subject: Re: Why kernel kills processes that run out of memory instead of just failing memory allocation system calls?
Message-ID: <20090522073359.GJ67847@elvis.mu.org>
In-Reply-To: <4A1594DA.2010707@rawbw.com>
References: <4A14F58F.8000801@rawbw.com> <Pine.GSO.4.64.0905202344420.1483@zeno.ucsd.edu> <4A1594DA.2010707@rawbw.com>
* Yuri <yuri@rawbw.com> [090521 10:52] wrote:
> Nate Eldredge wrote:
> > Suppose we run this program on a machine with just over 1 GB of
> > memory. The fork() should give the child a private "copy" of the
> > 1 GB buffer, by setting it to copy-on-write. In principle, after the
> > fork(), the child might want to rewrite the buffer, which would
> > require an additional 1 GB to be available for the child's copy. So
> > under a conservative allocation policy, the kernel would have to
> > reserve that extra 1 GB at the time of the fork(). Since it can't do
> > that on our hypothetical 1+ GB machine, the fork() must fail, and
> > the program won't work.
>
> I don't have a strong opinion for or against "memory overcommit". But
> I can imagine one could argue that fork with intent of exec is a
> faulty scenario that is a relic from the past. It could be replaced by
> some atomic method that would spawn the child without overcommitting.

vfork; however, that's not sufficient for many scenarios.

> Are there any situations other than fork (and mmap/sbrk) that would
> overcommit?

sysv shm? maybe more.

--
- Alfred Perlstein
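Nate's scenario can be sketched directly. The following is a minimal illustration, not the program from the original thread; the 1 GB size and the error handling are assumptions for demonstration. After the buffer is touched, the fork() marks it copy-on-write: under a strict (no-overcommit) accounting policy the kernel must reserve a second 1 GB at fork() time and may fail; under overcommit the fork() succeeds, and memory is only consumed page by page as the child writes.

	/*
	 * Sketch of the COW/overcommit scenario: allocate and dirty a large
	 * buffer, fork(), then have the child write to it.  Sizes are
	 * illustrative, not taken from the original program.
	 */
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/types.h>
	#include <sys/wait.h>

	#define BUFSIZE ((size_t)1 << 30)	/* 1 GB, as in Nate's example */

	int
	main(void)
	{
		char *buf = malloc(BUFSIZE);
		if (buf == NULL) {
			perror("malloc");
			return (1);
		}
		memset(buf, 'x', BUFSIZE);	/* touch every page so it is really backed */

		pid_t pid = fork();		/* child shares the pages copy-on-write */
		if (pid == -1) {
			/* Expected failure point under strict accounting. */
			perror("fork");
			return (1);
		}
		if (pid == 0) {
			buf[0] = 'y';		/* first write forces a private copy of that page */
			_exit(0);
		}
		waitpid(pid, NULL, 0);
		free(buf);
		return (0);
	}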
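Alfred's vfork() suggestion sidesteps the reservation entirely: the child borrows the parent's address space (the parent is suspended) until it calls execve() or _exit(), so no copy-on-write accounting is needed. A minimal sketch, assuming "/bin/true" as a placeholder command; the restriction that the child may do essentially nothing but exec is exactly why vfork() is "not sufficient for many scenarios":

	/*
	 * vfork()-then-exec sketch.  The child runs in the parent's address
	 * space, so only execl() or _exit() is safe before the exec; no
	 * malloc, no stdio, no returning from main().
	 */
	#include <stdio.h>
	#include <unistd.h>
	#include <sys/types.h>
	#include <sys/wait.h>

	int
	main(void)
	{
		pid_t pid = vfork();
		if (pid == -1) {
			perror("vfork");
			return (1);
		}
		if (pid == 0) {
			execl("/bin/true", "true", (char *)NULL);
			_exit(127);		/* reached only if the exec failed */
		}
		waitpid(pid, NULL, 0);
		return (0);
	}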