Date: Thu, 28 May 2009 14:30:17 -0700
From: Alfred Perlstein <alfred@freebsd.org>
To: Dag-Erling Smørgrav <des@des.no>
Cc: Nate Eldredge <neldredge@math.ucsd.edu>, yuri@rawbw.com, freebsd-hackers@freebsd.org
Subject: Re: Why kernel kills processes that run out of memory instead of just failing memory allocation system calls?
Message-ID: <20090528213017.GX67847@elvis.mu.org>
In-Reply-To: <86ljoig08o.fsf@ds4.des.no>
References: <4A14F58F.8000801@rawbw.com> <Pine.GSO.4.64.0905202344420.1483@zeno.ucsd.edu> <4A1594DA.2010707@rawbw.com> <86ljoig08o.fsf@ds4.des.no>
* Dag-Erling Smørgrav <des@des.no> [090527 06:10] wrote:
> Yuri <yuri@rawbw.com> writes:
> > I don't have a strong opinion for or against "memory overcommit". But I
> > can imagine one could argue that fork with intent to exec is a faulty
> > scenario that is a relic of the past. It could be replaced by some
> > atomic method that would spawn the child without overcommitting.
>
> You will very rarely see something like this:
>
> if ((pid = fork()) == 0) {
>         execve(path, argv, envp);
>         _exit(1);
> }
>
> Usually, what you see is closer to this:
>
> if ((pid = fork()) == 0) {
>         for (int fd = 3; fd < getdtablesize(); ++fd)
>                 (void)close(fd);
>         execve(path, argv, envp);
>         _exit(1);
> }
I'm probably missing something, but couldn't you iterate in the parent,
setting the close-on-exec flag on each descriptor, and then vfork?  I
guess that wouldn't work with threads, AND you'd have to undo it after
the fork if you didn't want to retain that behavior?
thanks,
-Alfred
