Date:      Mon, 3 Apr 95 15:58:05 MDT
From:      terry@cs.weber.edu (Terry Lambert)
To:        nate@trout.sri.MT.net (Nate Williams)
Cc:        davidg@Root.COM, jkh@violet.berkeley.edu, hackers@FreeBSD.org
Subject:   Re: any interest?
Message-ID:  <9504032158.AA08980@cs.weber.edu>
In-Reply-To: <199504031749.LAA02762@trout.sri.MT.net> from "Nate Williams" at Apr 3, 95 11:49:07 am

> > Remember that with an overcommit architecture, failure to acquire
> > needed swap means some process dies, and it's not necessarily
> > the process that caused you to run out; it's pretty much any
> > process (that's actually doing something) at random.
> 
> Not usually.  Almost always the process that gets wiped out is the
> process which is growing constantly or one that was just started.  Only
> in rare cases is it a long running system process you don't want wiped
> out.
> 
> Before you go off and start arguing about it, this statement is made
> from *experience*, so I can say with some assurance that I believe it to
> be true no matter what you try to say otherwise.  Experience never lies.

I'd argue that killing "the process which is growing constantly" (I don't
happen to keep that type of thing around -- policy of mine) is pretty
much more random than killing the process that was just started and ate
the last of the swap.

From *experience* on AIX (and on a FreeBSD system where processes which
grow constantly are administratively prohibited from running 8-)), the
process that gets killed is either the process you are trying to start,
killed immediately because it could not get sufficient data space,
OR *any* poor process that happens to trigger a copy-on-write fault (any
process that fork()s but delays or skips the exec() could do this without
"growing constantly").

Typically, on AIX, it's sub-shells for shell scripts, and (for me) the
"ps" command as I try to find processes with large images to murder
manually before something I care about that uses fork() to do its
thing (like AMD or inetd) dies.

Without a large statistical set to predict from, this is more or less the
same thing as "random".  It reminds me of the Berkeley backgammon program
that cheated: in the limit, incredible skill and incredible luck are
functionally equivalent, so the programmer opted to write in "incredible
luck" because it was the less difficult task... just as my use of
"random" is less difficult than specifying exactly what will be killed,
and when, for a particular configuration.


					Terry Lambert
					terry@cs.weber.edu
---
Any opinions in this posting are my own and not those of my present
or previous employers.


