Date:      Sat, 23 Jan 1999 10:16:05 +0000 (GMT)
From:      Terry Lambert <tlambert@primenet.com>
To:        dillon@apollo.backplane.com (Matthew Dillon)
Cc:        tlambert@primenet.com, dyson@iquest.net, hackers@FreeBSD.ORG
Subject:   Re: Error in vm_fault change
Message-ID:  <199901231016.DAA14533@usr08.primenet.com>
In-Reply-To: <199901230601.WAA36792@apollo.backplane.com> from "Matthew Dillon" at Jan 22, 99 10:01:08 pm

>     I've reenabled John's low memory code in vm_fault, and commented
>     it as best as I could.  But I don't like it at all.
> 
>     Terry's idea about recycling pages within a vnode ( I think he 
>     meant vm_object ) is an interesting one, but I don't know how
>     we would determine the point at which a vm_object has too many 
>     resident pages.

I meant the vm_object_t associated with the vnode, with the limit
enforced by an addition to the vnode pager.

This is very file oriented.  In general, the largest VM objects you
will have to deal with are file mappings.
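
To make the idea concrete, here is a minimal user-space sketch of the
policy; every structure and name in it is a hypothetical stand-in (the
real vm_object/vm_page/vnode pager interfaces look nothing like this).
The only point it shows: once an object hits its resident-page cap, a
new page for it comes from recycling the oldest page the object
already owns, not from the global free list.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SZ 4096

/* Hypothetical stand-ins; not the real vm_object/vm_page layout. */
struct obj_page {
    struct obj_page *next;          /* resident list, oldest first */
    char             data[PAGE_SZ];
};

struct file_obj {
    struct obj_page *resident;      /* head == oldest resident page */
    struct obj_page *resident_tail;
    int              nresident;     /* pages currently resident */
    int              cap;           /* per-object cap, the thing the
                                     * (vnode) pager would enforce */
};

/*
 * Get a page for 'o'.  At or over the cap, recycle the oldest page
 * the object already owns (a dirty page would have to be cleaned
 * first in the real thing) instead of growing its resident set.
 */
static struct obj_page *
obj_page_alloc(struct file_obj *o)
{
    struct obj_page *p;

    if (o->cap > 0 && o->nresident >= o->cap && o->resident != NULL) {
        p = o->resident;                /* recycle within the object */
        o->resident = p->next;
        if (o->resident == NULL)
            o->resident_tail = NULL;
        memset(p->data, 0, sizeof(p->data));
    } else {
        p = calloc(1, sizeof(*p));      /* ordinary global allocation */
        if (p == NULL)
            return (NULL);
        o->nresident++;
    }
    p->next = NULL;                     /* append as newest resident page */
    if (o->resident_tail != NULL)
        o->resident_tail->next = p;
    else
        o->resident = p;
    o->resident_tail = p;
    return (p);
}

int
main(void)
{
    struct file_obj o = { NULL, NULL, 0, 4 };   /* cap of 4 pages */
    int i;

    for (i = 0; i < 16; i++)
        (void)obj_page_alloc(&o);
    printf("resident pages for the object: %d\n", o.nresident);
    return (0);
}

No matter how many faults the object takes, it never holds more than
its cap of resident pages; everything past that churns within the
object itself.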

You could also limit dirty data mappings the same way; however, this
leads to a couple of nasty problems.  The first is that if a dirty
page is not in core, it has to be in swap; the act of forcing it
out of core forces it into swap.  The second is that there is really
no choke point at which a callback on a new data page allocation can
result in a dirty data page being forced to swap.  The COW-based
implementation is mildly difficult, needing to upcall across what
are, effectively, two stack frames (function calls, each with context),
but even if you did this, the new data page allocation issue is a bear.
It would involve, at a minimum, forcing a pageout in the
page-not-present fault handler.
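
Just to show what I mean by a choke point, here is a toy user-space
model; the names are made up, the victim choice is deliberately naive,
and a file stands in for swap.  The only thing it demonstrates is that
every new data page allocation funnels through one routine that can
force a dirty page out before handing a frame back.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SZ 4096

/* Hypothetical page record; nothing here is a real kernel structure. */
struct dpage {
    struct dpage *next;
    char          data[PAGE_SZ];
};

static struct dpage *dirty_pages;       /* pages with unwritten data */
static int            ndirty;
static int            dirty_budget = 8;
static FILE          *swapfp;           /* stand-in for swap space */

/* Mark a page dirty (the caller just wrote to it). */
static void
page_dirty(struct dpage *p)
{
    p->next = dirty_pages;
    dirty_pages = p;
    ndirty++;
}

/*
 * The choke point: every new data page allocation passes through
 * here.  Over budget, a dirty page is forced out to "swap" and its
 * frame reused (a real kernel would also have to unmap it first).
 */
static struct dpage *
data_page_alloc(void)
{
    if (ndirty >= dirty_budget && dirty_pages != NULL) {
        struct dpage *victim = dirty_pages;

        dirty_pages = victim->next;
        ndirty--;
        fwrite(victim->data, 1, PAGE_SZ, swapfp);   /* forced pageout */
        memset(victim->data, 0, PAGE_SZ);
        return (victim);
    }
    return (calloc(1, sizeof(struct dpage)));
}

int
main(void)
{
    int i;

    swapfp = tmpfile();
    if (swapfp == NULL)
        return (1);
    for (i = 0; i < 32; i++) {
        struct dpage *p = data_page_alloc();

        if (p == NULL)
            break;
        memset(p->data, i, PAGE_SZ);    /* touch the page */
        page_dirty(p);
    }
    printf("%d pages still dirty; the rest were forced out to \"swap\"\n",
        ndirty);
    fclose(swapfp);
    return (0);
}

The real problem, as above, is that the kernel has no single routine
like this that every new data page already passes through.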

I think that the second issue is a little specious; it's an argument
that boils down to "it's hard".  But the first argument is very, very
hard to address without bloating your swap significantly, or without
considering swap utilization before deciding about enforcement.  Down
this road lies madness.  There's no way to do this that wouldn't have
to be reexamined in terms of the tradeoff vs. minor variations in
processor architecture (even between Intel processor revisions), CPU
vs. memory bus multiplier, L2 wait states, main memory bus waits, and,
eventually, I/O bus speed for cards with memory mapped into the KVA
space.  As I said, madness; I think it would be impossible to get a
set of parameters that worked optimally on more than one piece of
hardware.  Trying to derive the attractors to make this parametric
in the first place would probably kill some poor hacker.

I think the only way to avoid the issue is to, well, avoid the issue,
and trust that swap is going to continue to be so hideously expensive
that it's worth taking the thrashing hit for intentionally bad programs
as a poor man's hysteresis limitation on bad behaviour, and just assume 
that the datasize and memorysize limits will save you.  This assumption
might change if swap became essentially free, but the time to examine
the issue is then, not now.
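
For what it's worth, the datasize and memorysize limits I'm leaning on
here are just the ordinary per-process resource limits; something as
small as this (plain getrlimit(2), nothing exotic assumed) shows what a
process is actually running under:

#include <stdio.h>
#include <sys/resource.h>

/* Print the current datasize and memoryuse (RSS) limits. */
int
main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_DATA, &rl) == 0)
        printf("datasize:  cur %llu  max %llu\n",
            (unsigned long long)rl.rlim_cur,
            (unsigned long long)rl.rlim_max);
    if (getrlimit(RLIMIT_RSS, &rl) == 0)
        printf("memoryuse: cur %llu  max %llu\n",
            (unsigned long long)rl.rlim_cur,
            (unsigned long long)rl.rlim_max);
    return (0);
}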


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.



