Date: Fri, 20 Sep 1996 11:01:58 -0700 (MST)
From: Terry Lambert <terry@lambert.org>
To: michaelh@cet.co.jp (Michael Hancock)
Cc: terry@lambert.org, freebsd-hackers@FreeBSD.org
Subject: Re: thread stacks and protections (was Re: attribute/inode caching)
Message-ID: <199609201801.LAA02736@phaeton.artisoft.com>
In-Reply-To: <Pine.SV4.3.93.960920211346.25361B-100000@parkplace.cet.co.jp> from "Michael Hancock" at Sep 20, 96 09:28:22 pm
> > Maybe I don't understand the question, or maybe you aren't asking in
> > the context of page-anonymity based protections, which are statistical
> > protections using MMU faulting rather than domain crossing protections
> > using instruction faulting.
>
> Your reference to thread stacks being able to grow is why I brought up the
> kernel stack being in the u-area.  Where would kernel thread stacks go if
> you wanted them to be able to grow dynamically?

Ah.  A kernel thread has a VM distinct from other threads.  Therefore
they would go in the same (virtual) place in each thread.  John alluded
to this in his posting about making the kernel stack dynamic.  The only
thing that needs to change between kernel-dynamic for single entrancy
and thread-dynamic for multiple entrancy is the location of the stack
pointer that gets referenced.

This assumes the kernel can handle a guard page fault at SPL, etc.  It
means a preallocation so that the page insertion can take place, and a
4k insertion stack (one page) for handling the fault when it occurs in
kernel space.  This would require that the processor honor the WP bit
if it's an Intel processor; for 386's, you would have to choose a
"largest reasonable amount" and just live with it.

A much bigger problem is shared heap, unshared stack in a single pmap,
which is the case for user space threads, and is why the POSIX threading
model specifies that you pass the stack to the thread creation call as a
preallocated entity.

You can break this up into "auto-grow" zones using a guard page, but
then the mapping can only grow to the point where it intersects another
zone.  Ie: I hit the guard page, I fault, I add a new page, I move the
guard page down one page, I continue -- but I've reduced the space to
the next area by one page, because I've divided up the available space
to get my mappings into the same address space map.
The "proper" solution is *probably* to fragment the map instead, so that
one thread has a slightly different VM space than another -- the
difference being the stack mapping.  So mapped text objects (the program
and shared library code) and mapped heap objects (the program data and
shared library data) remain the same from thread to thread, but the
stack page mapping and guard page mapping for each thread is the only
stack mapping for the given process.  This implies a
heavier-than-expected mapping overhead.

Another alternative is a hybrid approach: you zone for some amount, and
then, after you exceed your zone for a given thread, you engage in pmap
changes.  This "punishes" threads with "excessive" stack use, while
leaving other threads unadulterated.  It's probably the correct
approach, if the stack can be made arbitrarily large.

> > I think John Dyson's response is best: it can be implemented (I wouldn't
> > say it was as trivial to do as John implies, but then John is a VM
> > guy and I am an FS guy), but we need to make sure that it's the right
> > thing being implemented.
>
> I think John meant that the kernel stack can easily be moved somewhere
> else as I was talking about an interim non-smp step.  BTW, an interim
> step doesn't sound necessary after listening to John's description of
> the flexibility already enabled in the current framework, unless people
> really wanted more kernel stack than there is now and the tradeoffs
> were reasonable.

I think the 386 not honoring the WP bit in protected mode (so you can't
get a stack-grow fault on your guard page) is a bigger stumbling block
to implementing this straight in, without an interim step for 386's to
fall back to.  8-(.

Limits of hardware...

					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.