From owner-freebsd-hackers Mon Jun 17 21:17:29 1996
Return-Path: owner-hackers
Received: (from root@localhost) by freefall.freebsd.org (8.7.5/8.7.3) id VAA23780 for hackers-outgoing; Mon, 17 Jun 1996 21:17:29 -0700 (PDT)
Received: from dyson.iquest.net (dyson.iquest.net [198.70.144.127]) by freefall.freebsd.org (8.7.5/8.7.3) with ESMTP id VAA23774 for ; Mon, 17 Jun 1996 21:17:26 -0700 (PDT)
Received: (from root@localhost) by dyson.iquest.net (8.7.5/8.6.9) id XAA01663; Mon, 17 Jun 1996 23:17:06 -0500 (EST)
From: "John S. Dyson"
Message-Id: <199606180417.XAA01663@dyson.iquest.net>
Subject: Re: vfork cow?
To: sef@kithrup.com (Sean Eric Fagan)
Date: Mon, 17 Jun 1996 23:17:06 -0500 (EST)
Cc: hackers@FreeBSD.org, michaelh@cet.co.jp
In-Reply-To: <199606180324.UAA04977@kithrup.com> from "Sean Eric Fagan" at Jun 17, 96 08:24:30 pm
X-Mailer: ELM [version 2.4 PL24 ME8]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: owner-hackers@FreeBSD.org
X-Loop: FreeBSD.org
Precedence: bulk

>
> Keith or Kirk mentioned, at one point, possibly going back to the old
> semantics; this is useful for large-memory processes, depending on the
> implementation.  (John and David just did some work to improve pmap_copy,
> which helps address this issue.  However, let's get a few processes with
> 2GBytes of address space active, and see how well it does ;).)
>

I agree with those concerns, and one of the things on my list is to fully
support shared address spaces (to make some thread libs work better.)  Once
we implement that, it should be fairly easy to fully implement the VM
semantics of vfork(2).

There is a serious problem with pmap_copy in exactly the scenario that you
describe.  Pmap_copy can take quite a while for large processes, and it
would probably be a good idea to inhibit it for very large processes
(isn't it likely that if a large process does a fork, it will soon do an
exec?)  My guess is that it would be best to limit pmap_copy in some way --
any ideas for a reasonable policy?  I am thinking that perhaps we could
limit it to .text+.data+.bss+stack?  (Avoiding the malloc'ed/sbrk'ed region,
and perhaps limiting the amount of mmap'ed shared libs to be pmap-copied.)
That would keep the benchmarks running fast, and not slow us down for large
processes whose fork is likely to be followed by an exec anyway.  Anyone
know any good heuristics?

John
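
For illustration only, here is a standalone C sketch of the kind of policy
being asked about: pre-copy page-table state for the small, hot regions
(text, data/bss, stack), bound how much of the mmap'ed shared libraries
gets pre-copied, and leave large heap mappings to ordinary copy-on-write
faulting.  The region classification, the 4 MB cap, and all names below are
assumptions made for the sketch, not the pmap_copy interface or any actual
FreeBSD code.

/*
 * Sketch of a fork-time policy: for each mapped region, decide whether it
 * is worth pre-copying page-table entries (pmap_copy-style) or better to
 * let the child fault them in copy-on-write.  Thresholds are arbitrary.
 */
#include <stdbool.h>
#include <stdio.h>

enum region_kind {
	REGION_TEXT,		/* program text                     */
	REGION_DATA_BSS,	/* initialized + zeroed static data */
	REGION_STACK,		/* process stack                    */
	REGION_HEAP,		/* sbrk()/malloc() arena            */
	REGION_MMAP_SHLIB	/* mmap'ed shared libraries         */
};

/* Arbitrary cap on how much of a "soft" region we bother pre-copying. */
#define	PRECOPY_LIMIT	(4UL * 1024 * 1024)

static bool
should_precopy(enum region_kind kind, unsigned long resident_bytes)
{
	switch (kind) {
	case REGION_TEXT:
	case REGION_DATA_BSS:
	case REGION_STACK:
		/* Small, hot regions: pre-copying keeps fork fast. */
		return (true);
	case REGION_MMAP_SHLIB:
		/* Shared libs: pre-copy only a bounded amount. */
		return (resident_bytes <= PRECOPY_LIMIT);
	case REGION_HEAP:
	default:
		/*
		 * Large malloc'ed/sbrk'ed regions: a big process is likely
		 * to exec soon after fork, so copied entries would be wasted.
		 */
		return (false);
	}
}

int
main(void)
{
	printf("2 MB stack:  %s\n",
	    should_precopy(REGION_STACK, 2UL << 20) ? "pre-copy" : "cow-fault");
	printf("1 GB heap:   %s\n",
	    should_precopy(REGION_HEAP, 1UL << 30) ? "pre-copy" : "cow-fault");
	printf("8 MB shlibs: %s\n",
	    should_precopy(REGION_MMAP_SHLIB, 8UL << 20) ? "pre-copy" : "cow-fault");
	return (0);
}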