From owner-cvs-all@FreeBSD.ORG Thu Jul 10 15:23:12 2003
Date: Thu, 10 Jul 2003 17:22:48 -0500 (CDT)
From: Mike Silbersack
To: David Schultz
cc: "Alan L. Cox", Don Lewis, src-committers@FreeBSD.ORG, cvs-all@FreeBSD.ORG, cvs-src@FreeBSD.ORG
Subject: Re: cvs commit: src/sys/kern subr_param.c sys_pipe.c src/sys/sys pipe.h
In-Reply-To: <20030710182436.GA6484@HAL9000.homeunix.com>
References: <200307080457.h684vRM7009343@gw.catspoiler.org> <20030708004340.T6733@odysseus.silby.com> <20030710182436.GA6484@HAL9000.homeunix.com>
Message-ID: <20030710171542.E1451@odysseus.silby.com>

On Thu, 10 Jul 2003, David Schultz wrote:

> That would alleviate the KVA pressure, since the mapping would be
> very temporary and you could even get away with just a single
> page.  However, it would still tie up the associated physical
> memory until the pipe is read, which may not be soon at all.  Is
> there a reason for the memory to be wired, other than that the
> data is easier to track down while the sending process' PTEs are
> still there?  I would expect that you could instead just look up
> the appropriate vm_object and lazily fault in the appropriate pages
> on the receiver's side, modulo a few details such as segfault handling.
> But perhaps I'm missing something...

I had thought the same thing, but then I realized that the wiring isn't a
big deal.  In the "normal" case, the pipe data would be stored in pageable
kernel memory.  In the "fast" case, we wire the pipe data down, but don't
use any additional memory.  Hence, we're not _really_ wasting any physical
memory in the fast case; the only point where that wired memory would
matter is if the machine were swapping like mad, and since we now have a
limit on the amount of memory that can be wired, that won't be a
significant problem.

As a result, I've come to the conclusion that wiring the memory, but
delaying the pmap_qenter until we actually do the copy, is about all we
need to do to improve this case.

I have another improvement in the pipeline that will have more of an
impact: right now, we allocate a VM object + backing store for both
directions of the pipe.  However, most programs only use one direction of
the pipe (AFAIK).  So, I'm going to delay the allocation of the VM object
+ backing store until an actual write occurs, so that we only allocate
space that we will actually use.  This should cut the amount of address
space used in half, assuming that most pipe users are unidirectional.

A rough sketch of the idea follows the signature.

Mike "Silby" Silbersack
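[Editor's sketch, not from the original thread: a minimal C illustration of
the "allocate a direction's backing store only on first write" idea
described above.  The struct and function names (pipe_end, pipe_write) are
hypothetical, and plain malloc() stands in for the real VM object + backing
store setup that sys_pipe.c would perform.]

/*
 * Hedged sketch of lazy per-direction allocation.  Each direction of a
 * pipe starts with no buffer; the buffer is allocated the first time
 * that direction is written, so an unused direction costs no address
 * space.  Names here are illustrative, not the actual kernel API.
 */
#include <stdlib.h>
#include <string.h>
#include <stddef.h>

#define PIPE_SIZE 16384

struct pipe_end {
	char	*buf;	/* backing store; NULL until the first write */
	size_t	 size;	/* capacity of buf once allocated */
	size_t	 cnt;	/* bytes currently queued */
};

/*
 * Write into one direction of the pipe, allocating its backing store
 * on demand.  Returns the number of bytes queued, or -1 on failure.
 */
static int
pipe_write(struct pipe_end *p, const void *data, size_t len)
{
	if (p->buf == NULL) {
		p->buf = malloc(PIPE_SIZE);
		if (p->buf == NULL)
			return (-1);
		p->size = PIPE_SIZE;
		p->cnt = 0;
	}
	if (len > p->size - p->cnt)
		len = p->size - p->cnt;
	memcpy(p->buf + p->cnt, data, len);
	p->cnt += len;
	return ((int)len);
}

[If most pipes are unidirectional, only one of the two pipe_end buffers is
ever allocated, which is where the roughly 50% address-space saving in the
message above comes from.]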