From owner-freebsd-current@FreeBSD.ORG Tue Apr 15 12:19:59 2008
Date: Tue, 15 Apr 2008 13:19:58 +0100 (BST)
From: Robert Watson <rwatson@FreeBSD.org>
To: Andrew Reilly
Cc: Roman Divacky, Julian Elischer, FreeBSD Current
Subject: Re: stack hogs in kernel
In-Reply-To: <20080415034343.GB87024@duncan.reilly.home>
Message-ID: <20080415131712.Q29682@fledge.watson.org>
References: <48002444.4030505@elischer.org> <20080412191300.E7693@fledge.watson.org> <20080412181601.GA14472@freebsd.org> <20080415034343.GB87024@duncan.reilly.home>

On Tue, 15 Apr 2008, Andrew Reilly wrote:

> Why are single-digit kilobytes of memory space interesting, in this
> context?  Is the concern about L1 data cache footprint, for performance
> reasons?  If that is the case, the MAXPATHLEN buffer will only really
> occupy the amount of cache actually touched.
>
> I've long wondered about the seemingly fanatical stack size concern in
> kernel space.  In other domains (where I have more experience) you can
> get good performance benefits from the essentially free memory
> management and good cache re-use that comes from putting as much into
> the stack/call-frame as possible.

In addition to the valid points others have replied with (use of KVA, often
not swappable, etc.), it's worth noting that, as with file descriptors,
vnodes, sockets, inodes, etc., kernel thread stack size directly affects
overall kernel scalability, because we require one kernel thread for each
user thread in the system.  If you have 4000 user threads, you have 4000
(plus change) kernel threads, so avoiding statically allocating large
quantities of effectively unused memory can significantly reduce memory
pressure, especially on relatively small systems.

Robert N M Watson
Computer Laboratory
University of Cambridge
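
To put rough numbers on the scalability point (assuming the usual
KSTACK_PAGES default of the era, 4 pages or 16 KB per thread on amd64):
4000 kernel threads wire roughly 4000 x 16 KB, about 62 MB, of kernel
address space for stacks alone, and a single live MAXPATHLEN (1024-byte)
buffer in a deep frame on each of those stacks accounts for a further
4000 x 1 KB, about 4 MB.  A few kilobytes per call frame therefore
multiply out across every thread in the system.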
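
A minimal sketch of the stack-hog pattern under discussion and one common
alternative, assuming only MAXPATHLEN from <sys/param.h> and the stock
malloc(9)/free(9) KPI with the M_TEMP type; the function names are
hypothetical, for illustration only:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/malloc.h>

    /* Stack hog: a 1 KB buffer carved out of the small, wired kernel stack. */
    static int
    example_build_path_onstack(const char *dir, const char *leaf)
    {
            char path[MAXPATHLEN];          /* 1024 bytes on this thread's stack */

            snprintf(path, sizeof(path), "%s/%s", dir, leaf);
            /* ... use path ... */
            return (0);
    }

    /* Alternative: borrow the buffer from the kernel heap for the call's duration. */
    static int
    example_build_path_heap(const char *dir, const char *leaf)
    {
            char *path;
            int error = 0;

            path = malloc(MAXPATHLEN, M_TEMP, M_WAITOK);
            snprintf(path, MAXPATHLEN, "%s/%s", dir, leaf);
            /* ... use path, setting error on failure ... */
            free(path, M_TEMP);
            return (error);
    }

The heap variant costs a malloc/free pair per call, but in a code path
that can already sleep (M_WAITOK) that is normally a cheap trade for
1 KB less stack depth on every thread that passes through it.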