From owner-freebsd-hackers Mon Sep 23 16:48:58 1996
Return-Path: owner-hackers
Received: (from root@localhost) by freefall.freebsd.org (8.7.5/8.7.3) id QAA07729 for hackers-outgoing; Mon, 23 Sep 1996 16:48:58 -0700 (PDT)
Received: from melb.werple.net.au (melb.werple.net.au [203.9.190.18]) by freefall.freebsd.org (8.7.5/8.7.3) with ESMTP id QAA07687 for ; Mon, 23 Sep 1996 16:48:53 -0700 (PDT)
Received: (from uucp@localhost) by melb.werple.net.au (8.7.6/8.7.3/2) with UUCP id IAA12162; Tue, 24 Sep 1996 08:58:44 +1000 (EST)
Received: (from jb@localhost) by freebsd3.cimlogic.com.au (8.7.5/8.7.3) id IAA18261; Tue, 24 Sep 1996 08:00:44 +1000 (EST)
From: John Birrell
Message-Id: <199609232200.IAA18261@freebsd3.cimlogic.com.au>
Subject: Re: libc_r bug
To: michaelh@cet.co.jp (Michael Hancock)
Date: Tue, 24 Sep 1996 08:00:44 +1000 (EST)
Cc: julian@whistle.com, hsu@freefall.freebsd.org, jb@cimlogic.com.au, hackers@FreeBSD.org
In-Reply-To: from Michael Hancock at "Sep 24, 96 05:06:03 am"
X-Mailer: ELM [version 2.4ME+ PL22 (25)]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: owner-hackers@FreeBSD.org
X-Loop: FreeBSD.org
Precedence: bulk

> > I prefer on demand too.
>
> but what's the overhead on every file operation?

The same as for any blocking file op: a call to _thread_fd_lock(), which
calls _thread_fd_table_init(), then a call to _thread_fd_unlock(). The
first call to _thread_fd_table_init() mallocs memory for the fd;
thereafter, a non-NULL pointer is assumed to point to valid memory. The
lock/unlock operation is performed with signals blocked to ensure that
the operation is atomic wrt the process, so you need to add the overhead
of doing that twice.

> > Pre-allocating kind of implies fixed, but I guess it doesn't have to. For
> > performance it would be better to pre-allocate at the expense of space.
>
> Maybe you can pre-allocate a chunk and dynamically allocate more chunks
> based on high water marks.
>
> Or maybe just implement a simple algorithm first that works correctly and
> optimize later when you understand more aspects of the problem.

I think it is worth spending time making the scheduling operation more
efficient before making this sort of performance improvement. Kernel
threads would take this performance hit away.

> Regards,
>
> Mike Hancock

Regards,

--
John Birrell                            CIMlogic Pty Ltd
jb@cimlogic.com.au                      119 Cecil Street
Ph  +61 3 9690 6900                     South Melbourne Vic 3205
Fax +61 3 9690 6650                     Australia
Mob +61 18 353 137