From owner-freebsd-current Sun Jan 26 14:51:20 1997
Return-Path:
Received: (from root@localhost) by freefall.freebsd.org (8.8.5/8.8.5)
	id OAA29949 for current-outgoing; Sun, 26 Jan 1997 14:51:20 -0800 (PST)
Received: from godzilla.zeta.org.au (godzilla.zeta.org.au [203.2.228.19])
	by freefall.freebsd.org (8.8.5/8.8.5) with ESMTP id OAA29941
	for ; Sun, 26 Jan 1997 14:51:09 -0800 (PST)
Received: (from bde@localhost) by godzilla.zeta.org.au (8.8.3/8.6.9)
	id JAA11690; Mon, 27 Jan 1997 09:47:26 +1100
Date: Mon, 27 Jan 1997 09:47:26 +1100
From: Bruce Evans
Message-Id: <199701262247.JAA11690@godzilla.zeta.org.au>
To: bde@zeta.org.au, dg@root.com
Subject: Re: exec bug
Cc: current@FreeBSD.ORG, swallace@ece.uci.edu
Sender: owner-current@FreeBSD.ORG
X-Loop: FreeBSD.org
Precedence: bulk

>>The image activators should read the part that they need.
>
>   Uh, no. Not unless you want our exec time to be about 5 times higher than
>linux. The process of mapping the image header is a major portion of the
>overhead of exec.

No, my user space times show that a user read() takes about the same time
as kernel mapping (50 usec = less than 5% of the time for a static-library
fork-exec-exit, and less than 0.6% of the time for a shared-library
fork-exec-exit).  read() should be slightly faster in the kernel.

>>Or just allocate a buffer for it and access the buffer directly. This
>
>   Hmmm. From an architectural perspective, this sounds really kludgy. I
>especially don't like the fact that buffers aren't of a constant size. In
>the end, I think doing the buffer thing would have much more overhead than
>what I'm doing now.

My user space times show that it wouldn't have much more initial overhead.
Lookup of mapped in-core buffers is very efficient - hashing works well.
If there is a slowdown later due to the file being accessed using a
different method, then the unified vm must not be very unified.  I don't
think it is that bad :-).

Bruce
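
[Editor's note: the following is a minimal user-space sketch of the kind of
read()-vs-mmap() header timing discussed above, not Bruce's actual benchmark.
The target path (/bin/ls), the 4096-byte header size and the iteration count
are illustrative assumptions.]

    /*
     * Compare the cost of read()ing an executable's header into a private
     * buffer against mmap()ing the header and touching it, per open/close
     * cycle.  All names and sizes here are assumptions for illustration.
     */
    #include <sys/mman.h>
    #include <sys/time.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define HDRSIZE 4096            /* assumed header size: one page */
    #define NITER   10000           /* assumed iteration count */

    static double
    now(void)
    {
            struct timeval tv;

            gettimeofday(&tv, NULL);
            return (tv.tv_sec + tv.tv_usec / 1e6);
    }

    int
    main(int argc, char **argv)
    {
            const char *path = argc > 1 ? argv[1] : "/bin/ls";
            char buf[HDRSIZE];
            double t0;
            char *p;
            int fd, i;

            /* Time read() of the header. */
            t0 = now();
            for (i = 0; i < NITER; i++) {
                    fd = open(path, O_RDONLY);
                    if (fd < 0 || read(fd, buf, HDRSIZE) != HDRSIZE) {
                            perror(path);
                            exit(1);
                    }
                    close(fd);
            }
            printf("read(): %.1f usec per header\n",
                (now() - t0) / NITER * 1e6);

            /* Time mmap() of the header, touching it to force the fault. */
            t0 = now();
            for (i = 0; i < NITER; i++) {
                    volatile char c;

                    fd = open(path, O_RDONLY);
                    p = mmap(NULL, HDRSIZE, PROT_READ, MAP_PRIVATE, fd, 0);
                    if (fd < 0 || p == MAP_FAILED) {
                            perror(path);
                            exit(1);
                    }
                    c = p[0];
                    (void)c;
                    munmap(p, HDRSIZE);
                    close(fd);
            }
            printf("mmap(): %.1f usec per header\n",
                (now() - t0) / NITER * 1e6);

            return (0);
    }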