From owner-freebsd-hackers Wed May 15 23:14:06 1996
Return-Path: owner-hackers
Received: (from root@localhost) by freefall.freebsd.org (8.7.3/8.7.3) id XAA29379 for hackers-outgoing; Wed, 15 May 1996 23:14:06 -0700 (PDT)
Received: from dyson.iquest.net (dyson.iquest.net [198.70.144.127]) by freefall.freebsd.org (8.7.3/8.7.3) with ESMTP id XAA29374 for ; Wed, 15 May 1996 23:14:02 -0700 (PDT)
Received: (from root@localhost) by dyson.iquest.net (8.7.5/8.6.9) id BAA07537; Thu, 16 May 1996 01:13:11 -0500 (EST)
From: "John S. Dyson"
Message-Id: <199605160613.BAA07537@dyson.iquest.net>
Subject: Re: EDO & Memory latency
To: babkin@hq.icb.chel.su (Serge A. Babkin)
Date: Thu, 16 May 1996 01:13:11 -0500 (EST)
Cc: hackers@freebsd.org
In-Reply-To: <199605160309.JAA29241@hq.icb.chel.su> from "Serge A. Babkin" at May 16, 96 09:09:12 am
Reply-To: dyson@freebsd.org
X-Mailer: ELM [version 2.4 PL24 ME8]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: owner-hackers@freebsd.org
X-Loop: FreeBSD.org
Precedence: bulk

> I have just tried lmbench and the numbers it gives look slightly
> strange to me. It shows memory latency of up to 500ns while I have
> 60-ns EDO memory in a Pentium/75 box. Okay, its external clock is
> 25MHz, which gives 40ns; one wait state gives another 40ns, for 80ns
> total, but why is the overhead over 400ns?
>
> Can it come from some VM subsystem activity? I have 16M of RAM in my
> box and I ran lmbench with an 8M maximal buffer size. The latency
> grows with the size of the buffer. Is it possible that when the
> buffer size grows, the VM subsystem moves the non-recently-used pages
> to some pool, and when they are accessed again it gets a VM fault and
> remaps them back into that process?

There are several things going on. One is the propagation time through to main memory, which is much worse than the memory cycle time.
Of course, that does not account for all of the 400 nsecs that you are seeing. BTW, the R4400 boxes that I used to work on reported about 2 usecs for large strides!!!! R3000s actually overflowed the counter :-). You are also likely seeing TLB overhead intrinsic to the processor. Some processors don't have hardware or microcode TLB management, and on those you'll see worse numbers, because TLB misses have to be handled in normal machine/assembly code. (Of course, on those processors you can tune the TLB management more freely.)

John
dyson@freebsd.org
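For readers following the thread: the kind of number being discussed comes from a dependent pointer-chase, where each load's address is the result of the previous load, so the measured time is pure latency with no overlap. Below is a minimal sketch of that technique (this is not lmbench's actual code; `chase_ns` and all sizes are illustrative). Large strides touch a new page on every load, which is exactly what exposes the TLB-miss cost mentioned above.

```c
#include <stdlib.h>
#include <time.h>

static void *g_sink;  /* keeps the chase loop from being optimized away */

/* Walk a dependent pointer chain through `bufsize` bytes at the given
 * stride, `iters` loads total, and return the average nanoseconds per
 * load.  Returns -1.0 on bad arguments or allocation failure.
 * Hypothetical helper for illustration, not lmbench's implementation. */
double chase_ns(size_t bufsize, size_t stride, size_t iters)
{
    size_t n = bufsize / sizeof(char *);
    size_t step = stride / sizeof(char *);
    char **buf;
    char **p;
    clock_t t0, t1;
    size_t i;

    if (n == 0 || step == 0 || iters == 0)
        return -1.0;
    buf = malloc(n * sizeof(char *));
    if (buf == NULL)
        return -1.0;

    /* Each element points `stride` bytes ahead, wrapping around, so
     * every load depends on the previous one and cannot overlap. */
    for (i = 0; i < n; i++)
        buf[i] = (char *)&buf[(i + step) % n];

    p = buf;
    t0 = clock();
    for (i = 0; i < iters; i++)
        p = (char **)*p;
    t1 = clock();

    g_sink = p;  /* use the final pointer so the loop has an effect */
    free(buf);
    return (double)(t1 - t0) / CLOCKS_PER_SEC * 1e9 / (double)iters;
}
```

Running it with a small stride (within a cache line) versus a page-sized stride over a working set larger than the cache shows the latency climbing the way the original post describes: the page-stride case pays for a cache miss plus, on many processors, a TLB miss on every single load.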