Date: Wed, 16 Mar 2011 05:38:32 +1100 (EST)
From: Bruce Evans <brde@optusnet.com.au>
To: Jung-uk Kim
Cc: svn-src-head@FreeBSD.org, svn-src-all@FreeBSD.org, src-committers@FreeBSD.org
Subject: Re: svn commit: r219672 - in head: share/man/man9 sys/i386/include
Message-ID: <20110316051750.M2847@besplex.bde.org>
In-Reply-To: <201103151714.p2FHEQdF049456@svn.freebsd.org>

On Tue, 15 Mar 2011, Jung-uk Kim wrote:

> Log:
>   Unconditionally use binuptime(9) for get_cyclecount(9) on i386.  Since this
>   function is almost exclusively used for random harvesting, there is no need
>   for micro-optimization.  Adjust the manual page accordingly.
That's what I said when it was being committed, but it isn't clear that
there is _no_ need for micro-optimization in random harvesting.  IIRC,
random harvesting was originally too active, so it benefited more from
micro-optimizations.  The timecounter is fast enough if the timecounter
hardware is the TSC, since TSC reads used to take 12 cycles on Athlons
and now take 40+, and timecounter software overhead only adds about 30
cycles to this.  But now the timecounter hardware is even more rarely
the TSC, and most timecounter hardware is very slow -- about 3000 cycles
for ACPI-"fast", 9000 for ACPI, and 15000 for i8254 at 3 GHz.

> Modified: head/sys/i386/include/cpu.h
> ==============================================================================
> --- head/sys/i386/include/cpu.h	Tue Mar 15 16:50:17 2011	(r219671)
> +++ head/sys/i386/include/cpu.h	Tue Mar 15 17:14:26 2011	(r219672)
> @@ -70,15 +70,10 @@ void swi_vm(void *);
>  static __inline uint64_t
>  get_cyclecount(void)
>  {
> -#if defined(I486_CPU) || defined(KLD_MODULE)
>  	struct bintime bt;
>
> -	if (!tsc_present) {
> -		binuptime(&bt);
> -		return ((uint64_t)bt.sec << 56 | bt.frac >> 8);
> -	}
> -#endif
> -	return (rdtsc());
> +	binuptime(&bt);
> +	return ((uint64_t)bt.sec << 56 | bt.frac >> 8);
>  }

You should pessimize all arches to use binuptime() to get enough test
coverage to see if anyone notices.  Then get_cyclecount() can be
removed.  The correct function to use for fast and possibly-wrong times
is clock_binuptime(CLOCK_FASTEST, tsp), where CLOCK_FASTEST maps to
CLOCK_TSC on x86 if there is a TSC and nothing faster.

Bruce