From owner-freebsd-fs@FreeBSD.ORG Wed Sep 29 18:10:49 2010
Message-ID: <4CA38124.60902@freebsd.org>
Date: Wed, 29 Sep 2010 21:10:44 +0300
From: Andriy Gapon <avg@freebsd.org>
To: Steven Hartland
Cc: freebsd-fs@freebsd.org
Subject: Re: zfs very poor performance compared to ufs due to lack of cache?
In-Reply-To: <4C98BFCE.2020202@freebsd.org>
References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk>
 <4C8D087B.5040404@freebsd.org>
 <03537796FAB54E02959E2D64FC83004F@multiplay.co.uk>
 <4C8D280F.3040803@freebsd.org>
 <3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk>
 <4C8E4212.30000@freebsd.org> <4C90B4C8.90203@freebsd.org>
 <6DFACB27CA8A4A22898BC81E55C4FD36@multiplay.co.uk>
 <4C90D3A1.7030008@freebsd.org>
 <0B1A90A08DFE4ADA9540F9F3846FDF38@multiplay.co.uk>
 <4C90EDB8.3040709@freebsd.org>
 <3F29E8CED7B24805B2D93F62A4EC9559@multiplay.co.uk>
 <4C9126FB.2020707@freebsd.org>
 <1E0B9C1145784776A773B99FC1139CD5@multiplay.co.uk>
 <4C987F90.6000006@freebsd.org> <4C98803F.7000901@freebsd.org>
 <879BF5981D1B4C7290BDF18286BA1EEC@multiplay.co.uk>
 <4C989201.20506@freebsd.org> <4C98A2BA.1080004@freebsd.org>
 <4C98BFCE.2020202@freebsd.org>

[ping]

on 21/09/2010 17:23 Andriy Gapon said the following:
> on 21/09/2010 16:53 Steven Hartland said the following:
>> That's what I thought you were saying. Is there a test you would suggest to
>> confirm either way more accurately?
>
> Perhaps you can try the test scenario that you described and monitor the
> parameters suggested by Wiktor in this thread.
>
> That is, have two large files and set the ARC max size such that one of them
> fits in the ARC readily, but two of them won't fit by a large margin. Make sure
> that the remaining RAM is large enough to hold both files in the page cache.
>
> 1. sendfile one file, then the other
> 2. record kstat.zfs.misc.arcstats values
> 3. sendfile the first file again
> 4. record kstat.zfs.misc.arcstats values
>
> If the first file's data was re-used from the page cache, then you won't see
> much change in kstat.zfs.misc.arcstats. If it had to be taken from the ARC or
> from disk, then either ARC hits or ARC misses will grow noticeably.
>
> Make sure not to have any parallel activity that could affect
> kstat.zfs.misc.arcstats.
>
> I think kstat.zfs.misc.arcstats.hits and kstat.zfs.misc.arcstats.misses are
> the two primary indicators in this test.
>

-- 
Andriy Gapon
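
[Editorial note, not part of the original message: for steps 2 and 4 of the
recipe above, one way to snapshot the two counters Andriy points at is a tiny
sysctlbyname(3) program; the file name arcsnap.c and the build command are
illustrative, not from the thread.  Running
`sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses` from the
shell gives the same numbers.]

    /*
     * arcsnap.c -- print the ARC hit/miss counters used in the test above.
     * Illustrative sketch only; build on FreeBSD with: cc -o arcsnap arcsnap.c
     * Run it right before and right after each sendfile pass and compare deltas.
     */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static uint64_t
    read_counter(const char *name)
    {
            uint64_t val = 0;
            size_t len = sizeof(val);

            /* kstat.zfs.misc.arcstats.* are 64-bit counters. */
            if (sysctlbyname(name, &val, &len, NULL, 0) != 0) {
                    perror(name);
                    exit(1);
            }
            return (val);
    }

    int
    main(void)
    {
            uint64_t hits = read_counter("kstat.zfs.misc.arcstats.hits");
            uint64_t misses = read_counter("kstat.zfs.misc.arcstats.misses");

            printf("arcstats.hits=%ju arcstats.misses=%ju\n",
                (uintmax_t)hits, (uintmax_t)misses);
            return (0);
    }

[Reading the deltas follows the logic of the quoted message: after step 3, a
large jump in hits means the file was served from the ARC, a jump in misses
means it came from disk, and nearly unchanged counters suggest the page cache
satisfied the reads.]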