From owner-freebsd-stable@freebsd.org Thu Jan  4 02:51:23 2018
From: Yanhui Shen <shen.elf@gmail.com>
Date: Thu, 4 Jan 2018 10:50:41 +0800
Subject: Re: Poor ZFS ARC metadata hit/miss stats after recent ZFS updates
Cc: stable@freebsd.org
In-Reply-To: <20161017143416.14024482@fabiankeil.de>
References: <20161017143416.14024482@fabiankeil.de>
List-Id: Production branch of FreeBSD source code

This link might be helpful:
"Bug 222929 - ZFS ARC stats have wrong count"
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=222929

Best regards,
Yanhui Shen

2016-10-17 20:34 GMT+08:00 Fabian Keil:

> After rebasing some of my systems from r305866 to r307312
> (plus local patches) I noticed that most of the ARC accesses
> are counted as misses now.
>
> Example:
>
> [fk@elektrobier2 ~]$ uptime
>  2:03PM  up 1 day, 18:36, 7 users, load averages: 0.29, 0.36, 0.30
> [fk@elektrobier2 ~]$ zfs-stats -E
>
> ------------------------------------------------------------------------
> ZFS Subsystem Report                            Mon Oct 17 14:03:58 2016
> ------------------------------------------------------------------------
>
> ARC Efficiency:                                 3.38m
>         Cache Hit Ratio:                12.87%  435.23k
>         Cache Miss Ratio:               87.13%  2.95m
>         Actual Hit Ratio:               9.55%   323.15k
>
>         Data Demand Efficiency:         6.61%   863.01k
>
>         CACHE HITS BY CACHE LIST:
>           Most Recently Used:           18.97%  82.54k
>           Most Frequently Used:         55.28%  240.60k
>           Most Recently Used Ghost:     8.88%   38.63k
>           Most Frequently Used Ghost:   24.84%  108.12k
>
>         CACHE HITS BY DATA TYPE:
>           Demand Data:                  13.10%  57.03k
>           Prefetch Data:                0.00%   0
>           Demand Metadata:              32.94%  143.36k
>           Prefetch Metadata:            53.96%  234.85k
>
>         CACHE MISSES BY DATA TYPE:
>           Demand Data:                  27.35%  805.98k
>           Prefetch Data:                0.00%   0
>           Demand Metadata:              71.21%  2.10m
>           Prefetch Metadata:            1.44%   42.48k
>
> ------------------------------------------------------------------------
>
> I suspect that this is caused by r307265 ("MFC r305323: MFV r302991:
> 6950 ARC should cache compressed data"), which removed an
> ARCSTAT_CONDSTAT() call, but I haven't confirmed this yet.
>
> The system performance doesn't actually seem to be negatively affected,
> and repeated metadata accesses that are counted as misses are still
> served from memory.
> On my freshly booted laptop I get:
>
> fk@t520 /usr/ports $ for i in 1 2 3; do \
>     /usr/local/etc/munin/plugins/zfs-absolute-arc-hits-and-misses; \
>     time git status > /dev/null; \
> done; \
> /usr/local/etc/munin/plugins/zfs-absolute-arc-hits-and-misses
> zfs_arc_hits.value 5758
> zfs_arc_misses.value 275416
> zfs_arc_demand_metadata_hits.value 4331
> zfs_arc_demand_metadata_misses.value 270252
> zfs_arc_demand_data_hits.value 304
> zfs_arc_demand_data_misses.value 3345
> zfs_arc_prefetch_metadata_hits.value 1103
> zfs_arc_prefetch_metadata_misses.value 1489
> zfs_arc_prefetch_data_hits.value 20
> zfs_arc_prefetch_data_misses.value 334
>
> real    1m23.398s
> user    0m0.974s
> sys     0m12.273s
> zfs_arc_hits.value 11346
> zfs_arc_misses.value 389748
> zfs_arc_demand_metadata_hits.value 7723
> zfs_arc_demand_metadata_misses.value 381018
> zfs_arc_demand_data_hits.value 400
> zfs_arc_demand_data_misses.value 3412
> zfs_arc_prefetch_metadata_hits.value 3202
> zfs_arc_prefetch_metadata_misses.value 4885
> zfs_arc_prefetch_data_hits.value 21
> zfs_arc_prefetch_data_misses.value 437
>
> real    0m1.472s
> user    0m0.452s
> sys     0m1.820s
> zfs_arc_hits.value 11348
> zfs_arc_misses.value 428536
> zfs_arc_demand_metadata_hits.value 7723
> zfs_arc_demand_metadata_misses.value 419782
> zfs_arc_demand_data_hits.value 400
> zfs_arc_demand_data_misses.value 3436
> zfs_arc_prefetch_metadata_hits.value 3204
> zfs_arc_prefetch_metadata_misses.value 4885
> zfs_arc_prefetch_data_hits.value 21
> zfs_arc_prefetch_data_misses.value 437
>
> real    0m1.537s
> user    0m0.461s
> sys     0m1.860s
> zfs_arc_hits.value 11352
> zfs_arc_misses.value 467334
> zfs_arc_demand_metadata_hits.value 7723
> zfs_arc_demand_metadata_misses.value 458556
> zfs_arc_demand_data_hits.value 400
> zfs_arc_demand_data_misses.value 3460
> zfs_arc_prefetch_metadata_hits.value 3208
> zfs_arc_prefetch_metadata_misses.value 4885
> zfs_arc_prefetch_data_hits.value 21
> zfs_arc_prefetch_data_misses.value 437
>
> Disabling ARC compression through vfs.zfs.compressed_arc_enabled
> does not affect the accounting issue.
>
> Can anybody reproduce this?
>
> Fabian