From owner-freebsd-fs@FreeBSD.ORG Tue Jun 26 15:55:23 2012
Date: Tue, 26 Jun 2012 18:55:12 +0300
From: Andriy Gapon <avg@FreeBSD.org>
To: Mark Felder
Cc: freebsd-fs@FreeBSD.org, John Baldwin
Subject: Re: [PATCH] Simple ARC stats in top
Message-ID: <4FE9DB60.1030905@FreeBSD.org>
References: <201206251443.41768.jhb@freebsd.org> <4FE9CC00.9090501@FreeBSD.org>

on 26/06/2012 18:25 Mark Felder said the following:
> On Tue, 26 Jun 2012 09:49:36 -0500, Andriy Gapon wrote:
>
>> Please also reproduce the zfs-stats lines preceding the quoted output.
>
> zfs2# zfs-stats -A
>
> ------------------------------------------------------------------------
> ZFS Subsystem Report                          Tue Jun 26 10:24:40 2012
> ------------------------------------------------------------------------
>
> ARC Summary: (HEALTHY)
>         Memory Throttle Count:                  0
>
> ARC Misc:
>         Deleted:                                87.09m
>         Recycle Misses:                         50.58m
>         Mutex Misses:                           299.09k
>         Evict Skips:                            5.28m
>
> ARC Size:                               941.88% 5.89    GiB
>         Target Size: (Adaptive)         100.00% 640.00  MiB
>         Min Size (Hard Limit):          12.50%  80.00   MiB
>         Max Size (High Water):          8:1     640.00  MiB

Does your system also have L2 ARC?  If so, could you please show the value of
kstat.zfs.misc.arcstats.l2_hdr_size?  Otherwise, it's hard for me to explain
the huge difference between Max Size and ARC Size.

> ARC Size Breakdown:
>         Recently Used Cache Size:       0.66%   40.00   MiB
>         Frequently Used Cache Size:     99.34%  5.85    GiB

zfs-stats seems to have a bug where it treats kstat.zfs.misc.arcstats.p as the
current MRU size, whereas it is the target MRU size (similarly to how
kstat.zfs.misc.arcstats.c is the target cache size).  So the logic that
produces the breakdown above is flawed.  I believe that the real MFU and MRU
sizes are reported below (by the arc-sizes.sh script).
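For reference, all of the numbers in question can be read directly with
sysctl(8).  A minimal sketch that dumps them side by side (assuming the
standard kstat.zfs.misc.arcstats.* names; mru_size, mfu_size and l2_hdr_size
may not be exported on every build) would look something like this:

    #!/bin/sh
    # Sketch only: print the target vs. actual ARC sizes discussed above.
    # Assumes the kstat.zfs.misc.arcstats.* sysctls named below exist; on a
    # system without an L2ARC device, l2_hdr_size may be missing or zero.
    arcstat() {
            sysctl -n "kstat.zfs.misc.arcstats.$1" 2>/dev/null || echo "n/a"
    }

    echo "target ARC size (c):  $(arcstat c)"
    echo "target MRU size (p):  $(arcstat p)"
    echo "current ARC size:     $(arcstat size)"
    echo "actual MRU size:      $(arcstat mru_size)"
    echo "actual MFU size:      $(arcstat mfu_size)"
    echo "L2ARC header size:    $(arcstat l2_hdr_size)"

Comparing p against mru_size (and c against size) makes the target vs. actual
distinction visible directly.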
> ARC Hash Breakdown:
>         Elements Max:                           29.25m
>         Elements Current:               100.00% 29.25m
>         Collisions:                             89.81m
>         Chain Max:                              131
>         Chains:                                 524.29k
>
> ------------------------------------------------------------------------
>
>> Additionally, please run this script http://people.freebsd.org/~avg/arc-sizes.sh
>
> zfs2# sh arc-sizes.sh
> ARC top-level breakdown:
> size: 6319074560
> hdr_size: 10166488
> data_size: 1513472
> other_size: 245576
>
> ARC size vs hdr_size + data_size + other_size:
> 6319074560 vs 11925536
>
> ARC Data breakdown:
> mfu_size: 212992
> mru_size: 1251328
> anon_size: 49152
>
> Data size vs mfu_size + mru_size + anon_size:
> 1513472 vs 1513472
>
> mfu breakdown:
> data_lsize: 0
> metadata_lsize: 0
> other (overhead? ghost entries?): 212992
>
> mru breakdown:
> data_lsize: 0
> metadata_lsize: 49152
> other (overhead? ghost entries?): 1202176
>
> anon breakdown:
> data_lsize: 0
> metadata_lsize: 0
> other (overhead? ghost entries?): 49152

Thank you.

-- 
Andriy Gapon