Date: Tue, 21 Jun 2011 21:59:12 +0200
From: Wiktor Niesiobedzki
To: freebsd-fs@freebsd.org
Subject: ZFS L2ARC hit ratio

Hi,

I've recently migrated my 8.2 box to recent -STABLE:

FreeBSD kadlubek.vink.pl 8.2-STABLE FreeBSD 8.2-STABLE #22: Tue Jun  7 03:43:29 CEST 2011 root@kadlubek:/usr/obj/usr/src/sys/KADLUB i386

and upgraded my ZFS/zpool to the newest versions. Since then, my monitoring
has shown a decline in the L2ARC hit ratio (the server is not busy, so on
its own that isn't too suspicious). I ran some tests today and I suspect
there may be a real problem.

On a cold cache I ran the following:

sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.l2_hits \
    kstat.zfs.misc.arcstats.misses kstat.zfs.misc.arcstats.l2_misses && \
cat 4gb_file > /dev/null && \
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.l2_hits \
    kstat.zfs.misc.arcstats.misses kstat.zfs.misc.arcstats.l2_misses

After computing the differences I got:

kstat.zfs.misc.arcstats.hits        1213775
kstat.zfs.misc.arcstats.l2_hits          21
kstat.zfs.misc.arcstats.misses        37364
kstat.zfs.misc.arcstats.l2_misses     37343

That's pretty normal for a cold cache. Afterwards I noticed that L2ARC
usage had grown by 4 GB, but when I do the same operation again, the
results are worrying:

kstat.zfs.misc.arcstats.hits        1188662
kstat.zfs.misc.arcstats.l2_hits         305
kstat.zfs.misc.arcstats.misses        36933
kstat.zfs.misc.arcstats.l2_misses     36628

i.e. more or less the same. I watched gstat during these tests and saw
around 2 reads per second from my cache device, accounting for about
32 KB per second. Not that much.

My first guess is that for some reason the L2ARC records are considered
outdated and are therefore not used at all. Any clues why the L2ARC isn't
kicking in at all in this situation? I do notice some substantial (5-10%)
L2ARC hits during cron jobs, but this simple scenario is just failing...
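For anyone who wants to reproduce the measurement, it boils down to roughly
the following /bin/sh script (a sketch only; FILE=/tank/4gb_file is a
placeholder, point it at any sufficiently large test file of your own):

#!/bin/sh
# Sketch only: snapshot the ARC/L2ARC counters, read a large file,
# snapshot them again, and print the per-counter deltas.
# FILE is a placeholder -- use any file big enough to spill past the ARC.
FILE=/tank/4gb_file
STATS="kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.misses kstat.zfs.misc.arcstats.l2_misses"

before=$(sysctl -n $STATS)   # -n prints one bare value per line
cat "$FILE" > /dev/null      # generate the reads
after=$(sysctl -n $STATS)

i=1
for s in $STATS; do
	b=$(printf '%s\n' "$before" | sed -n "${i}p")
	a=$(printf '%s\n' "$after" | sed -n "${i}p")
	printf '%-36s %10d\n' "$s" "$((a - b))"
	i=$((i + 1))
done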
For the record, below are some other details:

% zfs get all tank
NAME  PROPERTY              VALUE                  SOURCE
tank  type                  filesystem             -
tank  creation              Sat Dec  5  3:37 2009  -
tank  used                  572G                   -
tank  available             343G                   -
tank  referenced            441G                   -
tank  compressratio         1.00x                  -
tank  mounted               yes                    -
tank  quota                 none                   default
tank  reservation           none                   default
tank  recordsize            128K                   default
tank  mountpoint            /tank                  default
tank  sharenfs              off                    default
tank  checksum              on                     default
tank  compression           off                    default
tank  atime                 off                    local
tank  devices               on                     default
tank  exec                  on                     default
tank  setuid                on                     default
tank  readonly              off                    default
tank  jailed                off                    default
tank  snapdir               hidden                 default
tank  aclinherit            restricted             default
tank  canmount              on                     default
tank  xattr                 off                    temporary
tank  copies                1                      default
tank  version               5                      -
tank  utf8only              off                    -
tank  normalization         none                   -
tank  casesensitivity       sensitive              -
tank  vscan                 off                    default
tank  nbmand                off                    default
tank  sharesmb              off                    default
tank  refquota              none                   default
tank  refreservation        none                   default
tank  primarycache          all                    default
tank  secondarycache        all                    default
tank  usedbysnapshots       0                      -
tank  usedbydataset         441G                   -
tank  usedbychildren        131G                   -
tank  usedbyrefreservation  0                      -
tank  logbias               latency                default
tank  dedup                 off                    default
tank  mlslabel                                     -
tank  sync                  standard               default

% zpool status tank
  pool: tank
 state: ONLINE
  scan: scrub repaired 0 in 7h23m with 0 errors on Wed Jun 15 07:53:29 2011
config:

        NAME                                          STATE     READ WRITE CKSUM
        tank                                          ONLINE       0     0     0
          raidz1-0                                    ONLINE       0     0     0
            ad6.eli                                   ONLINE       0     0     0
            ad8.eli                                   ONLINE       0     0     0
            ad10.eli                                  ONLINE       0     0     0
        cache
          gptid/7644bfda-e141-11de-951e-004063f2d074  ONLINE       0     0     0

errors: No known data errors

Cheers,
Wiktor Niesiobedzki
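P.S. The per-device read numbers above came from watching just the cache
device, along the lines of the following (assuming your gstat has the -f
option, which filters providers by regular expression):

% gstat -f 'gptid/7644bfda'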