From owner-freebsd-fs@FreeBSD.ORG Tue Jun 21 22:15:54 2011
Date: Tue, 21 Jun 2011 15:15:53 -0700
From: Artem Belevich <artemb@gmail.com>
To: Wiktor Niesiobedzki
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS L2ARC hit ratio

On Tue, Jun 21, 2011 at 12:59 PM, Wiktor Niesiobedzki wrote:
> Hi,
>
> I've recently migrated my 8.2 box to recent stable:
> FreeBSD kadlubek.vink.pl 8.2-STABLE FreeBSD 8.2-STABLE #22: Tue Jun  7
> 03:43:29 CEST 2011  root@kadlubek:/usr/obj/usr/src/sys/KADLUB  i386
>
> and upgraded my ZFS/zpool to the newest versions. Since then my
> monitoring has shown a decline in the L2ARC hit ratio (the server is
> not busy, so that by itself isn't too suspicious). I ran some tests
> today and I suspect there may be a real problem.
>
> I did the following on a cold cache:
>
> sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.l2_hits \
>     kstat.zfs.misc.arcstats.misses kstat.zfs.misc.arcstats.l2_misses && \
> cat 4gb_file > /dev/null && \
> sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.l2_hits \
>     kstat.zfs.misc.arcstats.misses kstat.zfs.misc.arcstats.l2_misses
>
> After computing the differences I got:
>
> kstat.zfs.misc.arcstats.hits        1213775
> kstat.zfs.misc.arcstats.l2_hits          21
> kstat.zfs.misc.arcstats.misses        37364
> kstat.zfs.misc.arcstats.l2_misses     37343
>
> That's pretty normal.
> After that I saw L2ARC usage grow by 4 GB, but when I ran the same
> operation again the results were worrying:
>
> kstat.zfs.misc.arcstats.hits        1188662
> kstat.zfs.misc.arcstats.l2_hits         305
> kstat.zfs.misc.arcstats.misses        36933
> kstat.zfs.misc.arcstats.l2_misses     36628
>
> +/- the same.
>
> I ran gstat during these tests and saw around 2 reads per second from
> my cache device, accounting for about 32 KB per second. Not that much.
>
> My first guess is that for some reason the L2ARC records are considered
> outdated and are therefore not used at all.
>
> Any clues why L2ARC isn't kicking in at all in this scenario? I do see
> some substantial (5-10%) L2ARC hits during the cron jobs, but this
> simple test just fails...
>
> For the record, here are some other details:
>
> %zfs get all tank
> NAME  PROPERTY              VALUE                  SOURCE
> tank  type                  filesystem             -
> tank  creation              Sat Dec  5  3:37 2009  -
> tank  used                  572G                   -
> tank  available             343G                   -
> tank  referenced            441G                   -
> tank  compressratio         1.00x                  -
> tank  mounted               yes                    -
> tank  quota                 none                   default
> tank  reservation           none                   default
> tank  recordsize            128K                   default
> tank  mountpoint            /tank                  default
> tank  sharenfs              off                    default
> tank  checksum              on                     default
> tank  compression           off                    default
> tank  atime                 off                    local
> tank  devices               on                     default
> tank  exec                  on                     default
> tank  setuid                on                     default
> tank  readonly              off                    default
> tank  jailed                off                    default
> tank  snapdir               hidden                 default
> tank  aclinherit            restricted             default
> tank  canmount              on                     default
> tank  xattr                 off                    temporary
> tank  copies                1                      default
> tank  version               5                      -
> tank  utf8only              off                    -
> tank  normalization         none                   -
> tank  casesensitivity       sensitive              -
> tank  vscan                 off                    default
> tank  nbmand                off                    default
> tank  sharesmb              off                    default
> tank  refquota              none                   default
> tank  refreservation        none                   default
> tank  primarycache          all                    default
> tank  secondarycache        all                    default
> tank  usedbysnapshots       0                      -
> tank  usedbydataset         441G                   -
> tank  usedbychildren        131G                   -
> tank  usedbyrefreservation  0                      -
> tank  logbias               latency                default
> tank  dedup                 off                    default
> tank  mlslabel                                     -
> tank  sync                  standard               default
>
> %zpool status tank
>   pool: tank
>  state: ONLINE
>  scan: scrub repaired 0 in 7h23m with 0 errors on Wed Jun 15 07:53:29 2011
> config:
>
>   NAME                                          STATE     READ WRITE CKSUM
>   tank                                          ONLINE       0     0     0
>     raidz1-0                                    ONLINE       0     0     0
>       ad6.eli                                   ONLINE       0     0     0
>       ad8.eli                                   ONLINE       0     0     0
>       ad10.eli                                  ONLINE       0     0     0
>   cache
>     gptid/7644bfda-e141-11de-951e-004063f2d074  ONLINE       0     0     0
>
> errors: No known data errors
>
> Cheers,
>
> Wiktor Niesiobedzki

L2ARC is filled with items evicted from the ARC. The catch is that
L2ARC writes are intentionally throttled. When the L2ARC is empty,
writes happen at a higher rate, but the rate is still kept low so that
the read-optimized cache device does not wear out too soon. The bottom
line is that not all of the data spilled out of the ARC ends up in the
L2ARC on the first try. Re-run your experiment and you will probably
see the L2ARC hit rate improve.
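
As a back-of-the-envelope check (a sketch only -- it assumes the
default 8 MB per-interval write cap and that the feed thread runs
roughly once a second; substitute your actual tunables):

    # Lower bound on how long one pass over the 4 GB test file needs
    # before it could all land in L2ARC at the throttled write rate.
    # Assumes vfs.zfs.l2arc_write_max is bytes per ~1-second interval.
    write_max=$(sysctl -n vfs.zfs.l2arc_write_max)  # 8388608 by default
    write_max_mb=$((write_max / 1048576))
    echo "minimum fill time for 4 GiB: $((4096 / write_max_mb)) seconds"
    # 4 GiB / 8 MiB per second = 512 seconds, and that's the best case:
    # only blocks evicted from ARC while the feed thread scans are
    # eligible, so one pass typically caches far less than the whole file.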
The following sysctls control the L2ARC write speed:

vfs.zfs.l2arc_write_boost: 8388608
vfs.zfs.l2arc_write_max: 8388608

A word of caution -- before you tweak these, check the total amount of
writes your SSD is rated for, and work out how long it would take L2ARC
writes to reach that figure. I recently discovered that on one of my
boxes a 160GB X25-M (G2) reached its official write limit in about
three months.

--Artem
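
P.S. In case it helps, here is roughly how I'd poke at those knobs (a
sketch only; the doubled values are illustrations, not recommendations,
and on my boxes these are plain read/write sysctls -- if yours aren't,
set them from /boot/loader.conf instead):

    # Current throttle values, in bytes written per feed interval
    # (roughly once a second):
    sysctl vfs.zfs.l2arc_write_max vfs.zfs.l2arc_write_boost

    # Double both caps at runtime (illustrative values only):
    sysctl vfs.zfs.l2arc_write_max=16777216
    sysctl vfs.zfs.l2arc_write_boost=16777216

    # Wear arithmetic: even the default 8 MB/s cap, if actually
    # sustained, is 8 MiB * 86400 s ~= 675 GiB of writes per day.
    # Compare that with your SSD's rated endurance before raising it.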