From owner-freebsd-fs@freebsd.org Sat Aug 27 16:15:22 2016
From: Ben RUBSON <ben.rubson@gmail.com>
Subject: Re: [ZFS] ARC accounting bug ?
Date: Sat, 27 Aug 2016 18:15:18 +0200
Message-Id: <71DED907-10BE-44C2-982B-12974152895D@gmail.com>
To: FreeBSD FS <freebsd-fs@freebsd.org>

> On 27 Aug 2016, at 07:22, Shane Ambler wrote:
>
> On 26/08/2016 19:09, Ben RUBSON wrote:
>> Hello,
>>
>> Before opening a bug report, I would like to know whether what I see
>> is "normal" or not, and why.
>>
>> ### Test :
>>
>> # zpool import mypool
>> # zfs set primarycache=metadata mypool
>
> Well that sets the primarycache for the pool and all datasets that
> inherit the property. Do any sub filesystems have local settings?

No.

> zfs get -r primarycache mypool
>
> And mypool is the only zpool on the machine?

Yes.
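For completeness, the property source can be checked as well, with a
small variant of the command above (untested here) :

# zfs get -r -o name,value,source primarycache mypool

Only mypool itself should report SOURCE "local" (where we set it) ;
any child dataset showing "local" would be overriding the pool-level
setting instead of inheriting it.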
>> # while [ 1 ]; do find /mypool/ >/dev/null; done
>>
>> # zfs-mon -a
>>
>> ZFS real-time cache activity monitor
>> Seconds elapsed: 162
>>
>> Cache hits and misses:
>>                                   1s     10s     60s     tot
>> ARC hits:                      79228   76030   73865   74953
>> ARC misses:                    22510   22184   21647   21955
>> ARC demand data hits:              0       0       0       0
>> ARC demand data misses:            4       7       8       7
>> ARC demand metadata hits:      79230   76030   73865   74953
>> ARC demand metadata misses:    22506   22177   21639   21948
>> ZFETCH hits:                      47      29      32      31
>> ZFETCH misses:                101669   98138   95433   96830
>>
>> Cache efficiency percentage:
>>                        10s    60s    tot
>> ARC:                 77.41  77.34  77.34
>> ARC demand data:      0.00   0.00   0.00
>> ARC demand metadata: 77.42  77.34  77.35
>> ZFETCH:               0.03   0.03   0.03
>>
>> ### Question :
>>
>> I don't understand why I have so many ARC misses. There is no other
>> activity on the server (as soon as I stop the find loop, there are
>> no more ARC hits). As soon as the first find loop is done, there is
>> no more disk activity (according to zpool iostat -v 1), no
>> read/write operations on mypool.
>> So I'm pretty sure all metadata comes from the ARC.
>> So why are there so many ARC misses ?
>
> Running zfs-mon on my desktop, I seem to get similar results.

Thank you for testing it, Shane.

> What I am seeing leads me to think that not all metadata is cached ;
> maybe the filename isn't cached, which can be a large string.
>
> while [ 1 ]; do find /usr/ports > /dev/null; done
>
> will list the path to every file, and I see about 2 hits to a miss,
> yet
>
> while [ 1 ]; do ls -lR /usr/ports > /dev/null; done
>
> lists every filename as well as its size, mod date, owner and
> permissions, and it sits closer to 4 hits to every miss.
>
> And if the system disk cache holds the filenames that ZFS isn't
> caching, no disk access is needed to satisfy those ZFS misses.

Playing with these commands :

# dtrace -n 'sdt:zfs::arc-hit {@[execname, stack()] = count();}'
# dtrace -n 'sdt:zfs::arc-miss {@[execname, stack()] = count();}'

We can see that it is readdir calls which produce the arc-misses, and
that readdir calls also produce arc-hits.
It would be interesting to know why some lead to hits, and some lead
to misses.
(Note that ls -lR / rsync commands produce exactly the same dtrace
results/numbers as the find command.)

Ben
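P.S. : to cross-check zfs-mon's numbers against the raw counters they
are derived from, the arcstats kstats can be sampled directly. A
minimal sh sketch (assuming the standard FreeBSD arcstats sysctl
names ; the 10-second window is an arbitrary choice, picked to match
zfs-mon's "10s" column) :

# Sample the demand metadata hit/miss counters twice, 10 s apart,
# and print the deltas; the ratio should match zfs-mon's "10s" column.
h1=$(sysctl -n kstat.zfs.misc.arcstats.demand_metadata_hits)
m1=$(sysctl -n kstat.zfs.misc.arcstats.demand_metadata_misses)
sleep 10
h2=$(sysctl -n kstat.zfs.misc.arcstats.demand_metadata_hits)
m2=$(sysctl -n kstat.zfs.misc.arcstats.demand_metadata_misses)
echo "hits: $((h2 - h1)) misses: $((m2 - m1))"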