Date:      Sat, 27 Aug 2016 14:52:32 +0930
From:      Shane Ambler <FreeBSD@ShaneWare.Biz>
To:        Ben RUBSON <ben.rubson@gmail.com>, FreeBSD FS <freebsd-fs@freebsd.org>
Subject:   Re: [ZFS] ARC accounting bug ?
Message-ID:  <a89defa1-4335-70b2-25d5-ca43626c844d@ShaneWare.Biz>
In-Reply-To: <C2642B73-83F2-4A1C-88BE-322F376861FF@gmail.com>
References:  <C2642B73-83F2-4A1C-88BE-322F376861FF@gmail.com>

On 26/08/2016 19:09, Ben RUBSON wrote:
> Hello,
>
> Before opening a bug report, I would like to know whether what I see
> is "normal" or not, and why.

> ### Test :
>
> # zpool import mypool
> # zfs set primarycache=metadata mypool

Well that sets primarycache for the pool and every child dataset that
inherits the property. Do any sub-filesystems have local settings?

zfs get -r primarycache mypool

And mypool is the only zpool on the machine?

> # while [ 1 ]; do find /mypool/ >/dev/null; done

> # zfs-mon -a
>
> ZFS real-time cache activity monitor
> Seconds elapsed: 162
>
> Cache hits and misses:
>                                 1s    10s    60s    tot
>                    ARC hits: 79228  76030  73865  74953
>                  ARC misses: 22510  22184  21647  21955
>        ARC demand data hits:     0      0      0      0
>      ARC demand data misses:     4      7      8      7
>    ARC demand metadata hits: 79230  76030  73865  74953
>  ARC demand metadata misses: 22506  22177  21639  21948
>                 ZFETCH hits:    47     29     32     31
>               ZFETCH misses:101669  98138  95433  96830
>
> Cache efficiency percentage:
>                         10s    60s    tot
>                 ARC:  77.41  77.34  77.34
>     ARC demand data:   0.00   0.00   0.00
> ARC demand metadata:  77.42  77.34  77.35
>              ZFETCH:   0.03   0.03   0.03
>
> ### Question :
>
> I don't understand why I have so many ARC misses. There is no other
> activity on the server (as soon as I stop the find loop, no more ARC
> hits). As soon as the first find loop is done, there is no more disk
> activity (according to zpool iostat -v 1), no read/write operations
> on mypool.
> So I'm pretty sure all metadata comes from ARC.
> So why are there so many ARC misses ?
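
For what it's worth, the efficiency percentages are just
hits / (hits + misses); a quick sanity check against the "tot" column
above (numbers taken straight from your zfs-mon output):

```shell
# ARC "tot" hits and misses from the zfs-mon output above
hits=74953
misses=21955

# hits / (hits + misses) * 100 -- should match the 77.34% zfs-mon reports
echo "$hits $misses" | awk '{printf "%.2f\n", $1 / ($1 + $2) * 100}'
```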

Running zfs-mon on my desktop, I seem to get similar results.

What I am seeing leads me to think that not all metadata is cached;
maybe the filename isn't cached, which can be a large string.

while [ 1 ]; do find /usr/ports > /dev/null; done

will list the path to every file and I see about 2 hits to a miss, yet

while [ 1 ]; do ls -lR /usr/ports > /dev/null; done

lists every filename as well as its size, modification date, owner and
permissions, and it sits closer to 4 hits to every miss.

And if the system's disk cache contains the filenames that ZFS isn't
caching, we won't need disk access to service the ZFS misses.
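
You can also watch the raw counters zfs-mon samples, which FreeBSD
exposes as sysctl kstat.zfs.misc.arcstats.*. A rough sketch of checking
the demand metadata hit rate (parsing a pasted sample here, since live
values differ per machine; on a real system feed it
`sysctl kstat.zfs.misc.arcstats` instead):

```shell
# Sample sysctl output; on a live system replace the here-string with:
#   sysctl kstat.zfs.misc.arcstats | grep demand_metadata
sample='kstat.zfs.misc.arcstats.demand_metadata_hits: 74953
kstat.zfs.misc.arcstats.demand_metadata_misses: 21948'

# Pull the two counters and print the hit percentage
echo "$sample" | awk -F': ' '
  /demand_metadata_hits/   { hits = $2 }
  /demand_metadata_misses/ { miss = $2 }
  END { printf "demand metadata hit rate: %.2f%%\n", hits / (hits + miss) * 100 }'
```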

-- 
FreeBSD - the place to B...Storing Data

Shane Ambler



