Date:      Sat, 27 Aug 2016 18:15:18 +0200
From:      Ben RUBSON <ben.rubson@gmail.com>
To:        FreeBSD FS <freebsd-fs@freebsd.org>
Subject:   Re: [ZFS] ARC accounting bug ?
Message-ID:  <71DED907-10BE-44C2-982B-12974152895D@gmail.com>
In-Reply-To: <a89defa1-4335-70b2-25d5-ca43626c844d@ShaneWare.Biz>
References:  <C2642B73-83F2-4A1C-88BE-322F376861FF@gmail.com> <a89defa1-4335-70b2-25d5-ca43626c844d@ShaneWare.Biz>


> On 27 Aug 2016, at 07:22, Shane Ambler <FreeBSD@ShaneWare.Biz> wrote:
>
> On 26/08/2016 19:09, Ben RUBSON wrote:
>> Hello,
>>
>> Before opening a bug report, I would like to know whether what I see
>> is "normal" or not, and why.
>>
>> ### Test :
>>=20
>> # zpool import mypool
>> # zfs set primarycache=metadata mypool
>
> Well that sets the primarycache for the pool and all subsets that
> inherit the property. Do any sub filesystems have local settings?

No.

> zfs get -r primarycache mypool
>
> And mypool is the only zpool on the machine?

Yes.

>> # while [ 1 ]; do find /mypool/ >/dev/null; done
>>
>> # zfs-mon -a
>>=20
>> ZFS real-time cache activity monitor
>> Seconds elapsed: 162
>>
>> Cache hits and misses:
>>                                1s    10s    60s    tot
>>                   ARC hits: 79228  76030  73865  74953
>>                 ARC misses: 22510  22184  21647  21955
>>       ARC demand data hits:     0      0      0      0
>>     ARC demand data misses:     4      7      8      7
>>   ARC demand metadata hits: 79230  76030  73865  74953
>> ARC demand metadata misses: 22506  22177  21639  21948
>>                ZFETCH hits:    47     29     32     31
>>              ZFETCH misses:101669  98138  95433  96830
>>
>> Cache efficiency percentage:
>>                        10s    60s    tot
>>                ARC:  77.41  77.34  77.34
>>    ARC demand data:   0.00   0.00   0.00
>> ARC demand metadata:  77.42  77.34  77.35
>>             ZFETCH:   0.03   0.03   0.03
>>
>> ### Question :
>>=20
>> I don't understand why I have so many ARC misses. There is no other
>> activity on the server (as soon as I stop the find loop, no more ARC
>> hits). As soon as the first find loop is done, there is no more disk
>> activity (according to zpool iostat -v 1), no read/write operations
>> on mypool.
>> So I'm pretty sure all metadata comes from ARC.
>> So why are there so many ARC misses ?
>
> Running zfs-mon on my desktop, I seem to get similar results.

Thank you for testing it, Shane.
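
For reference, the raw counters zfs-mon derives its rates from can also be
watched directly while the find loop runs. A minimal check, assuming the
usual FreeBSD kstat sysctl names (adjust them if they differ on your
release):

# while true; do sysctl kstat.zfs.misc.arcstats.demand_metadata_hits kstat.zfs.misc.arcstats.demand_metadata_misses; sleep 10; done

If demand_metadata_misses keeps climbing at roughly the rate zfs-mon
reports while zpool iostat shows no reads, the misses really are being
accounted without any disk activity behind them.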

> What I am seeing leads me to think that not all metadata is cached,
> maybe the filename isn't cached, which can be a large string.
>
> while [ 1 ]; do find /usr/ports > /dev/null; done
>
> will list the path to every file and I see about 2 hits to a miss, yet
>
> while [ 1 ]; do ls -lR /usr/ports > /dev/null; done
>
> lists every filename as well as its size, mod date, owner, and
> permissions, and it sits closer to 4 hits to every miss.
>
> And if the system disk cache contains the filenames that zfs isn't
> caching, we won't need disk access to get the zfs misses.

Playing with these commands:
# dtrace -n 'sdt:zfs::arc-hit {@[execname, stack()] = count();}'
# dtrace -n 'sdt:zfs::arc-miss {@[execname, stack()] = count();}'

We can see that it is readdir calls which produce the arc-misses, and that
readdir calls also produce arc-hits.

It would be interesting to know why some lead to hits, and some lead to
misses.
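
To quantify that split for the find loop alone, the two probes can be
aggregated side by side. A minimal sketch (the execname predicate is an
assumption, adjust it to whatever drives the loop):

# dtrace -n 'sdt:zfs::arc-hit,sdt:zfs::arc-miss /execname == "find"/ {@[probename] = count();} tick-10s {printa(@); trunc(@);}'

This prints an arc-hit / arc-miss count every 10 seconds for find only,
which should line up with the zfs-mon figures above if find is really the
only consumer.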

(Note that ls -lR and rsync produce exactly the same dtrace results/numbers
as the find command.)
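
To double-check that, the misses can also be broken down per command rather
than per stack. A minimal sketch:

# dtrace -n 'sdt:zfs::arc-miss {@[execname] = count();} tick-60s {exit(0);}'

Running the find, ls -lR and rsync loops in turn should then show whether
they really contribute the same miss counts.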

Ben



