Date:      Mon, 12 Oct 2020 14:06:07 +0200
From:      Stefan Esser <se@freebsd.org>
To:        Allan Jude <allanjude@freebsd.org>, Matthew Macy <mmacy@freebsd.org>, FreeBSD CURRENT <freebsd-current@freebsd.org>
Subject:   OpenZFS: L2ARC shrinking over time?
Message-ID:  <3788e4d8-df6b-4096-ce58-d931583609b4@freebsd.org>

After the switch-over to OpenZFS in -CURRENT I have observed that the
L2ARC shrinks over time (at a rate of 10 to 20 MB per day).

My system uses a 1 TB NVMe SSD partitioned into 64 GB of swap (generally
unused) and 256 GB of ZFS cache (L2ARC) to speed up reads from a 3*6 TB
raidz1.

(L2ARC persistence is great, especially on a system that is used for
development and rebooted into the latest -CURRENT about once per week!)


After a reboot, the full cache partition is available, but even when
measured only minutes apart, the reported size of the L2ARC is declining.

The following two values were obtained just 120 seconds apart:

kstat.zfs.misc.arcstats.l2_asize: 273831726080

kstat.zfs.misc.arcstats.l2_asize: 273831644160

[After finishing the text of this mail I checked the value of that
variable one more time - maybe 10 minutes had passed ...

kstat.zfs.misc.arcstats.l2_asize: 273827724288

That corresponds to some 4 MB lost over maybe 10 minutes ...]
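
A simple loop like the following - just an untested sketch, relying only
on the arcstats sysctl quoted above - makes the decline easy to watch at
a fixed interval:

#!/bin/sh
# Untested sketch: print kstat.zfs.misc.arcstats.l2_asize every 120 seconds
# together with the change since the previous sample.
prev=$(sysctl -n kstat.zfs.misc.arcstats.l2_asize)
while sleep 120; do
	cur=$(sysctl -n kstat.zfs.misc.arcstats.l2_asize)
	printf '%s  l2_asize=%s  delta=%s\n' "$(date +%T)" "$cur" "$((cur - prev))"
	prev=$cur
done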


I first noticed this effect with the zfs-stats command, updated to
support the OpenZFS sysctl variables (committed to ports a few days
ago).

After 6 days of uptime the output of "uptime; zfs-stats -L" is:


12:31PM  up 6 days, 7 mins, 2 users, load averages: 2.67, 0.73, 0.36

------------------------------------------------------------------------
ZFS Subsystem Report				Mon Oct 12 12:31:57 2020
------------------------------------------------------------------------

L2 ARC Summary: (HEALTHY)
	Low Memory Aborts:			87
	Free on Write:				5.81	k
	R/W Clashes:				0
	Bad Checksums:				0
	IO Errors:				0

L2 ARC Size: (Adaptive)				160.09	GiB
	Decompressed Data Size:			373.03	GiB
	Compression Factor:			2.33
	Header Size:			0.12%	458.14	MiB

L2 ARC Evicts:
	Lock Retries:				61
	Upon Reading:				9

L2 ARC Breakdown:				12.66	m
	Hit Ratio:			75.69%	9.58	m
	Miss Ratio:			24.31%	3.08	m
	Feeds:					495.76	k

L2 ARC Writes:
	Writes Sent:			100.00%	48.94	k

------------------------------------------------------------------------


After a reboot and with the persistent L2ARC now reported to be
available again (and filled with the expected amount of data):


13:24  up 28 mins, 2 users, load averages: 0.09 0.05 0.01

------------------------------------------------------------------------
ZFS Subsystem Report				Mon Oct 12 13:24:56 2020
------------------------------------------------------------------------

L2 ARC Summary: (HEALTHY)
	Low Memory Aborts:			0
	Free on Write:				0
	R/W Clashes:				0
	Bad Checksums:				0
	IO Errors:				0

L2 ARC Size: (Adaptive)				255.03	GiB
	Decompressed Data Size:			633.21	GiB
	Compression Factor:			2.48
	Header Size:			0.14%	901.41	MiB

L2 ARC Breakdown:				9.11	k
	Hit Ratio:			35.44%	3.23	k
	Miss Ratio:			64.56%	5.88	k
	Feeds:					1.57	k

L2 ARC Writes:
	Writes Sent:			100.00%	205

------------------------------------------------------------------------

I do not know whether this is just an accounting effect, or whether the
usable size of the L2ARC is actually shrinking, but since there is data
in the L2ARC after the reboot, I assume it is just an accounting error.
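
One way to tell the two cases apart would be to correlate the shrinking
l2_asize with the eviction-related counters (the ones zfs-stats reports
as "Free on Write" and the L2 ARC evict statistics). An untested sketch,
again only using the standard arcstats sysctls:

#!/bin/sh
# Untested sketch: sample a few L2ARC counters twice, 10 minutes apart.
# If l2_size/l2_asize drop without matching eviction or free-on-write
# activity, that points at an accounting problem rather than data actually
# being dropped from the cache device.
stats="l2_size l2_asize l2_evict_lock_retry l2_evict_reading l2_free_on_write"
for s in $stats; do
	eval ${s}_0=$(sysctl -n kstat.zfs.misc.arcstats.$s)
done
sleep 600
for s in $stats; do
	eval v0=\$${s}_0
	v1=$(sysctl -n kstat.zfs.misc.arcstats.$s)
	printf '%-22s %15s -> %15s  (delta %s)\n' "$s" "$v0" "$v1" "$((v1 - v0))"
done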

But I think this should still be researched and fixed - the value might
wrap around (underflow) after several weeks of uptime, and if the size
value is not only used for display purposes, this might lead to
unexpected behavior.
