Date:      Mon, 16 Jul 2018 19:20:09 +0000
From:      bugzilla-noreply@freebsd.org
To:        bugs@FreeBSD.org
Subject:   [Bug 229670] ZFS ARC limit vfs.zfs.arc_max from /boot/loader.conf is not respected
Message-ID:  <bug-229670-227-OWJjKy2gU9@https.bugs.freebsd.org/bugzilla/>
In-Reply-To: <bug-229670-227@https.bugs.freebsd.org/bugzilla/>
References:  <bug-229670-227@https.bugs.freebsd.org/bugzilla/>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=229670

--- Comment #7 from Leif Pedersen <leif@ofWilsonCreek.com> ---
The machines I have observed this on vary in zpool sizes.

With regard to the "rule of thumb", one machine which behaves particularly
horribly has a single zpool sized at 256GB. It has only 10GB referenced
(meaning non-snapshot data) and less than 20k inodes. A linear interpretation
of the rule of thumb suggests that just 10MB should be enough ARC, although I
don't expect it to scale down that low. On this one, arc_max is set to 256MB,
but the ARC runs well over 1 GB. I don't know how high it would go if left
alone, since it only has 2 GB of RAM to begin with, so when it gets that big
I have to reboot it. This one is an AWS VM.

For another example, I have a physical machine with 6GB of RAM, with arc_max
set to 256MB and top showing the ARC at 2GB. This one is a bit bigger -- it
has 1.4TB across 2 zpools. It does rsync-style backups for three other
machines, so there's a relatively large number of filenames. The second zpool
(for the backups) has roughly 5M inodes with roughly 70-75M filenames (up to
15 names per inode), with most of its inodes read in a short time span.
However, I've been running this system with these backups on ZFS for years,
at least as far back as FreeBSD 9, without memory problems. While it isn't a
huge system, it was always very stable in the past.

While I don't see this issue on larger machines (with 128GB RAM or more, for
example), I don't believe this is about a minimum memory requirement, for a
few reasons. To begin with, the machines are not insanely tiny or running
with a wildly unbalanced disk/RAM ratio. Also, if there were a hard minimum
requirement, then sysctl should throw an error. Finally, sysctl reports
vfs.zfs.arc_meta_limit at ~67MB on both, which is much lower than arc_max.
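
For reference, this is how I compare those values (a minimal sketch; the
sysctl names are the standard ZFS ones on FreeBSD 11.x, and the byte values
shown are from my machines, not universal):

```shell
# Compare the configured ARC cap against the live ARC size and metadata limit.
sysctl -n vfs.zfs.arc_max                # configured cap, in bytes (268435456 = 256MB here)
sysctl -n kstat.zfs.misc.arcstats.size   # current ARC size, in bytes (well over 1GB here)
sysctl -n vfs.zfs.arc_meta_limit         # metadata limit (~67MB on both boxes)
```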

However, I retract my remark about it maybe being from a recent update,
because uname on the AWS machine reports 11.1-RELEASE-p4. (I often don't
reboot after updating unless the kernel has a serious vulnerability, and this
one has been up for 109 days.)

Again, mine are 11.1 with the latest patches by freebsd-update. I could try
upgrading to 11.2 if it would be an interesting data point.

>The patch in review is about ARC releasing its cache...

This patch would likely help, particularly since these examples don't have
swap. It seems likely to alleviate my need to meddle with arc_max, which
would be great. However, I'd argue that it's still a bug that arc_max is
apparently completely ignored. And now that I think about it, it's also still
a bug that OOM-killing processes is preferred to swapping OR evacuating ARC,
unless that patch fixes that as well.

I'd swear I remember that, fairly recently, I tried changing arc_max and top
immediately showed the ARC chopped off at the new setting. If I remember that
right, then this is clearly a regression...but the details of that memory are
vague at this point.
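
What I remember trying looked roughly like this (hedged: on 11.x,
vfs.zfs.arc_max is writable at runtime via sysctl, and the 256MB value
matches the loader.conf setting discussed above):

```shell
# Lower the ARC cap at runtime; previously top's ARC line shrank to fit
# almost immediately after this.
sysctl vfs.zfs.arc_max=268435456   # 256MB in bytes
```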

-- 
You are receiving this mail because:
You are the assignee for the bug.


