Date:      Wed, 19 Mar 2014 08:06:50 -0500
From:      Karl Denninger <karl@denninger.net>
To:        freebsd-fs@freebsd.org
Subject:   Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
Message-ID:  <5329966A.60308@denninger.net>
In-Reply-To: <532992B8.4090407@netlabs.org>
References:  <201403181520.s2IFK1M3069036@freefall.freebsd.org> <53288024.2060005@denninger.net> <53288629.60309@FreeBSD.org> <532992B8.4090407@netlabs.org>



On 3/19/2014 7:51 AM, Adrian Gschwend wrote:
> On 18.03.14 18:45, Andriy Gapon wrote:
>
>>> This is consistent with what I and others have observed on both 9.2
>>> and 10.0: the ARC will expand until it hits the configured maximum,
>>> even at the expense of forcing pages onto swap. In this specific
>>> machine's case, left at the defaults it will grab nearly all physical
>>> memory (over 20GB of 24) and wire it down.
>> Well, this does not match my experience from before 10.x times.
> I reported the issue on which Karl gave feedback and developed the
> patch. The original thread of my report started here:
>
> http://lists.freebsd.org/pipermail/freebsd-fs/2014-March/019043.html
>
> Note that I don't have big memory eaters like VMs; it's just a bunch of
> jails and the services running in them, including some JVMs.
>
> Check out the munin graphs before and after:
>
> Daily graph, which does not seem to grow much anymore:
> http://ktk.netlabs.org/misc/munin-mem-zfs1.png
>
> Weekly:
> http://ktk.netlabs.org/misc/munin-mem-zfs2.png
>
> You can actually see where I activated the patch (16.3); the system has
> behaved *much* better since then. I did one more reboot, which is why
> the graph dips again, but I have not rebooted since.
>
> During the moments where munin did not report anything, the system was
> stuck in the ARC-swap lockup and virtually dead. Working on the system
> now, it feels like a new machine; everything is super fast and snappy.
>
> I don't understand much of the discussions you guys are having, but I'm
> pretty sure Karl fixed an issue that has given me headaches on FreeBSD
> for years. I first saw it in 8.x, when I started using ZFS in
> production, and I have seen it in every 9.x release as well, up to this
> patch.
>
> regards
>
> Adrian
>
I have a newer version of this patch addressing the criticisms raised on 
gnats; it is being tested now.

The salient difference is that it now does two things differently:

1. It grabs the VM "first level" warning (vm_v_free_target), deducts 20% 
from it, and uses the result as the low-RAM warning level.

2. It also allows setting a freemem reservation, expressed as a 
percentage, as an "additional" reservation on top of the low-RAM warning 
level.

Both are exposed via sysctl and thus can be tuned at runtime.
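As a rough sketch of how the two levels combine (all names and numbers 
here are illustrative stand-ins, not taken from the actual arc.c patch):

```python
# Illustrative model of the two thresholds described above. Values are
# in pages; v_free_target stands in for the kernel's vm_v_free_target.

def arc_low_ram_warning(v_free_target: int) -> int:
    """Low-RAM warning level: the VM first-level warning minus 20%."""
    return v_free_target - v_free_target // 5

def arc_free_floor(v_free_target: int, total_pages: int,
                   reserve_pct: int) -> int:
    """Warning level plus the additional percentage-of-RAM reservation."""
    return arc_low_ram_warning(v_free_target) + \
        total_pages * reserve_pct // 100

# Example: a 24 GB machine with 4 KiB pages and a 1% extra reservation.
total_pages = 24 * 1024 * 1024 // 4          # 6,291,456 pages
print(arc_low_ram_warning(50_000))           # warning level for a 50k target
print(arc_free_floor(50_000, total_pages, 1))
```

With the reservation knob at zero this collapses back to just the 
warning level, which is what lets the defaults reassert themselves when 
you reset the sysctls.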

The reason for the change is a legitimate criticism: the pager may allow 
inactive pages to grow without bound if the system never reaches the VM 
system's first warning level on free pages; that is, it is never called 
upon to perform page stealing.  "Never" seems like a bad decision 
(shouldn't you clean things up eventually anyway?), but it is what it 
is, and the VM system has proved over time to be stable and fast.  For 
mixed workloads I can see where there could be trouble, in that the ARC 
cache could be convinced to evict unnecessarily.  Unbounded 
inactive-page growth doesn't happen on my systems here, but since it 
might, and appears to be reasonably easy to defend against without 
causing other bad side effects, it seems worth eliminating as a 
potential problem.

So instead I try to be more intelligent about choosing the ARC eviction 
level: I want it in the zone where the system will steal pages back, but 
I *do not*, under any circumstance, want to allow vm.v_free_min to be 
invaded, because that is where processes asking for memory get 
**SUSPENDED** (that is, where stalls start to happen.)

Since the knobs are exposed, you can get the behavior you have now if 
you want it, or you can leave them alone and let the code choose what it 
thinks are intelligent values.  If you diddle the knobs and don't like 
the result, you can reset the percentage reservation to zero along with 
freepages, and the system will pick up the defaults again in real time, 
without rebooting.

Also, and very importantly, with the knobs exposed I can now trivially 
provoke an INTENTIONAL stall: set the reservation down far enough (which 
effectively reverts to the system only paring cache when paging_needed 
is set, as with the default arc.c as shipped), then simply copy a huge 
file (big enough to fill up the cache) to /dev/null, and bang -- an 
INSTANT 15-second stall.  Turn the reservation back up so the ARC cache 
is not allowed to drive the system into hard paging, and the problem 
disappears.

I'm going to let it run through the day today before sending it up; it 
ran overnight without problems and looks good, but I want to see it 
through a heavy-load period before publishing it.

I note that there are list complaints about this behavior going back to 
at least 2010.....

-- 
-- Karl
karl@denninger.net



