Date: Tue, 14 Jul 2015 07:01:47 -0500
From: Karl Denninger <karl@denninger.net>
To: freebsd-stable@freebsd.org
Subject: Re: FreeBSD 10.1 Memory Exhaustion
Message-ID: <55A4FA2B.5050903@denninger.net>
In-Reply-To: <55A4BF06.4060505@ShaneWare.Biz>
References: <CAB2_NwCngPqFH4q-YZk00RO_aVF9JraeSsVX3xS0z5EV3YGa1Q@mail.gmail.com>
 <CAJ-Vmom58SjgOG7HYPE4MVaB=XPaEkx_OTYgvOTHxwqGnTxtug@mail.gmail.com>
 <55A3F9E1.9090901@denninger.net> <55A4BF06.4060505@ShaneWare.Biz>
On 7/14/2015 02:49, Shane Ambler wrote:
> On 14/07/2015 03:18, Karl Denninger wrote:
>
>> The ARC is supposed to auto-size and use all available free memory. The
>> problem is that the VM system and ARC system both make assumptions that
>> under certain load patterns fight with one another, and when this
>> happens and ARC wins the system gets in trouble FAST. The pattern is
>> that the system will start to page RSS out rather than evict ARC, ARC
>> will fill the freed space, it pages more RSS out... you see where this
>> winds up heading, yes?
>>
>
> Something I noticed was that vfs.zfs.arc_free_target is smaller
> than vm.v_free_target
>
> on my desktop with 8GB I get
> vfs.zfs.arc_free_target: 14091
> vm.v_free_target: 43195
>
> Doesn't that cause ARC allocation to trigger swapping, which frees
> space for yet more ARC allocation?
>
Yes and no.
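The runaway described in the quoted message can be sketched as a toy model
(all numbers here are invented, in arbitrary KB units; this illustrates the
feedback loop, it is not the actual VM/ARC code):

```shell
# Toy model: the VM system pages RSS out to reach its free target, but the
# ARC's own (lower) free target lets it grow right back into the freed space.
free=1000; arc=4000; rss=3000
v_free_target=2000; arc_free_target=500
for step in 1 2 3; do
    if [ "$free" -lt "$v_free_target" ] && [ "$free" -ge "$arc_free_target" ]; then
        rss=$((rss - 500)); free=$((free + 500))   # VM pages RSS out
        arc=$((arc + 500)); free=$((free - 500))   # ARC refills the space
    fi
    echo "step $step: free=$free arc=$arc rss=$rss"
done
```

Each pass shrinks RSS and grows the ARC while free memory never actually
improves, which is the spiral described above.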
On my system with the patch:
vm.v_free_target: 130312
vm.stats.vm.v_free_target: 130312
vfs.zfs.arc_free_target: 86375
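Both sysctls count 4 KiB pages, so a quick conversion puts those targets in
more familiar units (values copied from above; a back-of-the-envelope sketch,
not tuning advice):

```shell
# Convert free-target sysctl values (4 KiB pages) to MiB: pages / 256.
arc_free_target=86375    # vfs.zfs.arc_free_target, from above
v_free_target=130312     # vm.v_free_target, from above
echo "arc_free_target: $((arc_free_target / 256)) MiB"
echo "v_free_target:   $((v_free_target / 256)) MiB"
echo "gap:             $(((v_free_target - arc_free_target) / 256)) MiB"
```

That gap is the window in which the VM system is already short of its own
target while the ARC still considers free memory plentiful.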
and...
[karl@NewFS ~]$ pstat -s
Device 1K-blocks Used Avail Capacity
/dev/mirror/sw.eli 67108860 0 67108860 0%
No swapping :-)
It's not busy right now, but this is what the system looks like at the
moment...
(systat -vmstat snapshot, reflowed; the interleaved per-device interrupt
column was garbled in transit, so only the total is kept)

 1 users    Load  0.22  0.28  0.32                     Jul 14 06:57

 Mem:KB    REAL             VIRTUAL                 VN PAGER   SWAP PAGER
           Tot      Share   Tot       Share   Free   in  out    in  out
 Act   2009856    39980   7884504   92820   937732
 All    17499k    52212   8727248  381980

 Proc: r 2  p 251  d 1   Csw 9264  Trp 3332  Sys 3982  Int 1134  Sof 181  Flt 2174

 0.4%Sys  0.1%Intr  0.8%User  0.0%Nice  98.8%Idle

 Namei cache: 7109 calls, 7026 hits (99%)

 21089128 wire   1153712 act   1281556 inact   20372 cache   916480 free

 Disks    da1    da2    da3    da4    da5    da6    da7
 KB/t    0.00  11.41  10.84  11.68  11.60   0.00   0.00
 tps        0     21     24     22     21      0      0
 MB/s    0.00   0.23   0.25   0.25   0.24   0.00   0.00
 %busy      0      4      5      5      4      0      0

 Interrupts: 2638 total
Most of that wired memory is in ARC...
------------------------------------------------------------------------
ZFS Subsystem Report Tue Jul 14 07:00:29 2015
------------------------------------------------------------------------
ARC Summary: (HEALTHY)
Memory Throttle Count: 0
ARC Misc:
Deleted: 53.54m
Recycle Misses: 15.12m
Mutex Misses: 6.63k
Evict Skips: 275.51m
ARC Size: 75.59% 16.88 GiB
Target Size: (Adaptive) 75.73% 16.91 GiB
Min Size (Hard Limit): 12.50% 2.79 GiB
Max Size (High Water): 8:1 22.33 GiB
ARC Size Breakdown:
Recently Used Cache Size: 58.52% 9.89 GiB
Frequently Used Cache Size: 41.48% 7.01 GiB
ARC Hash Breakdown:
Elements Max: 1.72m
Elements Current: 58.40% 1.00m
Collisions: 50.07m
Chain Max: 8
Chains: 119.31k
------------------------------------------------------------------------
ARC Efficiency: 2.01b
Cache Hit Ratio: 81.50% 1.64b
Cache Miss Ratio: 18.50% 371.70m
Actual Hit Ratio: 79.46% 1.60b
Data Demand Efficiency: 83.00% 1.60b
Data Prefetch Efficiency: 15.11% 21.33m
CACHE HITS BY CACHE LIST:
Anonymously Used: 1.79% 29.34m
Most Recently Used: 6.36% 104.08m
Most Frequently Used: 91.14% 1.49b
Most Recently Used Ghost: 0.09% 1.40m
Most Frequently Used Ghost: 0.62% 10.17m
CACHE HITS BY DATA TYPE:
Demand Data: 81.12% 1.33b
Prefetch Data: 0.20% 3.22m
Demand Metadata: 16.06% 262.92m
Prefetch Metadata: 2.62% 42.89m
CACHE MISSES BY DATA TYPE:
Demand Data: 73.17% 271.97m
Prefetch Data: 4.87% 18.11m
Demand Metadata: 17.75% 65.97m
Prefetch Metadata: 4.21% 15.65m
------------------------------------------------------------------------
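As a rough cross-check of the "mostly ARC" claim, compare the 21089128 KB
wired figure from the systat snapshot with the 16.88 GiB ARC size in the
report above (integer shell arithmetic; a ballpark sketch only, since wired
memory also includes the kernel itself):

```shell
wired_kb=21089128                     # wired memory, from the systat snapshot
arc_kb=$((1688 * 1048576 / 100))      # 16.88 GiB expressed in KB
pct=$((arc_kb * 100 / wired_kb))
echo "ARC accounts for roughly ${pct}% of wired memory"
```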
--
Karl Denninger
karl@denninger.net
/The Market Ticker/
/[S/MIME encrypted email preferred]/