Date: Tue, 18 Mar 2014 06:06:09 -0500
From: Karl Denninger <karl@denninger.net>
To: freebsd-fs@freebsd.org
Subject: Re: Reoccurring ZFS performance problems [RESOLVED]
Message-ID: <532828A1.6080605@denninger.net>
In-Reply-To: <c5a61ac31b121de10f9b065967fe1ae3@mail.mikej.com>
References: <531E2406.8010301@denninger.net>
 <5320A0E8.2070406@denninger.net>
 <5322E64E.8020009@denninger.net>
 <CA+D9QhtguPKD9zQ35246LkMt5gTU6MJ+ZBigoznk7FHQ4R0nhA@mail.gmail.com>
 <CA+D9Qht_7dW6gahTFKz7B9+pgJemkFoPLLcASmPvCfGytxF8cQ@mail.gmail.com>
 <53236BF3.9060500@denninger.net>
 <CA+D9QhstDyetoA5HdwyMA1BOyafMJ+brxDEvdNHUXFPa3YGPtg@mail.gmail.com>
 <c5a61ac31b121de10f9b065967fe1ae3@mail.mikej.com>
[-- Attachment #1 --]

On 3/18/2014 5:26 AM, mikej wrote:
> On 2014-03-14 19:04, Matthias Gamsjager wrote:
>> Much better, thx :)
>>
>> Will this patch be reviewed by some kernel devs and merged?
>> _______________________________________________
>> freebsd-fs@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>
> I am a little surprised this thread has been so quiet. I have been
> running with this patch, and my desktop is more pleasant when memory
> demand is high -- no more swapping -- and wired no longer grows
> uncontrollably.
>
> Is more review coming? The silence is deafening.

It makes an utterly enormous difference here. This is what one of my
"nasty-busy" servers looks like this morning (it hosts a very busy blog
along with other things, and is pretty quiet right now -- but it won't
be in a couple of hours):

[systat -vmstat snapshot, condensed to the readable counters:]

    1 user     Load 0.22 0.25 0.21                    Mar 18 05:55
    CPU:  0.6% user, 0.0% nice, 0.4% sys, 0.1% intr, 99.0% idle
    Mem:  17177460K wire, 2131860K act, 2158808K inact,
          7512K cache, 2986396K free
    Disk MB/s:  ada0 0.00   da0 0.15   da1 0.15   da2 0.00
                da3 0.00   da4 0.15   da5 0.47

Here's the ARC cache:

[karl@NewFS ~]$ zfs-stats -A

------------------------------------------------------------------------
ZFS Subsystem Report                            Tue Mar 18 05:56:42 2014
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                1.55m
        Recycle Misses:                         66.33k
        Mutex Misses:                           1.55k
        Evict Skips:                            4.14m

ARC Size:                               60.01%  13.40   GiB
        Target Size: (Adaptive)         60.01%  13.40   GiB
        Min Size (Hard Limit):          12.50%  2.79    GiB
        Max Size (High Water):          8:1     22.33   GiB

ARC Size Breakdown:
        Recently Used Cache Size:       79.13%  10.60   GiB
        Frequently Used Cache Size:     20.87%  2.80    GiB

ARC Hash Breakdown:
        Elements Max:                           1.34m
        Elements Current:               62.76%  840.43k
        Collisions:                             7.02m
        Chain Max:                              13
        Chains:                                 247.65k

------------------------------------------------------------------------

Note the scale-down from the maximum -- this is with:

[karl@NewFS ~]$ sysctl -a | grep percent
vfs.zfs.arc_freepage_percent_target: 10

My test machine has a lot less memory in it, and there the default (25%)
appears to be a good value.

Before this delta went into the code, this system would have tried to
grab the entire 22 GB to the exclusion of everything else. What I used
to do was limit the ARC to 16 GB via arc_max, which was fine in the
mornings and overnight, but during the day it didn't cut it -- and there
was no way to change it without a reboot, either. This particular
machine has 24 GB of RAM and provides services both externally and
internally (on separate interfaces).

How efficient is the cache?
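For contrast, here is how the two knobs differ in practice. This is a
sketch, not part of the original post: the stock `vfs.zfs.arc_max` cap
Karl describes is a boot-time loader tunable (hence "no way to change it
without a reboot"), while the patch's `vfs.zfs.arc_freepage_percent_target`
is exposed as a runtime sysctl (the name and semantics are taken from the
output above; the 16G figure is just the example from the text):

```shell
# /boot/loader.conf -- the old approach: a hard ARC cap, fixed at boot.
# Changing it requires editing this file and rebooting.
#vfs.zfs.arc_max="16G"

# With the patch applied, the free-memory target can be adjusted live
# instead of hard-capping the ARC (10 = keep ~10% of pages free):
sysctl vfs.zfs.arc_freepage_percent_target=10
```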
[karl@NewFS ~]$ zfs-stats -E

------------------------------------------------------------------------
ZFS Subsystem Report                            Tue Mar 18 05:59:01 2014
------------------------------------------------------------------------

ARC Efficiency:                                 81.13m
        Cache Hit Ratio:                97.84%  79.38m
        Cache Miss Ratio:               2.16%   1.75m
        Actual Hit Ratio:               69.81%  56.64m

        Data Demand Efficiency:         99.09%  50.37m
        Data Prefetch Efficiency:       28.77%  1.46m

        CACHE HITS BY CACHE LIST:
          Anonymously Used:             28.48%  22.61m
          Most Recently Used:           6.81%   5.40m
          Most Frequently Used:         64.54%  51.23m
          Most Recently Used Ghost:     0.03%   24.86k
          Most Frequently Used Ghost:   0.13%   104.39k

        CACHE HITS BY DATA TYPE:
          Demand Data:                  62.88%  49.91m
          Prefetch Data:                0.53%   419.73k
          Demand Metadata:              8.28%   6.57m
          Prefetch Metadata:            28.31%  22.47m

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  26.03%  456.20k
          Prefetch Data:                59.29%  1.04m
          Demand Metadata:              9.84%   172.53k
          Prefetch Metadata:            4.84%   84.81k

------------------------------------------------------------------------

-- 
Karl Denninger
karl@denninger.net

[-- Attachment #2: S/MIME cryptographic signature (binary, omitted) --]
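The hit-ratio percentages in the report above are simply
hits / (hits + misses), computed from the raw ARC counters. A quick
sketch (not from the original post) using awk, with the rounded figures
read off the summary (~79.38m hits, ~1.75m misses) standing in for the
exact kstat values:

```shell
#!/bin/sh
# Reproduce the "Cache Hit Ratio" figure the way zfs-stats presents it:
# hits / (hits + misses) * 100, printed to two decimal places.
hits=79380000      # ~79.38m cache hits from the report above
misses=1750000     # ~1.75m cache misses
awk -v h="$hits" -v m="$misses" \
    'BEGIN { printf "%.2f%%\n", 100 * h / (h + m) }'
# prints 97.84%
```

On a live system the exact counters would come from
`sysctl kstat.zfs.misc.arcstats.hits` and `.misses` rather than the
rounded report values.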
