Date: Tue, 18 Mar 2014 08:50:29 -0500 From: Karl Denninger <karl@denninger.net> To: freebsd-fs@freebsd.org Subject: Re: Reoccurring ZFS performance problems [RESOLVED] Message-ID: <53284F25.9070007@denninger.net> In-Reply-To: <53284973.8010203@netlabs.org> References: <531E2406.8010301@denninger.net> <5320A0E8.2070406@denninger.net> <5322E64E.8020009@denninger.net> <CA%2BD9QhtguPKD9zQ35246LkMt5gTU6MJ%2BZBigoznk7FHQ4R0nhA@mail.gmail.com> <CA%2BD9Qht_7dW6gahTFKz7B9%2BpgJemkFoPLLcASmPvCfGytxF8cQ@mail.gmail.com> <53236BF3.9060500@denninger.net> <CA%2BD9QhstDyetoA5HdwyMA1BOyafMJ%2BbrxDEvdNHUXFPa3YGPtg@mail.gmail.com> <c5a61ac31b121de10f9b065967fe1ae3@mail.mikej.com> <53284973.8010203@netlabs.org>
On 3/18/2014 8:26 AM, Adrian Gschwend wrote:
> On 18.03.14 11:26, mikej wrote:
>
>> I am a little surprised this thread has been so quiet.  I have been
>> running with this patch and my desktop is more pleasant when memory
>> demands are great - no more swapping - and wired no longer grows
>> uncontrollably.
>>
>> Is more review coming?  The silence is deafening.
>
> Same here; it works very nicely so far, and memory growth looks much
> more controlled now.  Before, within no time my server had all 16GB of
> RAM wired; now it's growing only slowly.
>
> It's too early to say whether my performance degradation is gone, but
> it certainly looks very good so far.
>
> Thanks again to Karl for the patch!  I hope others test it and
> integrate it soon.

Watch "zfs-stats -A"; you will see what the system has adapted to, as
opposed to the hard limits in arc_max and arc_min.  Changes upward in
the reservation percentage are reflected almost instantly in reduced
allocation, whereas changes downward take effect only slowly: there is
a timed lockout in the cache code that prevents it from grabbing more
space immediately after it was throttled back, and the ARC in general
only grows when I/O that is not already cached occurs, so that new data
becomes available to cache for later re-use.

The nice thing about the way it behaves now is that it releases memory
immediately when required by other demands on the system; but if your
active and inactive page counts shrink as process images release RAM
back through the cache and then to the free list, the ARC is also
allowed to expand as I/O demand diversity warrants.  That was clearly
the original design intent, but it was being badly frustrated by the
former cache memory allocation behavior.

There is an argument for not including cache pages in the "used" bucket
(that is, counting them as "free" instead); the way I coded it is a bit
more conservative than going the other way.
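The asymmetry described above (immediate shrink, delayed re-growth) can
be sketched as a toy model.  This is illustrative only: the class,
method names, and the 5-tick cooldown are inventions for this sketch,
not the actual kernel identifiers, though the lockout is similar in
spirit to ZFS's arc_grow_retry throttle.

```python
# Toy model of the asymmetric ARC target adjustment: shrinks take
# effect immediately, while growth is refused for a cooldown period
# after any shrink (the timed lockout described above).
class ArcModel:
    def __init__(self, target, c_min, c_max):
        self.target = target          # current adapted target size, bytes
        self.c_min = c_min            # hard floor (arc_min)
        self.c_max = c_max            # hard ceiling (arc_max)
        self.no_grow_until = 0        # tick before which growth is refused

    def shrink(self, nbytes, now):
        """Memory pressure: give back space immediately."""
        self.target = max(self.c_min, self.target - nbytes)
        self.no_grow_until = now + 5  # cooldown length is illustrative

    def try_grow(self, nbytes, now):
        """A cache miss wants more space: honored only once the
        post-shrink lockout has expired."""
        if now < self.no_grow_until:
            return                    # still throttled
        self.target = min(self.c_max, self.target + nbytes)

arc = ArcModel(target=8000, c_min=1000, c_max=16000)
arc.shrink(3000, now=1)       # pressure: drops to 5000 at once
arc.try_grow(2000, now=3)     # within cooldown: refused, stays 5000
arc.try_grow(2000, now=7)     # cooldown expired: grows to 7000
print(arc.target)             # -> 7000
```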
Given the design of the VM subsystem, either approach is arguably
acceptable, since a cache page can be freed when RAM is demanded.  I
decided not to do that for two reasons.  First, a page that is in the
cache bucket could be reactivated, and if it is, you will have to
release that ARC cache memory; economy of action suggests not doing
something you might quickly have to undo.  Second, my experience over
roughly a decade of using FreeBSD supports the argument that the VM
implementation is arguably FreeBSD's greatest strength, especially
under stress; by allowing it to do its job, rather than trying to
"push" the VM system toward a particular outcome, the philosophy of
trusting the component that is believed to know what it's doing is
maintained.

--
-- Karl
karl@denninger.net
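The "used vs. free" accounting choice in the two preceding paragraphs
can be illustrated with a sketch.  The function names and numbers here
are hypothetical (they are not FreeBSD vmmeter fields); the point is
only how the two conventions change the headroom the ARC believes it
has.

```python
# A cache page is clean and reclaimable, so it could be counted either
# as "used" (conservative: the ARC sees less headroom and yields memory
# sooner) or as "free" (aggressive: the VM can reclaim it on demand).
def headroom_conservative(free_pages, cache_pages):
    """Cache pages counted as used: only truly free pages are headroom."""
    return free_pages

def headroom_aggressive(free_pages, cache_pages):
    """Cache pages counted as free: reclaimable pages included."""
    return free_pages + cache_pages

# With 2,000 free pages and 10,000 cache pages, the conservative view
# reports far less headroom, so the ARC backs off earlier - at the cost
# of never counting a page it might have to give back anyway if that
# page is reactivated.
print(headroom_conservative(2000, 10000))   # -> 2000
print(headroom_aggressive(2000, 10000))     # -> 12000
```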
