Date: Wed, 5 Nov 2014 08:44:40 +0000
From: Karli Sjöberg <Karli.Sjoberg@slu.se>
To: "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject: Differences in memory handling on systems with/out cache drives
Message-ID: <5F9E965F5A80BC468BE5F40576769F099DF87F57@exchange2-1>
Hey all!

Still investigating the intermittent lockups we are experiencing on our storage systems, and I have started to compare memory graphs from our Graphite monitoring system. What's interesting about two of our systems is that they both have the same amount of RAM: 32 GB. On one of them I have "zpool remove"'d the cache drives from the pool, and have since been able to study how differently their memory graphs now look.

Also worth noting: the cache-less system has barely swapped at all (1112K) since the last stall 20 days ago, while the other system has swapped 78 MB during its 48 days of uptime.

I've attached screenshots from both systems, with and without cache drives, each covering a period of 12 hours. Most notable are the characteristic cuts on the cache-less system when ZFS goes in and evicts blocks from the ARC, which show as a decrease in "wired" and an increase in "free". That pattern either doesn't happen, or looks different, on the system with cache drives configured in the pool.

What's your take on this? Are we perhaps hitting bug 187594? How can we know?

--
Med Vänliga Hälsningar (Best Regards)
-------------------------------------------------------------------------------
Karli Sjöberg
Swedish University of Agricultural Sciences
Box 7079 (Visiting Address Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone: +46-(0)18-67 15 66
karli.sjoberg@slu.se
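[For readers following along: the "zpool remove" step and the ARC observations above can be sketched as below. The pool name "tank" and device name "ada2" are placeholders, not taken from the original post.]

```shell
# Sketch, assuming a pool named "tank" with an L2ARC (cache) device "ada2";
# both names are placeholders for illustration.

# Identify the cache device in the pool layout:
zpool status tank

# Remove the cache device at runtime. This is safe: the L2ARC holds
# only copies of blocks that already live on the main pool vdevs.
zpool remove tank ada2

# On FreeBSD, ARC sizing and eviction counters are exposed as kstats
# under sysctl; these can be graphed alongside "wired"/"free" memory:
sysctl kstat.zfs.misc.arcstats.size \
       kstat.zfs.misc.arcstats.c \
       kstat.zfs.misc.arcstats.evict_l2_eligible
```

Comparing `arcstats.size` against the "wired" drops in the graphs would help confirm whether the cuts really are ARC evictions.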
