Date: Wed, 5 Nov 2014 01:15:40 -0500
From: Marcus Reid <marcus@blazingdot.com>
To: Steven Hartland <killing@multiplay.co.uk>
Cc: gibbs@freebsd.org, George Kola <george.kola@voxer.com>, freebsd-current@freebsd.org, Allan Jude <allanjude@freebsd.org>
Subject: Re: r273165. ZFS ARC: possible memory leak to Inact
Message-ID: <20141105061540.GA14812@blazingdot.com>
In-Reply-To: <54591758.7000909@multiplay.co.uk>
References: <1415098949.596412362.8vxee7kf@frv41.fwdcdn.com> <5458CCB6.7020602@multiplay.co.uk> <1415107358607-5962421.post@n5.nabble.com> <54590B55.3040206@freebsd.org> <54591758.7000909@multiplay.co.uk>
On Tue, Nov 04, 2014 at 06:13:44PM +0000, Steven Hartland wrote:
> 
> On 04/11/2014 17:22, Allan Jude wrote:
> > snip...
> > Justin Gibbs and I were helping George from Voxer look at the same issue
> > they are having. They had ~169GB in Inact, and only ~60GB being used for
> > ARC.
> >
> > Are there any further debugging steps we can recommend to him to help
> > investigate this?
> The various scripts attached to the ZFS ARC behavior problem and fix PR
> will help provide detail on this:
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594
>
> I've seen it here where there have been bursts of ZFS I/O, specifically
> write bursts.
>
> What happens is that ZFS will consume large amounts of space in various
> UMA zones to accommodate these bursts.

If you push the vmstat -z that he provided through the arc summary script,
you'll see that this is not what is happening. His UMA stats match up with
his ARC, and do not account for his inactive memory.

uma script summary:

  Totals
          oused: 5.860GB, ofree: 1.547GB, ototal: 7.407GB
          zused: 56.166GB, zfree: 3.918GB, ztotal: 60.084GB
          used: 62.026GB, free: 5.465GB, total: 67.491GB

His provided top stats:

  Mem: 19G Active, 20G Inact, 81G Wired, 59M Cache, 3308M Buf, 4918M Free
  ARC: 66G Total, 6926M MFU, 54G MRU, 8069K Anon, 899M Header, 5129M Other

The big UMA buckets (zio_buf_16384 and zio_data_buf_131072, at 18.002GB and
28.802GB respectively) are nearly 0% free.

Marcus

> The VM only triggers UMA reclaim when it sees pressure; however, if the
> main memory consumer is the ZFS ARC, it's possible that the required
> pressure will never be applied, because ZFS takes free memory into
> account when allocating ARC.
>
> The result is that it will back off its memory requirements before the
> reclaim is triggered, leaving all the space allocated but not used.
>
> I was playing around with a patch, on that bug report, which adds clear
> down of UMA within the ZFS ARC to avoid just this behavior, but it's very
> much me playing, for testing the theory only.
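(For reference, the zused/zfree/ztotal figures above can be approximated
straight from vmstat -z. A minimal awk sketch follows; it is not the actual
summary script attached to the PR, and it assumes the usual vmstat -z column
layout of "NAME: SIZE, LIMIT, USED, FREE, REQ, FAIL, SLEEP".)

```shell
# Sum used/free bytes across the zio_* UMA zones (sketch only;
# assumes vmstat -z prints "NAME: SIZE, LIMIT, USED, FREE, ...").
vmstat -z | awk -F'[:,]' '
    /^zio_/ {
        size = $2 + 0; used = $4 + 0; free = $5 + 0
        zused += size * used    # bytes in allocated items
        zfree += size * free    # bytes cached but unused in the zone
    }
    END {
        printf "zused: %.3fGB, zfree: %.3fGB, ztotal: %.3fGB\n",
               zused / 2^30, zfree / 2^30, (zused + zfree) / 2^30
    }'
```

A zone whose free count is near zero relative to used is actually full of
live ARC data, which is how you can tell the Inact pages are not stranded
UMA space here.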
> From what I've seen, UMA needs something like the coloring, which could be
> used to trigger clear down over time, to prevent UMA zones sitting there
> eating large amounts of memory like they currently do.
>
> Regards
> Steve
> _______________________________________________
> freebsd-current@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-current
> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"