From: Allan Jude <freebsd@allanjude.com>
Date: Thu, 10 Oct 2013 12:46:12 -0400
To: freebsd-current@freebsd.org
Subject: Re: ZFS L2ARC - incorrect size and abnormal system load on r255173

On 2013-10-10 05:22, Vitalij Satanivskij wrote:
> The same situation happened yesterday again :(
>
> Here is what confuses me while I'm trying to understand where I'm wrong.
>
> First, some info.
>
> We have a zfs pool "POOL" and one more zfs dataset on it, "POOL/zfs".
>
> POOL     - has only primarycache enabled ("all")
> POOL/zfs - has both primarycache and secondarycache set to "all"
>
> POOL has compression=lz4
> POOL/zfs has none
>
> POOL     - holds around 9 TB of data
> POOL/zfs - holds 1 TB
>
> The secondary cache has this configuration:
>
>         cache
>           gpt/cache0    ONLINE       0     0     0
>           gpt/cache1    ONLINE       0     0     0
>           gpt/cache2    ONLINE       0     0     0
>
> gpt/cache0-2 are Intel SSDs (SSDSC2BW180A4, 180 GB each),
> so the full raw size of the L2ARC is 540 GB (really 489 GB).
>
> First question: will the data on the L2ARC be compressed or not?
>
> Second, in the stats we see:
>
>         L2 ARC Size: (Adaptive)                 2.08    TiB
>
> Earlier it was 1.1, 1.4, ...
>
> So a) how can the cache be bigger than the zfs dataset itself?
>    b) if it's not compressed (the answer to the first question), how can it be bigger than the real SSD size?
>
> One more comment: if the L2ARC size grows above the physical size, I see the following stats
>
>         kstat.zfs.misc.arcstats.l2_cksum_bad: 50907344
>         kstat.zfs.misc.arcstats.l2_io_error: 4547377
>
> and they keep growing.
>
> The system is r255173 with the patch from rr255173.
>
> Finally, maybe somebody has an idea of what's really happening...
>
>
> Vitalij Satanivskij wrote:
> VS>
> VS> One more question -
> VS>
> VS> we have two counters:
> VS>
> VS> kstat.zfs.misc.arcstats.l2_size: 1256609410560
> VS> kstat.zfs.misc.arcstats.l2_asize: 1149007667712
> VS>
> VS> Can anybody explain how to understand them? I.e. is l2_asize the real used space on the L2ARC and l2_size the uncompressed size,
> VS> or maybe something else?
> VS>
> VS> Vitalij Satanivskij wrote:
> VS> VS>
> VS> VS> The data on the pool has a compressratio of around 1.4.
> VS> VS>
> VS> VS> On different servers with the same data type and load, "L2 ARC Size: (Adaptive)" can differ,
> VS> VS> for example 1.04 TiB vs 1.45 TiB.
> VS> VS>
> VS> VS> But they all have the same problem - it grows over time.
> VS> VS>
> VS> VS> Even stranger for us:
> VS> VS>
> VS> VS> ARC: 80G Total, 4412M MFU, 5040M MRU, 76M Anon, 78G Header, 2195M Other
> VS> VS>
> VS> VS> The 78G header size is abnormal, and
> VS> VS>
> VS> VS> kstat.zfs.misc.arcstats.l2_cksum_bad: 210920592
> VS> VS> kstat.zfs.misc.arcstats.l2_io_error: 7362414
> VS> VS>
> VS> VS> These sysctls are growing every second.
> VS> VS>
> VS> VS> All parts of the server (including the hardware) are in a normal state.
> VS> VS>
> VS> VS> After a reboot there are no problems for some period, until the cache size grows to some limit.
> VS> VS>
> VS> VS>
> VS> VS> Mark Felder wrote:
> VS> VS> MF> On Mon, Oct 7, 2013, at 13:09, Dmitriy Makarov wrote:
> VS> VS> MF> >
> VS> VS> MF> > How can L2 ARC Size: (Adaptive) be 1.44 TiB (up) with a total physical size
> VS> VS> MF> > of the L2ARC devices of 490GB?
> VS> VS> MF> >
> VS> VS> MF>
> VS> VS> MF> http://svnweb.freebsd.org/base?view=revision&revision=251478
> VS> VS> MF>
> VS> VS> MF> L2ARC compression perhaps?

Some background on L2ARC compression for you:

http://wiki.illumos.org/display/illumos/L2ARC+Compression
http://svnweb.freebsd.org/base?view=revision&revision=251478

Are you sure that compression on pool/zfs is off? It would normally inherit from the parent, so double-check with:

zfs get compression pool/zfs

Is the data on pool/zfs related to the data on the root pool? If pool/zfs is a clone and the data is actually used in both places, the newer 'single copy ARC' feature may come into play:

https://www.illumos.org/issues/3145

-- 
Allan Jude
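
P.S. When you run the check above, the SOURCE column of the 'zfs get' output tells you whether the property was set locally or inherited from the parent. The output below is only an illustration with made-up values; yours will differ:

  # zfs get compression POOL POOL/zfs
  NAME      PROPERTY     VALUE  SOURCE
  POOL      compression  lz4    local
  POOL/zfs  compression  off    local

If the SOURCE for POOL/zfs reads "inherited from POOL" rather than "local", the dataset is in fact compressing with lz4 even though you expect it not to.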
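
As for your l2_size / l2_asize question: my understanding (from the L2ARC compression changes linked above) is that l2_size is the logical, uncompressed size of everything cached in the L2ARC, while l2_asize is the space actually allocated on the cache devices after compression, so l2_asize is the number to compare against your ~489 GB of raw SSD space. Plugging in the counters from your earlier mail:

  # sysctl kstat.zfs.misc.arcstats.l2_size kstat.zfs.misc.arcstats.l2_asize
  kstat.zfs.misc.arcstats.l2_size: 1256609410560
  kstat.zfs.misc.arcstats.l2_asize: 1149007667712

that works out to an effective L2ARC compression ratio of roughly 1256609410560 / 1149007667712 ~= 1.09.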