From: Vitalij Satanivskij <satan@ukr.net>
To: Steven Hartland
Cc: Vitalij Satanivskij, "Justin T. Gibbs", freebsd-current@freebsd.org, Borja Marcos, Dmitriy Makarov
Date: Mon, 21 Oct 2013 15:51:33 +0300
Subject: Re: ZFS secondarycache on SSD problem on r255173

Steven Hartland wrote:
SH> So previously you only started seeing L2 errors after there was
SH> a significant amount of data in the L2ARC? That's interesting in
SH> itself if that's the case.

Yes, something around 200+ GB.

SH> I wonder if it's the type of data, or something similar. Do you
SH> run compression on any of your volumes?
SH>
SH> zfs get compression

Testing currently runs on the following configuration: the first
dataset is the top-level pool, call it disk1, which has lz4
compression enabled and secondarycache=metadata; the second dataset is
disk1/data, with compression=off and secondarycache=all.

The error was seen on a configuration like that, and also on one where
disk1 was set to secondarycache=none (disk1/data still fully cached).

SH> Regards
SH> Steve
SH>
SH> ----- Original Message -----
SH> From: "Vitalij Satanivskij"
SH>
SH> > Just now I cannot say, since to trigger the problem we need at
SH> > least 200+ GB in the L2ARC, which usually grows over one
SH> > production day.
SH> >
SH> > But for some reason the server was rebooted this morning, so the
SH> > cache was flushed and it now holds only 100 GB.
SH> >
SH> > Need to wait some more time.
SH> >
SH> > At least for now, no errors on L2.
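
For reference, below is a minimal command-level sketch of the dataset
layout described above. The pool name disk1 and the dataset disk1/data
come from the thread; the cache device name (ada1) is an assumption,
since the actual devices were never named:

    # Sketch only: the device name ada1 is assumed, not from the thread.
    # Attach an SSD as an L2ARC (cache) device to the pool:
    zpool add disk1 cache ada1

    # Top-level dataset: lz4 compression, metadata-only L2ARC caching:
    zfs set compression=lz4 disk1
    zfs set secondarycache=metadata disk1

    # Child dataset: uncompressed, fully eligible for L2ARC:
    zfs set compression=off disk1/data
    zfs set secondarycache=all disk1/data

    # Verify the settings across the dataset tree:
    zfs get -r compression,secondarycache disk1

    # Watch L2ARC size and error counters on FreeBSD:
    sysctl kstat.zfs.misc.arcstats | grep -E 'l2_(size|cksum_bad|io_error)'

The l2_cksum_bad and l2_io_error counters are the ones that increment
when L2ARC reads fail, which appears to be the symptom discussed in
this thread, and l2_size lets you confirm when the cache has grown
past the ~200 GB point where the errors reportedly start.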