Date: Mon, 16 Feb 2015 00:52:17 +0000
From: Steven Hartland <killing@multiplay.co.uk>
To: freebsd-stable@freebsd.org
Subject: Re: ZFS L2arc 16.0E size
Message-ID: <54E13F41.7000703@multiplay.co.uk>
In-Reply-To: <54E1388C.3060602@searchy.net>
References: <54E1388C.3060602@searchy.net>
IIRC this was fixed by r273060; if you remove your cache device and then
add it back, I think you should be good. (A sketch of the commands
follows after the quoted message below.)

On 16/02/2015 00:23, Frank de Bot (lists) wrote:
> Hello,
>
> I have a FreeBSD 10.1 system with a raidz2 ZFS configuration with two
> SSDs for L2ARC. It is running '10.1-STABLE FreeBSD 10.1-STABLE #0
> r278805'. Currently I'm running tests before it can go to production,
> but I have the following issue: after a while the L2ARC devices report
> 16.0E of free space, and they start 'consuming' more than they can hold.
>
>                  capacity     operations    bandwidth
>               alloc   free   read  write   read  write
> cache             -      -      -      -      -      -
>   gpt/l2arc1   107G  16.0E      0      2      0  92.7K
>   gpt/l2arc2  68.3G  16.0E      0      1      0  60.8K
>
> It ran well for a while, with data being evicted from the cache so it
> could be filled with newer data (free space always stayed around
> 200-300 MB).
>
> I've read about similar issues that should have been fixed by various
> commits, but I'm running the latest stable 10.1 kernel right now. (One
> of the most recent similar issues is
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=197164 .)
> Another similar issue, reported at FreeNAS as
> https://bugs.freenas.org/issues/5347 , suggested it would be a hardware
> issue, but I have two servers which experience the same problem. One
> has a Crucial M500 drive and the other an M550. Both have a 64G
> partition for L2ARC.
>
> What is really going on here?
>
>
> Regards,
>
>
> Frank de Bot
> _______________________________________________
> freebsd-stable@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"
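
For anyone following along, here is a minimal sketch of the remove/re-add
cycle suggested above, assuming a hypothetical pool name "tank" and the
gpt/l2arc1 and gpt/l2arc2 labels from the iostat output (substitute your
own pool and device names):

    # Remove both cache devices; this only discards cached data,
    # pool data is untouched.
    zpool remove tank gpt/l2arc1 gpt/l2arc2

    # Add them back so the L2ARC space accounting starts fresh.
    zpool add tank cache gpt/l2arc1 gpt/l2arc2

    # Confirm the cache vdevs now report sane FREE values.
    zpool iostat -v tank

As a side note, 16.0E is 2^64 bytes (16 EiB), which is what an unsigned
64-bit free-space counter displays after underflowing below zero, hence
the devices apparently 'consuming' more than they can hold.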