Date:      Mon, 16 Feb 2015 09:58:22 +0100
From:      Frank de Bot <lists@searchy.net>
To:        freebsd-stable@freebsd.org
Subject:   Re: ZFS L2arc 16.0E size
Message-ID:  <54E1B12E.9070809@searchy.net>
In-Reply-To: <54E13F41.7000703@multiplay.co.uk>
References:  <54E1388C.3060602@searchy.net> <54E13F41.7000703@multiplay.co.uk>

I did remove and added devices again, with:

'zpool remove tank gpt/l2arc1 gpt/l2arc2'
and then
'zpool add tank cache gpt/l2arc1 gpt/l2arc2'

I left it running overnight and the same situation occurred.

cache                        -      -      -      -      -      -
  gpt/l2arc1              175G  16.0E     11    106  55.5K  9.75M
  gpt/l2arc2              167G  16.0E     14    107  68.8K  9.81M

For faster filling of the L2ARC I also had 2 sysctls set:

vfs.zfs.l2arc_write_max: 33554432
vfs.zfs.l2arc_write_boost: 33554432
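(For reference, 33554432 bytes is 32 MiB, i.e. 4x the FreeBSD default of 8 MiB for these tunables. A minimal sketch of how one would apply such values, assuming root on a FreeBSD host; the sysctl names are as shown above:)

```shell
# 33554432 bytes = 32 * 1024 * 1024, i.e. 32 MiB
echo $((32 * 1024 * 1024))   # prints 33554432

# Apply at runtime (requires root, FreeBSD sysctl syntax):
# sysctl vfs.zfs.l2arc_write_max=33554432
# sysctl vfs.zfs.l2arc_write_boost=33554432

# Persist across reboots by adding to /etc/sysctl.conf:
# vfs.zfs.l2arc_write_max=33554432
# vfs.zfs.l2arc_write_boost=33554432
```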

I do not plan to use this for production, only in testing.


Regards,

Frank de Bot
Steven Hartland wrote:
> IIRC this was fixed by r273060; if you remove your cache device and
> then add it back, I think you should be good.
> 
> On 16/02/2015 00:23, Frank de Bot (lists) wrote:
>> Hello,
>>
>> I have a FreeBSD 10.1 system with a raidz2 ZFS configuration with 2 SSDs
>> for L2ARC. It is running '10.1-STABLE FreeBSD 10.1-STABLE #0 r278805'.
>> Currently I'm running tests before it can go to production, but I have
>> the following issue: after a while the L2ARC devices indicate 16.0E free
>> space and start 'consuming' more than they can hold.
>>
>> cache                        -      -      -      -      -      -
>>    gpt/l2arc1              107G  16.0E      0      2      0  92.7K
>>    gpt/l2arc2             68.3G  16.0E      0      1      0  60.8K
>>
>> It ran well for a while, with data being removed from the cache so it
>> could be filled with newer data (free space was always around 200-300 MB).
>>
>> I've read about similar issues, which should be fixed in various
>> commits, but I'm running the latest stable 10.1 kernel right now. (One
>> of the most recent similar issues is:
>> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=197164 )
>> Another similar issue reported at FreeNAS,
>> https://bugs.freenas.org/issues/5347, suggested it would be a hardware
>> issue, but I have 2 servers which experience the same problem. One has a
>> Crucial M500 drive and the other an M550. Both have a 64G partition for
>> L2ARC.
>>
>> What is really going on here?
>>
>>
>> Regards,
>>
>>
>> Frank de Bot
>> _______________________________________________
>> freebsd-stable@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
>> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"
> 
