Date:      Sun, 05 Sep 2010 20:21:49 -0400
From:      jhell <jhell@DataIX.net>
To:        Steven Hartland <killing@multiplay.co.uk>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: zfs very poor performance compared to ufs due to lack of cache?
Message-ID:  <4C84341D.8060708@DataIX.net>
In-Reply-To: <330B5DB2215F43899ABAEC2CF71C2EE0@multiplay.co.uk>
References:  <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk> <AANLkTi=6bta-Obrh2ejLCHENEbhV5stbMsvfek3Ki4ba@mail.gmail.com> <4C825D65.3040004@DataIX.net> <7EA7AD058C0143B2BF2471CC121C1687@multiplay.co.uk> <1F64110BFBD5468B8B26879A9D8C94EF@multiplay.co.uk> <4C83A214.1080204@DataIX.net> <06B9D23F202D4DB88D69B7C4507986B7@multiplay.co.uk> <4C842905.2080602@DataIX.net> <330B5DB2215F43899ABAEC2CF71C2EE0@multiplay.co.uk>

On 09/05/2010 19:57, Steven Hartland wrote:
> 
>> On 09/05/2010 16:13, Steven Hartland wrote:
>>>> 3656:  uint64_t available_memory = ptoa((uintmax_t)cnt.v_free_count
>>>> 3657:      + cnt.v_cache_count);
>> 
>>> Earlier, at 3614, I have what I think you're after, which is:
>>> uint64_t available_memory = ptoa((uintmax_t)cnt.v_free_count);
>> 
>> Alright, change this to the above, recompile and re-run your tests.
>> Effectively, before this change (which apparently still needs to be
>> MFC'd or MFS'd) ZFS was not allowed to look at or use
>> cnt.v_cache_count. To sum it up: "available mem = cache + free".
>> 
>> This could possibly cause what you're seeing, but there might be
>> other relevant changes still TBD. I'll look into what else has
>> changed from RELEASE -> STABLE.
>> 
>> Also, do you check out your sources with svn(1) or csup(1)?
> 
> Based on Jeremy's comments I'm updating the box to stable. It's
> building now, but it will be morning before I can reboot to activate
> the changes, as I need to deactivate the stream instance and wait for
> all active connections to finish.
> 
> That said, the problem doesn't seem to be cache + free but rather
> cache + free + inactive, with inactive being the large chunk, so I'm
> not sure this change would make any difference?
> 

If I remember correctly that was already calculated into the mix, but I
could be wrong. I remember a discussion about this before: free was
effectively inactive + free, and for the ARC the cache queue was never
being accounted for, so not enough paging was happening, which would
result in a situation like the one you have now. MAYBE!
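
If you want to sanity-check those counters from userland while the box
is in that state, a quick sketch like the following should do it. It
reads the vm.stats.vm sysctls (this is not the kernel code, just the
same arithmetic) and prints what the ARC would consider available under
the old formula (free only) and the new one (free + cache), with
inactive shown alongside since that is where your memory seems to sit:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static uint64_t
pages(const char *oid)
{
	uint32_t v;
	size_t len = sizeof(v);

	/* The page-queue counters are u_int-sized sysctls. */
	if (sysctlbyname(oid, &v, &len, NULL, 0) == -1) {
		perror(oid);
		exit(1);
	}
	return (v);
}

int
main(void)
{
	uint64_t ps = (uint64_t)getpagesize();	/* userland ptoa() */
	uint64_t freep = pages("vm.stats.vm.v_free_count");
	uint64_t cache = pages("vm.stats.vm.v_cache_count");
	uint64_t inact = pages("vm.stats.vm.v_inactive_count");

	printf("free:          %6ju MB\n", (uintmax_t)(freep * ps >> 20));
	printf("cache:         %6ju MB\n", (uintmax_t)(cache * ps >> 20));
	printf("inactive:      %6ju MB\n", (uintmax_t)(inact * ps >> 20));
	printf("old available: %6ju MB (free)\n",
	    (uintmax_t)(freep * ps >> 20));
	printf("new available: %6ju MB (free + cache)\n",
	    (uintmax_t)((freep + cache) * ps >> 20));
	return (0);
}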

> How does ufs deal with this, does it take inactive into account?
> Seems a bit silly for inactive pages to prevent reuse for extended
> periods when the memory could be better used as cache.
> 

I agree, as commented above.

> As an experiment I compiled a little app which malloc'd a large block
> of memory, 1.3G in this case, and then freed it. This does indeed pull
> the memory out of inactive and back into the free pool, where zfs is
> then happy to re-expand the ARC and once again cache large files.
> Seems a bit extreme to have to do this though.

Maybe we should add that code to zfs(1) and call it with
gimme-my-mem-back: 1 for all of it, 2 for half of it, and 3 for panic ;)
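
Joking aside, for anyone who wants to reproduce the experiment, a
minimal sketch along the lines Steve described follows. The 1.3G figure
is his; the pages have to be touched so they are actually faulted in,
and free(3) of a block this large should end up as munmap(2) with our
malloc, handing the pages straight back to the free queue:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(void)
{
	size_t sz = (size_t)1300 << 20;	/* ~1.3G, as in Steve's test */
	char *p = malloc(sz);

	if (p == NULL) {
		perror("malloc");
		return (1);
	}
	memset(p, 1, sz);	/* touch every page to fault it in */
	free(p);		/* should munmap() and free the pages */
	printf("allocated, touched and freed %zu bytes\n", sz);
	return (0);
}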

> 
> Will see what happens with stable tomorrow though :)
> 

Good luck Steve, I look forward to hearing the result. If you are happy
with the result you get from stable/8, I would recommend patching to
v15, which is much more stable than the v14 code.

The specific patches you would want are (in order):
http://people.freebsd.org/~mm/patches/zfs/v15/stable-8-v15.patch
http://people.freebsd.org/~mm/patches/zfs/zfs_metaslab_v2.patch
http://people.freebsd.org/~mm/patches/zfs/zfs_abe_stat_rrwlock.patch
and then the needfree.patch I already posted.

The maxusers.patch is optional.


-- 

 jhell,v


