Date:      Wed, 11 Aug 2010 15:10:12 -0700
From:      Artem Belevich <fbsdlist@src.cx>
To:        freebsd-fs@freebsd.org
Subject:   Re: zfs arc - just take it all and be good to me
Message-ID:  <AANLkTikrY7q8uH9gF1-_Bg1SJ7YKM8y-ti_FGo0qUC=h@mail.gmail.com>
In-Reply-To: <20100811214302.GB44635@tolstoy.tols.org>
References:  <20100810214418.GA28288@tolstoy.tols.org> <20100811014919.GA52992@icarus.home.lan> <20100811192537.GA44635@tolstoy.tols.org> <AANLkTin-YvEzoN-ThwwDAqn2mWFMD4-7BnP8N95EqTk0@mail.gmail.com> <20100811214302.GB44635@tolstoy.tols.org>

On Wed, Aug 11, 2010 at 2:43 PM, Marco van Tol <marco@tols.org> wrote:
>> There's a hack floating around that attempts to force kernel into
>> freeing up memory from inactive/cache lists before draining ARC. It
>> does help a bit with this issue, but it's still a hack.
>
> That makes sense Artem, thanks. I think you mean the posts with the
> perl one-liner I used in my tests as well. (The version in those posts
> assigned a 1.5GB perl variable.)

Well, that perl one-liner is somewhat like using a guillotine to cure a
minor headache.

I was actually referring to the patch mentioned in this post:
http://old.nabble.com/Re%3A-Serious-zfs-slowdown-when-mixed-with-another-file-system-(ufs-msdosfs-etc.).-p29137467.html

The patch itself is here: http://pastebin.com/ZCkzkWcs

> I had seen the posts that mentioned that one and decided to remember the
> perl hack. :)
>
> What I understand from it:
> - In a UFS/ZFS mixed system
> - In a scenario where UFS "page cache" took (almost) all available memory
> - Run a perl one-liner to throw out the UFS active/inactive usage
> - Kind of hope you do enough relevant ZFS accesses that you get a good
>   new situation.
>
> So, if my worries can shift from fighting with kmem_size and arc_max to
> fighting with arc_min, that's a fight I like a lot better. =A0Especially
> on zfs-only systems, I have to admit.

Non-ZFS filesystems are not the only entities that cause memory to end
up on the inactive list; it may end up there for a number of other
reasons. For instance, on my ZFS-only box I currently have ~1G worth of
it. Filesystems just happen to be the entities that often stash a lot of
data on the inactive list, and that causes immediate issues for ZFS.
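To see how much is sitting there on a FreeBSD box, the stock VM counters can be read via sysctl; a minimal sketch (the fallbacks are only there so the snippet degrades gracefully on non-FreeBSD systems):

```shell
# Pages on the inactive list, converted to MB using the page size.
pages=$(sysctl -n vm.stats.vm.v_inactive_count 2>/dev/null || echo 0)
pagesz=$(sysctl -n hw.pagesize 2>/dev/null || echo 4096)
echo "inactive: $((pages * pagesz / 1024 / 1024)) MB"
```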

--Artem


