Date:      Sat, 22 Sep 2001 14:21:17 -0700 (PDT)
From:      Matt Dillon <dillon@earth.backplane.com>
To:        Poul-Henning Kamp <phk@critter.freebsd.dk>
Cc:        Seigo Tanimura <tanimura@r.dl.itc.u-tokyo.ac.jp>, bright@wintelcom.net, hackers@freebsd.org
Subject:   Re: More on the cache_purgeleafdirs() routine 
Message-ID:  <200109222121.f8MLLHe82202@earth.backplane.com>
References:   <88901.1001191415@critter>

:I agree, I've never been too fond of the purgeleafdirs() code myself
:for that reason and others.
:
:If we disregard the purgeleafdirs() workaround, the current cache code
:was built around the assumption that VM page reclaims would be enough
:to keep the vnode cache flushed and any vnode which could be potentially
:useful was kept around until it wasn't.
:
:Your patch changes this to the opposite: we kill vnodes as soon as
:possible, and pick them off the freelist next time we hit them,
:if they survive that long.
:
:I think that more or less neuters the vfs cache for anything but
:open files, which I think is not in general an optimal solution
:either.
:
:I still lean towards finding a dynamic limit on the number of vnodes
:and have the cache operation act accordingly as the least generally
:lousy algorithm we can employ.
:
:Either way, I think that we should not replace the current code with
:a new algorithm until we have some solid data for it, it is a complex
:interrelationship and some serious benchmarking is needed before we
:can know what to do.
:
:In particular we need to know:
:
:	What ratio of directories are reused as a function of
:	the number of children they have in the cache.
:
:	What ratio of files are reused as a function of
:	whether they are open or not.
:
:	What ratio of files are being reused as a function of
:	the number of pages they have in-core.
:
:-- 
:Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
:phk@FreeBSD.ORG         | TCP/IP since RFC 956

    Well, wait a sec... all I do is zap the namei cache for the vnode.  The
    check to see if the vnode's object still has resident pages is still in
    there so I don't quite understand how I turned things around.  In my
    tests it appears to cache vnodes as long as there are resident pages
    associated with them.
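    To be concrete about what I mean, here is a toy model of that reclaim
    decision (the struct names and field names below are illustrative only,
    not the real FreeBSD structures; the actual logic lives in the kernel's
    vnode management code): a vnode on the free list is only a recycling
    candidate if nothing references it and its VM object holds no resident
    pages.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins for the kernel structures. */
struct vm_object_model {
	int resident_page_count;	/* pages of the file still in core */
};

struct vnode_model {
	int usecount;				/* active references */
	struct vm_object_model *object;		/* backing object, may be NULL */
};

/* Return nonzero if this vnode is a candidate for recycling. */
int
can_recycle(const struct vnode_model *vp)
{
	if (vp->usecount > 0)
		return (0);		/* still in use */
	if (vp->object != NULL && vp->object->resident_page_count > 0)
		return (0);		/* cached pages keep the vnode alive */
	return (1);
}
```

    The point is that zapping the namei cache entry does not change this
    test: a vnode with resident pages still survives on the free list.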

    We could also throw a flag into the namei structure for a sub-directory
    count and only blow leaf nodes - which would be roughly equivalent to
    what the existing code does except without the overhead.  But I don't
    think it is necessary.
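    For the sake of discussion, the flag I'm describing could look something
    like this: a per-directory count of cached child directories, where only
    a zero-child ("leaf") directory is considered purgeable.  Again, this is
    only a sketch; no such field exists in the real struct namecache.

```c
#include <assert.h>

/* Illustrative model of a name cache entry for a directory. */
struct ncdir_model {
	int nc_childdirs;	/* cached sub-directories under this one */
};

/* Bookkeeping when a child directory entry is added or dropped. */
void
ncdir_child_added(struct ncdir_model *parent)
{
	parent->nc_childdirs++;
}

void
ncdir_child_removed(struct ncdir_model *parent)
{
	parent->nc_childdirs--;
}

/* Only leaf directories (no cached children) would be blown away. */
int
is_purgeable_leaf(const struct ncdir_model *ncp)
{
	return (ncp->nc_childdirs == 0);
}
```

    That would approximate what cache_purgeleafdirs() does today, but as a
    cheap counter check instead of a scan.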

    In regards to directory reuse... well, if you thought it was complex
    before, consider how complex it is now with the ufs dirhash code.  Even
    so, the core of directory caching is still the buffer cache, and if
    vfs.vmiodirenable is turned on it becomes the VM page cache, which is
    already fairly optimal.  This alone will prevent actively used directories
    from being blown out.
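    For anyone following along who wants to try it, the knob is an ordinary
    sysctl (assuming a FreeBSD system of this vintage where it is not already
    the default):

```shell
# Route directory blocks through the VM page cache instead of only
# the buffer cache:
sysctl vfs.vmiodirenable=1

# To make the setting persistent across reboots, add this line to
# /etc/sysctl.conf:
#   vfs.vmiodirenable=1
```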

    And, in fact, in the tests I've done so far the system still merrily
    caches tens of thousands of vnodes while doing a 'tar cf /dev/null /'.
    I'm setting up a postfix test too.  As far as I can tell, my changes
    have had no effect on namei cache efficiency other than eating less CPU.

						-Matt

