Date:      Sun, 23 Sep 2001 03:40:33 -0700 (PDT)
From:      Matt Dillon <dillon@earth.backplane.com>
To:        Poul-Henning Kamp <phk@critter.freebsd.dk>
Cc:        David Greenman <dg@root.com>, Seigo Tanimura <tanimura@r.dl.itc.u-tokyo.ac.jp>, bright@wintelcom.net, hackers@FreeBSD.ORG
Subject:   Re: Conclusions on... was Re: More on the cache_purgeleafdirs() routine 
Message-ID:  <200109231040.f8NAeXw86352@earth.backplane.com>
References:   <96469.1001237641@critter>


:>    VM Page Cache, and thus not be candidates for reuse anyway.  So my patch
:>    has a very similar effect but without the overhead.
:
:Back when I rewrote the VFS namecache back in 1997 I added that
:clause because I saw directories getting nuked in no time because
:there were no pages holding on to them (device nodes were even worse!)
:
:So refresh my memory here, does directories get pages cached in VM if
:you have vfs.vmiodirenable=0 ?  
:
:What about !UFS filesystems ?  Do they show a performance difference ?
:
:Also, don't forget that if the VM system gave preferential caching to
:directory pages, we wouldn't need the VFS-cache very much in the first
:place...
:
:-- 
:Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20

    Ah yes, vmiodirenable.  We should just turn it on by default now.  I've
    been waffling too long on that.  With it off, the buffer cache will
    remember at most vfs.maxmallocspace worth of directory data (read: not
    very much), and that data has no VMIO backing, which means the vnodes
    can be reclaimed immediately.  Ah!  Now I see why that clause was put
    in... but it's obsolete if vmiodirenable is turned on, and it doesn't
    scale well to large-memory machines if it is left in.
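
    For reference, here's a minimal userland sketch of flipping the knob
    (not part of the original mail; it just uses the standard
    sysctlbyname(3) interface, and setting it requires root, same as
    running sysctl vfs.vmiodirenable=1):

/*
 * Read vfs.vmiodirenable and, with -w, turn it on.
 * Illustrative only; error handling kept minimal.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>
#include <string.h>

int
main(int argc, char **argv)
{
	int val;
	size_t len = sizeof(val);

	if (sysctlbyname("vfs.vmiodirenable", &val, &len, NULL, 0) == -1) {
		perror("sysctlbyname");
		return (1);
	}
	printf("vfs.vmiodirenable = %d\n", val);

	if (argc > 1 && strcmp(argv[1], "-w") == 0) {
		int newval = 1;

		/* Equivalent to `sysctl vfs.vmiodirenable=1'. */
		if (sysctlbyname("vfs.vmiodirenable", NULL, NULL,
		    &newval, sizeof(newval)) == -1) {
			perror("sysctlbyname(set)");
			return (1);
		}
		printf("vfs.vmiodirenable set to 1\n");
	}
	return (0);
}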

    If we turn vmiodirenable on then directory blocks get cached by the
    VM system.  There is no preferential treatment of directory blocks,
    but there doesn't need to be; the VM system does a very good job of
    figuring out which blocks to keep and which to throw away.
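
    To illustrate the interaction (purely a hypothetical sketch, with
    made-up struct and function names rather than the real VFS
    interfaces): a reclaim pass that skips any vnode whose backing VM
    object still holds resident pages will naturally keep VMIO-backed
    directory vnodes alive, which is why the special-case clause stops
    being necessary once vmiodirenable is on:

/*
 * Hypothetical sketch of the reclaim heuristic being discussed; these
 * types and names are illustrative, not the actual kernel structures.
 */
#include <stdio.h>
#include <stddef.h>

struct sketch_vmobject {
	int	resident_page_count;	/* pages the VM system still caches */
};

struct sketch_vnode {
	struct sketch_vmobject	*v_object;	/* NULL when not VMIO backed */
	int			 v_usecount;	/* active references */
};

/* Nonzero if the vnode looks like a candidate for immediate reclamation. */
static int
sketch_can_reclaim(const struct sketch_vnode *vp)
{
	if (vp->v_usecount > 0)
		return (0);		/* still in use */
	if (vp->v_object != NULL && vp->v_object->resident_page_count > 0)
		return (0);		/* cached VM pages keep it warm */
	return (1);			/* nothing anchoring it; reclaim */
}

int
main(void)
{
	struct sketch_vmobject obj = { 8 };		/* 8 resident pages */
	struct sketch_vnode vmio_dir = { &obj, 0 };	/* vmiodirenable=1 case */
	struct sketch_vnode malloc_dir = { NULL, 0 };	/* vmiodirenable=0 case */

	printf("VMIO-backed dir reclaimable:   %d\n", sketch_can_reclaim(&vmio_dir));
	printf("malloc-backed dir reclaimable: %d\n", sketch_can_reclaim(&malloc_dir));
	return (0);
}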

    vfs.vmiodirenable=0 works well for small, lightly loaded systems but
    doesn't scale at all.  vfs.vmiodirenable=1 works well for systems of
    any size, even though there are considerable storage inefficiencies
    with small directories, because the VM page algorithms compensate (and
    scale).  Small systems with fewer directories don't see the vnode
    scaling problem because there are simply not enough directories to
    saturate the vnode/inode malloc areas.  Large systems with a greater
    number of directories blow up the vnode/inode malloc space.
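
    A back-of-the-envelope illustration of that storage inefficiency (the
    512-byte directory block, 4K page size, and directory count are just
    assumptions for the example):

/*
 * Rough arithmetic for the waste when small directories are cached in
 * whole VM pages under vmiodirenable=1.  Numbers are illustrative.
 */
#include <stdio.h>

int
main(void)
{
	const unsigned long page_size = 4096;	/* assumed PAGE_SIZE */
	const unsigned long dirblk = 512;	/* assumed small directory block */
	const unsigned long ndirs = 100000;	/* assumed cached directories */

	unsigned long used = ndirs * dirblk;
	unsigned long held = ndirs * page_size;

	printf("directory data cached: %lu KB\n", used / 1024);
	printf("pages held for it:     %lu KB (%.1f%% wasted)\n",
	    held / 1024, 100.0 * (held - used) / held);
	return (0);
}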

    I'll run some buildworld tests tomorrow.  Er, later today.

						-Matt

