Date:      Thu, 12 Apr 2001 10:57:19 -0700 (PDT)
From:      Matt Dillon <dillon@earth.backplane.com>
To:        Poul-Henning Kamp <phk@critter.freebsd.dk>
Cc:        Rik van Riel <riel@conectiva.com.br>, David Xu <bsddiy@21cn.com>, freebsd-hackers@FreeBSD.ORG
Subject:   Re: vm balance 
Message-ID:  <200104121757.f3CHvJd20639@earth.backplane.com>
References:   <57992.987097362@critter>

:You should also know that negative entries, since they have no
:objects to "hang from" and consequently would clog up the name-cache,
:are limited by the sysctl:
:	debug.ncnegfactor: 16
:which means that at most 1/16 of the name-cache entries can be
:negative entries.  You can monitor the number of negative entries
:with the sysctl
:	debug.numneg: 305
:
:the value of "16" was rather arbitrarily chosen and better defaults
:may exist.
:
:--
:Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
:phk@FreeBSD.ORG         | TCP/IP since RFC 956
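
    A quick way to see the cap those two knobs imply (a sketch, assuming
    the cap is simply numcache / ncnegfactor as described above, and
    mixing the debug.* and vfs.cache.* sysctl names seen in this thread):

sysctl -n vfs.cache.numcache debug.ncnegfactor |
awk 'NR == 1 { total = $1 } NR == 2 { printf "max negative entries: %d\n", total / $1 }'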

    Here's an example from a lightly loaded machine that's been up about
    two months (since I last upgraded its kernel):

earth:/home/dillon> sysctl -a | fgrep vfs.cache
vfs.cache.numneg: 1596
vfs.cache.numcache: 30557
vfs.cache.numcalls: 352196140
vfs.cache.dothits: 5598866
vfs.cache.dotdothits: 14055093
vfs.cache.numchecks: 435747692
vfs.cache.nummiss: 29963655
vfs.cache.nummisszap: 3042073
vfs.cache.numposzaps: 3308219
vfs.cache.numposhits: 274527703
vfs.cache.numnegzaps: 939714
vfs.cache.numneghits: 20760817
vfs.cache.numcwdcalls: 215565
vfs.cache.numcwdfail1: 29
vfs.cache.numcwdfail2: 1730
vfs.cache.numcwdfail3: 0
vfs.cache.numcwdfail4: 4
vfs.cache.numcwdfound: 213802
vfs.cache.numfullpathcalls: 0
vfs.cache.numfullpathfail1: 0
vfs.cache.numfullpathfail2: 0
vfs.cache.numfullpathfail3: 0
vfs.cache.numfullpathfail4: 0
vfs.cache.numfullpathfound: 0
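
    A rough overall hit rate can be computed from the counters above
    (a sketch, assuming hits are dothits + dotdothits + numposhits +
    numneghits out of numcalls; the exact counter semantics may differ):

sysctl -n vfs.cache.dothits vfs.cache.dotdothits \
    vfs.cache.numposhits vfs.cache.numneghits vfs.cache.numcalls |
awk 'NR < 5 { hits += $1 } NR == 5 { printf "hit rate: %.1f%%\n", 100 * hits / $1 }'

    For the numbers above that works out to about 89%.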

    Again, keep in mind that the namei cache is strictly throw-away, but
    entries can often be reconstituted later by the filesystem without
    I/O, thanks to the VM page cache (and/or the buffer cache, depending
    on vfs.vmiodirenable).  So, as with the buffer cache and inode cache,
    the number of entries can be limited without killing performance or
    scalability.
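
    To check or flip that knob at runtime (vfs.vmiodirenable controls
    whether directory data goes through the VM page cache; a sketch,
    and note the default varies by release):

sysctl vfs.vmiodirenable
sysctl vfs.vmiodirenable=1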

earth:/home/dillon> vmstat -m | egrep 'Type|vfsc'
...
        Type  InUse MemUse HighUse  Limit Requests Limit Limit Size(s)
     vfscache 30567  2386K   2489K 85444K 27552485    0     0  64,128,256,256K

    This particular machine has 30567 component entries in the namei
    cache at the moment, eating around 2.3 MB of kernel memory.  That
    works out to roughly 80 bytes per entry, which makes the namei cache
    quite memory-efficient.
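
    The per-entry arithmetic, straight from the MemUse and InUse columns
    above:

# MemUse 2386K over 30567 InUse entries:
echo "2386 * 1024 / 30567" | bc        # ~79 bytes per entry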

    Of course, there are many situations where the namei cache is
    ineffective, such as machines with insanely huge mail queues, older
    usenet news systems that used individual files for article storage,
    or a squid cache that stores each object in its own file.  The
    ultimate solution is to back the name cache with a filesystem that
    uses hashed or sorted/indexed directories; the lack of them is one
    of the few disadvantages remaining in UFS/FFS.  I've never found
    that to be a show stopper, though.
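
    Until the filesystem does that for you, the usual application-level
    workaround is to fan files out over hashed subdirectories so no
    single directory grows huge.  A minimal sketch (the spool path and
    file name here are hypothetical):

# Hash the first two hex digits of md5(name) into a subdirectory.
f=article-123456
d=$(echo "$f" | md5 | cut -c1-2)
mkdir -p "spool/$d" && touch "spool/$d/$f"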

						-Matt

