Date:      Tue, 6 May 1997 10:17:21 -0700 (MST)
From:      Terry Lambert <terry@lambert.org>
To:        phk@dk.tfs.com (Poul-Henning Kamp)
Cc:        terry@lambert.org, current@FreeBSD.ORG
Subject:   Re: DOC: NAME CACHE USAGE
Message-ID:  <199705061717.KAA18718@phaeton.artisoft.com>
In-Reply-To: <2940.862904379@critter> from "Poul-Henning Kamp" at May 6, 97 09:39:39 am

> This would waste storage for two reasons:
> 
>      1. The size of the namecache structure is 36 bytes, which means that 
>      we can tack another 28 bytes onto the malloc allocation for free.
>      This covers most of the names we ever see.

Well, I guess this assumes FreeBSD never moves to a SLAB allocator,
right?

The total of 64 bytes explains an increased limit, but not removal of
the limit.

In addition, going to:

struct  namecache {
        LIST_ENTRY(namecache) nc_hash;	/* hash chain */
        TAILQ_ENTRY(namecache) nc_lru;	/* LRU chain */
        struct  vnode *nc_dvp;		/* vnode of parent of name */
        u_long  nc_dvpid;		/* capability number of nc_dvp */
        struct  vnode *nc_vp;		/* vnode the name refers to */
        u_long  nc_vpid;		/* capability number of nc_vp */
        char    *nc_name;		/* segment name */
};

and only supporting up to 4 bytes for statistics (ignore the useless
serial number field -- you have the vnode pointer for that) fits you
in 32 bytes instead of 64.

This happens to cover *all* names, not just those in the normal size or
the normal size plus 28 bytes.  And it wins for SLAB allocation at a
later date by making the structures a uniform size.
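
For concreteness, here is a sketch of the size argument (userland,
compilable, not kernel code).  The "inline" layout and NCHNAMLEN_GUESS
below are my reconstruction of the fixed-buffer scheme, not the actual
header, and the exact byte counts depend on the 32-bit layout of the
day:

#include <stdio.h>
#include <sys/types.h>
#include <sys/queue.h>

struct  vnode;                          /* only pointers are used here */

#define NCHNAMLEN_GUESS 31              /* stand-in for the inline name limit */

struct  namecache_inline {              /* name copied into the entry */
        LIST_ENTRY(namecache_inline) nc_hash;
        TAILQ_ENTRY(namecache_inline) nc_lru;
        struct  vnode *nc_dvp;
        u_long  nc_dvpid;
        struct  vnode *nc_vp;
        u_long  nc_vpid;
        char    nc_nlen;                /* length of segment name */
        char    nc_name[NCHNAMLEN_GUESS + 1];
};

struct  namecache_ptr {                 /* name referenced, not copied */
        LIST_ENTRY(namecache_ptr) nc_hash;
        TAILQ_ENTRY(namecache_ptr) nc_lru;
        struct  vnode *nc_dvp;
        u_long  nc_dvpid;
        struct  vnode *nc_vp;
        u_long  nc_vpid;
        char    *nc_name;               /* points at the directory data */
};

int
main(void)
{
        /*
         * Every pointer-layout entry is the same size regardless of the
         * name length, which is what makes it friendly to a SLAB-style
         * uniform-object allocator.
         */
        printf("inline layout:  %zu bytes\n", sizeof(struct namecache_inline));
        printf("pointer layout: %zu bytes\n", sizeof(struct namecache_ptr));
        return (0);
}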


>      2. Copying the name away means that the directory vnodes doesn't have
>      to have any pages cached to be useful.  Think of a heavily used 
>      directory "/usr/local/lib/this/weird/path" With your suggestion we
>      would have to have at least a page for each of the 5 directories
>      for the namecache to work.

The locality of reference model which requires the vnode to be available
can likewise require its pages to be available.  Local to a directory
is local to a directory, and it matters not who allocates the pages
which must be present for a cache hit: whether they are allocated to
the cache block or they are allocated to the vnode referenced by the
cache block is irrelevant.

The one issue here is the potential for needing to take a page fault
to get the data referenced by the pointer, and that can be handled
safely in this case using encapsulation similar to that in uiomove.
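
A sketch of that encapsulation (the helper nc_name_fetch() is
hypothetical, named only for illustration; the real fault handling would
live inside it, in the spirit of the copy loop in uiomove()):

#include <sys/param.h>
#include <sys/systm.h>                  /* bcmp() */

/*
 * Hypothetical: copy ncp->nc_name out of the directory pages it points
 * into, faulting them in if necessary, or fail if they cannot be made
 * resident.
 */
extern int nc_name_fetch(struct namecache *ncp, char *buf, int len);

static int
nc_name_cmp(struct namecache *ncp, const char *name, int len)
{
        char buf[256];                  /* >= max segment name length */

        if (len >= (int)sizeof(buf))
                return (0);
        if (nc_name_fetch(ncp, buf, len) != 0)
                return (0);             /* treat as a miss; fall back to VOP_LOOKUP */
        return (bcmp(buf, name, len) == 0);
}

All access to nc_name funnels through one place, so the "page might not
be resident" case is handled once instead of at every comparison site.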

In addition, it localizes page changes to the references they affect.
You need only invalidate cache entries for the directory block which
changed, not all cache entries for the entire directory, when a file is
deleted or renamed, or a compaction occurs on create.  These are all
events which are flagged by virtue of their transiting the VFS
interface: the invalidation events can be asserted there (where they
currently are).  This drastically increases the utility of the cache
for a directory with activity occurring on it... i.e., a directory
likely to have cache entries in the first place.
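
As a sketch of what that buys (cache_purge_block(), cache_zap(), and the
per-directory entry list v_cache_entries / nc_dvp_entries are all
hypothetical names here, not existing interfaces):

/*
 * Invalidate only the entries whose names live in the directory block
 * that was modified, instead of purging every entry for the directory.
 */
void
cache_purge_block(struct vnode *dvp, caddr_t blk_start, caddr_t blk_end)
{
        struct namecache *ncp, *next;

        for (ncp = LIST_FIRST(&dvp->v_cache_entries); ncp != NULL; ncp = next) {
                next = LIST_NEXT(ncp, nc_dvp_entries);
                if (ncp->nc_name >= blk_start && ncp->nc_name < blk_end)
                        cache_zap(ncp);         /* free the single entry */
        }
}

The delete/rename/create-compaction events already cross the VFS
interface, so the (dvp, block) pair needed for such a call is in hand at
the point where the purge is asserted today.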


> >This is a document that I made prepatory to doing some work to move
> >the cache lookup and entry out of the FS specific code and into common
> >code, so that you wouldn't even have to make VOP calls if you got
> >cache hits.  This design offers the bonus of making the FS code itself
> >smaller for each FS, and by making the code more bullet proof by making
> >the usage entirely uniform across all FS's.
> 
> This is a nice idea, but not well thought out.
> 
> If you were to do that, you would rob the filesystems of the control
> they have today to expire cached data at this time, leaving filesystems
> like union, nfs and null no other option than disabling cacheing entirely.

? The cache data for the underlying FS must be entered in the consumer
FS; the vnodes are different, and there's no choice.  The cache entries
do not "bleed through".

The point is for each FS consumer to manage the cache, rather than each
FS managing the cache.  For the FS's you note, they qualify as FS
consumers.  There's no dichotomy there.
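
Concretely, the existing cache_enter() interface is already keyed by the
consumer's directory vnode, so a stacking layer gets its own entries
(the vnode names below are only illustrative):

        /* upper (null/union/nfs consumer) layer: its own dvp, its own entry */
        cache_enter(upper_dvp, upper_vp, cnp);

        /* lower (e.g. ufs) layer: a separate entry against a separate dvp */
        cache_enter(lower_dvp, lower_vp, cnp);

Moving the lookup/enter calls into common code changes where the calls
are made from, not which vnodes the entries are keyed by.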


> PS: there are a number of errors in your table.  For instance 
> cache_purge is called by vgone(), which covers many more cases 
> than you document.  Please don't post wrong information.

This document was compiled prior to the BSD4.4-Lite2 integration.  I
also explicitly stated:

| Note1:	This diagram refers to the ufs/cache interaction only.  Other
| 	File Systems which are cache users are not described (two are
| 	known to be erroneous in certain cases).
| 
| Note2:	The cache_purge() and cache_purgevfs() calls on mount/unmount
| 	operations are not described in detail.  In general, mount
| 	point vnodes that are covered are purged with cache_purge(),
| 	and file systems that are unmounted are purged with
| 	cache_purgevfs().
| 
| Note3:	All vnodes allocated or recovered from the freelist by the
| 	getnewvnode() are purged as part of initialization.

Which I believe, between Note1 and Note3, specifically states that this
case is not covered by the table.  Feel free to update and repost the
table, but make sure you do not introduce cycles in so doing.


					Regards,
					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.


