Date:      Mon, 5 Aug 1996 11:59:06 -0700 (MST)
From:      Terry Lambert <terry@lambert.org>
To:        michaelh@cet.co.jp (Michael Hancock)
Cc:        dfr@render.com, terry@lambert.org, jkh@time.cdrom.com, tony@fit.qut.edu.au, freebsd-current@freebsd.org
Subject:   Re: NFS Diskless Dispare...
Message-ID:  <199608051859.LAA11723@phaeton.artisoft.com>
In-Reply-To: <Pine.SV4.3.93.960805102421.16654A-100000@parkplace.cet.co.jp> from "Michael Hancock" at Aug 5, 96 10:33:45 am

> I think what he's saying is that when the vnodes are in the global pool,
> the chances of reusing a vnode that was previously used by a particular fs
> are lower than with a per fs vnode pool.

No, it's not.

> The problem with the per fs vnode pool is the management overhead.  When
> you need to start reusing vnodes, you need to search through all the
> different fs pools to find a vnode.
> 
> I don't know which is a better trade-off.

This isn't how per FS vnode pools should work.

When you want a vnode, you call the generic "getnewvnode()" from the
XXX_vget routine via VFS_VGET (sys/mount.h).

This function returns a vnode with an FS specific inode.

In reality, you never want a vnode without an FS specific inode,
since there is no way to access or write buffers hung off the critter
because of the way vclean works.
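
Roughly, the current path looks like this (a simplified sketch of a
ufs-style vget; the ihash lookup and most error handling are elided,
and the names are approximate):

int
xxx_vget(mp, ino, vpp)
	struct mount *mp;
	ino_t ino;
	struct vnode **vpp;
{
	struct vnode *vp;
	struct inode *ip;
	int error;

	/* get a naked vnode from the global pool*/
	if( ( error = getnewvnode( VT_UFS, mp, xxx_vnodeop_p, &vp)) != 0)
		return( error);

	/* separately allocate the FS specific inode and hang it off*/
	MALLOC( ip, struct inode *, sizeof( struct inode), M_FFSNODE,
	    M_WAITOK);
	vp->v_data = ip;
	ip->i_vnode = vp;

	/* ...read the on disk inode, enter it in the ihash, etc...*/

	*vpp = vp;
	return( 0);
}

Two allocations and two data structures for what is logically one
object.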


What I'm suggesting is that there needs to be both a VFS_VGET and
a VFS_VPUT (or VFS_VRELE).  With the additional per fs release
mechanism, each FS instance can allocate an inode pool at its
instantiation (rather than allocating inodes one at a time, the
current method, which is what makes inode allocation so slow...).
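
Concretely (VFS_VPUT is hypothetical; only vfs_vget exists in
sys/mount.h today), this is one more entry in the vfsops table and
a matching macro:

struct vfsops {
	...
	int	(*vfs_vget) __P((struct mount *mp, ino_t ino,
			struct vnode **vpp));
	/* proposed: return a vnode/inode pair to the per fs pool*/
	int	(*vfs_vput) __P((struct mount *mp, struct vnode *vp));
	...
};

#define VFS_VGET(MP, INO, VPP)	(*(MP)->mnt_op->vfs_vget)(MP, INO, VPP)
#define VFS_VPUT(MP, VP)	(*(MP)->mnt_op->vfs_vput)(MP, VP)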

Consider UFS: the in core inode struct consists of a bunch of in core
data elements (which should probably be in their own structure) and
a "struct dinode i_din" for the on disk inode.

You could modify this as:

struct inode {
	struct icinode	i_ic;		/* in core inode*/
	struct vnode	i_iv;		/* vnode for inode*/
	struct dinode	i_din;		/* on disk inode*/
};


Essentially, allocation of an inode would allocate a vnode.  There
would never be an inode without a vnode.


The VFS_VPUT would put the vnode into a pool maintained per fs
instance (the in core fs structure would need an additional
structure element to point to the maintenance data).

The FS itself would use generic maintenance routines shared by
all FS's, capable of taking a structure size to allow for i_ic
and i_din element size variations between FS types.  This would
keep all the common code in the common interface.
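
A minimal sketch of what that shared interface might look like (all
names here are hypothetical, not existing kernel interfaces):

/* per fs instance pool; the in core fs structure points at one*/
struct vnpool {
	struct vnode	*vp_free;	/* free vnode/inode pairs*/
	struct vnode	*vp_lru;	/* in use pairs, LRU ordered*/
	int		vp_objsize;	/* pair size; varies per FS type*/
};

/* preallocate 'n' pairs of 'objsize' bytes each at mount time*/
int	vnpool_init __P((struct vnpool *pool, int objsize, int n));

/* VFS_VGET backend: take a pair, reclaiming from the LRU if needed*/
struct vnode *vnpool_get __P((struct vnpool *pool));

/* VFS_VPUT backend: return a pair to the free list*/
void	vnpool_put __P((struct vnpool *pool, struct vnode *vp));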


The use of vget to associate naked vnodes with the FS's would go
away; a naked vnode is never useful in any case, since using vnode
buffer elements requires an FS context.


In effect, the ihash would become a vnhash and LRU for use in
reclaiming vnode/inode pairs.  This would be much more efficient
than the current dual allocation sequence.
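
With the pair always intact, the lookup collapses to a single hash
probe; something like this (hypothetical code, specialized to the
UFS struct inode above; vnhash_bucket, i_next, and the field names
are made up for illustration):

struct vnode *
vnhash_lookup( pool, ino)
	struct vnpool *pool;
	ino_t ino;
{
	struct inode *ip;

	/* one probe; the vnode and inode always travel as a pair*/
	for( ip = vnhash_bucket( pool, ino); ip != NULL; ip = ip->i_next)
		if( ip->i_number == ino)
			return( &ip->i_iv);	/* hit: pair intact*/

	/*
	 * Miss: the caller reclaims the LRU tail pair and reinitializes
	 * it for 'ino'; no vclean pass is needed, since the pair never
	 * leaves this FS instance.
	 */
	return( NULL);
}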


This would allow the discard of the vclean interface, and of the
lock used to ensure it operates (a lock which currently has to be
reimplemented, correctly, on a per FS basis in the XXX_LOCK and
XXX_UNLOCK FS specific routines).


The vnode locking could then be done in common code:


int
vn_lock( vp, flags, p)
	struct vnode *vp;
	int flags;
	struct proc *p;
{
	int st;

	/* take the actual (generic) lock*/
	if( ( st = ...) == SUCCESS) {
		if( ( st = VOP_LOCK( vp, flags, p)) != SUCCESS) {
			/* lock was vetoed by the FS, undo actual lock*/
			...
		}
	}
	return( st);
}


The point here is that the lock contention (if any) can be resolved
without ever hitting the FS itself in the failure case.



The generic case of the per FS lock is now:


int
XXX_lock(ap)
	struct vop_lock_args /* {
		struct vnode *a_vp;
		int a_flags; 
		struct proc *a_p;
	} */ *ap; 
{
	return( SUCCESS);
}


This is much harder to screw up when writing a new FS, and makes for much
smaller intermediate layers.


For NFS and unions, there isn't an i_din... but they also require data
hung off the vnode, so the same allocation rules apply.  It's a win
either way, and has the side benefit of unmunging the vn.
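
For instance (a hypothetical layout, mirroring the UFS struct above),
the nfsnode would embed its vnode the same way:

struct nfsnode {
	struct icnfsnode n_ic;		/* in core NFS node data*/
	struct vnode	 n_iv;		/* vnode for NFS node*/
	/* no n_din: the backing store lives on the server*/
};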


I believe that John Heidemann's thesis had this in mind where it
refers to using an RPC layer to employ remote file system layers
as intermediates in a local VFS stack.


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.


