Skip site navigation (1)Skip section navigation (2)
Date:      Thu, 29 Aug 1996 09:29:07 +0900 (JST)
From:      Michael Hancock <michaelh@cet.co.jp>
To:        Terry Lambert <terry@lambert.org>
Cc:        eric@ms.uky.edu, freebsd-fs@FreeBSD.ORG, current@FreeBSD.ORG
Subject:   Re: vclean (was The VIVA file system)
Message-ID:  <Pine.SV4.3.93.960829085831.4475G-100000@parkplace.cet.co.jp>
In-Reply-To: <199608281650.JAA26928@phaeton.artisoft.com>

next in thread | previous in thread | raw e-mail | index | archive | help

On Wed, 28 Aug 1996, Terry Lambert wrote:

> > > This was the point I was missing.  What is disassociating the inode and
> > > when is it happening?
> > 
> > Yikes!  I took a look below, but I didn't expect to see vgone() calls in
> > ufs_inactive(). 
> > 
> >         if (vp->v_usecount == 0 && ip->i_mode == 0)
> >                 vgone(vp);
> > 
> > I need to figure out what ip->i_mode == 0 means.
> 
> The file type is a non-zero value in the high bits of the mode word;
> it means that the inode does not refer to real data any more.
> 
> The vgone call is just part of the subsystem I think should be replaced
> wholesale; I'd like to see a per FS vrele() (back to locally managed
> pools) replace most of those calls.  The vgone() calls vgone1() calls
> vclean, and we're back in my hate-zone.

My interpretation of the vnode global pool design was that
vgone...->vclean wouldn't be called very often.  It would only be called
by getnewvnode() when free vnodes were not available and for cases when
the vnode is deliberately revoked.

Inactive() would mark both the vnode/inode inactive, but the data would be
left intact even when usecount went to zero so that all the important data
could be reactivated quickly.

It's not working this way and it doesn't look trivial to get it work this
way.

Regarding local per fs pools you still need some kind of global memory
management policy.  It seems less complicated to manage a global pool,
than local per fs pools with opaque VOP calls. 

Say you've got FFS, LFS, and NFS systems mounted and fs usage patterns
migrate between the fs's.  You've got limited memory resources.  How do
you determine which local pool to recover vnodes from?  It'd be
inefficient to leave the pools wired until the fs was unmounted. Complex
LRU-like policies across multiple local per fs vnode pools also sound
pretty complicated to me. 

We also need to preserve the vnode revoking semantics for situations like
revoking the session terminals from the children of sesssion leaders.

Regards,


Mike Hancock






Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?Pine.SV4.3.93.960829085831.4475G-100000>