From owner-freebsd-fs Thu Aug 29 09:33:37 1996
From: Terry Lambert
Message-Id: <199608291616.JAA28774@phaeton.artisoft.com>
Subject: Re: vclean (was The VIVA file system)
To: michaelh@cet.co.jp (Michael Hancock)
Date: Thu, 29 Aug 1996 09:16:20 -0700 (MST)
Cc: terry@lambert.org, eric@ms.uky.edu, freebsd-fs@FreeBSD.ORG, current@FreeBSD.ORG
In-Reply-To: from "Michael Hancock" at Aug 29, 96 09:29:07 am

> My interpretation of the vnode global pool design was that
> vgone...->vclean wouldn't be called very often.  It would only be called
> by getnewvnode() when free vnodes were not available, and for cases when
> the vnode is deliberately revoked.
>
> Inactive() would mark both the vnode and inode inactive, but the data
> would be left intact even when the use count went to zero, so that all
> the important data could be reactivated quickly.
>
> It's not working this way, and it doesn't look trivial to get it to work
> this way.

That's right.  This is a natural consequence of moving the cache locality
from its separate location into its now unified location.

Because you can not look up a buffer by device (and the device association
would never be destroyed for a valid buffer in core, yet unreclaimed), the
buffers on the vnodes in the pool lack the locality of the pre VM/cache
unification code.

The unification was such a tremendous win that this was either hidden or,
more likely, discounted.  I'd like to see it revisited.

> Regarding local per fs pools, you still need some kind of global memory
> management policy.  It seems less complicated to manage a global pool
> than local per fs pools with opaque VOP calls.

The amount of memory is relatively small, and we are already running a
modified zone allocator in any case.  I don't see any conflict in the
definition of additional zones.

How do I reclaim a packet reassembly buffer when I need another vnode?
Right now, I don't.  The conflict resolution is intra-pool.  Inter-pool
conflicts are resolved either by static resource limits, or by soft limits
and/or watermarking.

> Say you've got FFS, LFS, and NFS systems mounted and fs usage patterns
> migrate between the fs's.  You've got limited memory resources.  How do
> you determine which local pool to recover vnodes from?  It'd be
> inefficient to leave the pools wired until the fs was unmounted.  Complex
> LRU-like policies across multiple local per fs vnode pools also sound
> pretty complicated to me.

You keep a bias statistic, maintained on a per pool basis, for the
reclamation, and the reclaimer operates at a pool granularity, if in fact
you allow such reclamation to occur at all (see my preceding paragraph for
the preferred approaches of a knowledgeable reclaimer).
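As a rough illustration of what that might look like (a userland sketch
only, not existing kernel code; the vnpool structure and the vnpool_decay,
vnpool_pick, and vnpool_reclaim names are made up for the example), the
reclaimer's victim is a pool, not an individual vnode, so the pools that
are actually being used keep their locality:

/*
 * Hypothetical sketch: each per-fs vnode pool carries a decayed bias
 * statistic, and reclamation is done in batches at pool granularity.
 * All names here are invented for illustration.
 */
#include <stddef.h>
#include <stdio.h>

#define NPOOLS 3                 /* e.g. FFS, LFS, NFS */

struct vnpool {
        const char *vp_name;     /* which fs this pool backs */
        unsigned    vp_free;     /* unreferenced vnodes sitting in the pool */
        unsigned    vp_hits;     /* reactivations since the last decay pass */
        unsigned    vp_bias;     /* decayed hit count: higher = keep longer */
};

/* Decay the hit counts periodically so the bias tracks recent use. */
static void
vnpool_decay(struct vnpool *pools, int npools)
{
        for (int i = 0; i < npools; i++) {
                pools[i].vp_bias = (pools[i].vp_bias / 2) + pools[i].vp_hits;
                pools[i].vp_hits = 0;
        }
}

/* Pick the pool with the lowest bias that still has free vnodes. */
static struct vnpool *
vnpool_pick(struct vnpool *pools, int npools)
{
        struct vnpool *victim = NULL;

        for (int i = 0; i < npools; i++) {
                if (pools[i].vp_free == 0)
                        continue;
                if (victim == NULL || pools[i].vp_bias < victim->vp_bias)
                        victim = &pools[i];
        }
        return (victim);
}

/* Reclaim a whole batch from one pool, leaving the others untouched. */
static unsigned
vnpool_reclaim(struct vnpool *pools, int npools, unsigned want)
{
        struct vnpool *victim = vnpool_pick(pools, npools);
        unsigned got;

        if (victim == NULL)
                return (0);
        got = victim->vp_free < want ? victim->vp_free : want;
        victim->vp_free -= got;
        printf("reclaimed %u vnodes from %s\n", got, victim->vp_name);
        return (got);
}

int
main(void)
{
        struct vnpool pools[NPOOLS] = {
                { "ffs", 128, 40, 0 },
                { "lfs",  64,  2, 0 },
                { "nfs",  96, 10, 0 },
        };

        vnpool_decay(pools, NPOOLS);
        vnpool_reclaim(pools, NPOOLS, 32);   /* comes out of the lfs pool */
        return (0);
}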
> We also need to preserve the vnode revoking semantics for situations
> like revoking the session terminals from the children of session
> leaders.

This is a tty subsystem function, and I do not agree with the current
revocation semantics, mostly because I think tty devices should be
instanced per controlling tty reference.  This would allow the reference
to be invalidated via flagging rather than by using a separate opv table.

If you look for "struct fileops", you will see another bogosity that makes
this problematic.  Resolve the struct fileops issue, and the carrying
around of all that dead weight in the fd structs, and you have resolved
the deadfs problem at the same time.

The specfs stuff is going to go away with devfs, leaving UNIX domain
sockets, pipes (which should be implemented as an opaque FS reference, not
exported as a mount point mapping to user space), and the VFS fileops
(which should be the only ones, and therefore implicit, anyway).

It's really not as complicated as you want to make it.  8-).


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.