From owner-freebsd-current Sat Aug 31 03:00:15 1996
Return-Path: owner-current
Received: (from root@localhost) by freefall.freebsd.org (8.7.5/8.7.3)
	id DAA02550 for current-outgoing; Sat, 31 Aug 1996 03:00:15 -0700 (PDT)
Received: from parkplace.cet.co.jp (parkplace.cet.co.jp [202.32.64.1])
	by freefall.freebsd.org (8.7.5/8.7.3) with ESMTP
	id DAA02541; Sat, 31 Aug 1996 03:00:10 -0700 (PDT)
Received: from localhost (michaelh@localhost)
	by parkplace.cet.co.jp (8.7.5/CET-v2.1) with SMTP
	id JAA10724; Sat, 31 Aug 1996 09:59:58 GMT
Date: Sat, 31 Aug 1996 18:59:57 +0900 (JST)
From: Michael Hancock
To: Terry Lambert
cc: eric@ms.uky.edu, freebsd-fs@freebsd.org, current@freebsd.org
Subject: Re: vclean (was The VIVA file system)
In-Reply-To: <199608291616.JAA28774@phaeton.artisoft.com>
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-current@freebsd.org
X-Loop: FreeBSD.org
Precedence: bulk

On Thu, 29 Aug 1996, Terry Lambert wrote:

> The amount of memory is relatively small, and we are already running
> a modified zone allocator in any case.  I don't see any conflict in
> the definition of additional zones.  How do I reclaim packet
> reassembly buffer when I need another vnode?  Right now, I don't.
> The conflict resolution is intra-pool.  Inter-pool conflicts are
> resolved either by static resource limits, or soft limits and/or
> watermarking.

I think watermarking is a good model to program to.  From the point of
view of users, you shouldn't have to see the watermarks unless you run
into a problem where you need to look at them.

I'd like to see some kind of automatic setting of low/high watermarks
based on the resources of the machine, overridable by the admin.  (A
rough sketch of what I mean is in the P.S. below.)

> > Say you've got FFS, LFS, and NFS file systems mounted and fs usage
> > patterns migrate between them.  You've got limited memory
> > resources.  How do you determine which local pool to recover
> > vnodes from?  It'd be inefficient to leave the pools wired until
> > the fs was unmounted.  Complex LRU-like policies across multiple
> > local per-fs vnode pools also sound pretty complicated to me.
>
> You keep a bias statistic, maintained on a per-pool basis, for the
> reclamation, and the reclaimer operates at a pool granularity, if in
> fact you allow such reclamation to occur (see my preceding paragraph
> for preferred approaches to a knowledgeable reclaimer).

I'd like to revisit this later.  I'm not sure I'd want to see the
ability to reclaim go away.  (A toy model of the bias idea is in the
P.P.S. below.)

> > We also need to preserve the vnode-revoking semantics for
> > situations like revoking the session terminals from the children
> > of session leaders.
>
> This is a tty subsystem function, and I do not agree with the
> current revocation semantics, mostly because I think tty devices
> should be instanced per controlling-tty reference.  This would allow
> the reference to be invalidated via flagging rather than using a
> separate opv table.
>
> If you look for "struct fileops", you will see another bogosity that
> makes this problematic.  Resolve struct fileops, and the carrying
> around of all that dead weight in the fd structs, and you have
> resolved the deadfs problem at the same time.  The specfs stuff is
> going to go away with devfs, leaving UNIX domain sockets, pipes
> (which should be implemented as an opaque FS reference, not exported
> as a mount point mapping to user space), and the VFS fileops (which
> should be the only ones, and therefore implicit, anyway).
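As I understand it, this op-vector swap is the mechanism vclean()
already uses: when the underlying fs goes away, it hangs the deadfs
ops on the vnode so stale references fail cleanly instead of calling
into an unmounted file system.  Here's a compilable toy model of that
swap; the names are made up for illustration, and this is not the
real 4.4BSD code (real vnodes carry far more state):

/*
 * Toy model of "hanging the deadfs ops on the vnode": each vnode
 * carries an op vector pointer; vclean() swaps it for one whose
 * entries all fail, so stale references get an error instead of
 * calling into a departed file system.  Illustrative names only.
 */
#include <stdio.h>
#include <errno.h>

struct vnode;
struct vnodeops {
	int (*vop_read)(struct vnode *);
};

struct vnode {
	struct vnodeops *v_op;
	void *v_data;			/* fs-private state */
};

static int ffs_read(struct vnode *vp)  { (void)vp; return 0; }
static int dead_read(struct vnode *vp) { (void)vp; return EIO; }

static struct vnodeops ffs_ops  = { ffs_read };
static struct vnodeops dead_ops = { dead_read };	/* "deadfs" */

static void
vclean_toy(struct vnode *vp)
{
	vp->v_op = &dead_ops;	/* the op vector swap */
	vp->v_data = NULL;	/* the underlying fs is gone */
}

int
main(void)
{
	struct vnode vn = { &ffs_ops, NULL };

	printf("before vclean: read -> %d\n", vn.v_op->vop_read(&vn));
	vclean_toy(&vn);
	printf("after  vclean: read -> %d\n", vn.v_op->vop_read(&vn));
	return 0;
}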
Hanging the deadfs ops on the vnode seemed like a cool idea to me,
even though it looks like a little extra baggage.

I guess we can revisit all of this after the Lite2 merge.

Regards,

Mike Hancock
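P.S.  Here is roughly what I had in mind for automatic low/high
watermarks, as a small compilable toy.  The names, the 1%/4% ratios,
and the sysctl-style override are all made up for illustration; this
isn't code from any tree:

/*
 * Sketch of auto-sized per-pool watermarks, derived from machine
 * memory at boot and overridable by the admin.  Illustrative only.
 */
#include <stdio.h>

struct pool_limits {
	unsigned long low_wat;	/* start background reclaim below this */
	unsigned long high_wat;	/* hard ceiling; force reclaim above */
	int admin_set;		/* nonzero once the admin overrides */
};

/* Default the watermarks from physical memory unless overridden. */
static void
pool_autosize(struct pool_limits *pl, unsigned long physpages)
{
	if (pl->admin_set)
		return;			/* the admin knows best */
	pl->low_wat  = physpages / 100;	/* ~1% of memory */
	pl->high_wat = physpages / 25;	/* ~4% of memory */
}

/* Admin override, e.g. from a sysctl-style handler. */
static void
pool_set_watermarks(struct pool_limits *pl, unsigned long lo,
    unsigned long hi)
{
	pl->low_wat = lo;
	pl->high_wat = hi;
	pl->admin_set = 1;
}

int
main(void)
{
	struct pool_limits vnode_pool = { 0, 0, 0 };

	pool_autosize(&vnode_pool, 8192);	/* 8192 pages = 32MB */
	printf("auto:  low=%lu high=%lu\n",
	    vnode_pool.low_wat, vnode_pool.high_wat);

	pool_set_watermarks(&vnode_pool, 64, 512);
	pool_autosize(&vnode_pool, 8192);	/* no effect now */
	printf("admin: low=%lu high=%lu\n",
	    vnode_pool.low_wat, vnode_pool.high_wat);
	return 0;
}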
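P.P.S.  And my reading of the per-pool bias statistic, again as a
made-up toy rather than anything from a real tree: the reclaimer
operates at pool granularity and picks the pool whose bias-weighted
recent use is lowest.

/*
 * Toy model of pool-granularity reclaim driven by a per-pool bias
 * statistic.  All names and weights are illustrative.
 */
#include <stdio.h>

struct vnpool {
	const char *name;
	unsigned long nvnodes;	/* vnodes currently in the pool */
	unsigned long recent;	/* decayed recent-use count */
	unsigned int bias;	/* higher = more reluctant to reclaim */
};

/* Pick the non-empty pool with the smallest bias-weighted use. */
static struct vnpool *
reclaim_pick(struct vnpool *pools, int npools)
{
	struct vnpool *victim = NULL;
	unsigned long best = ~0UL;
	int i;

	for (i = 0; i < npools; i++) {
		unsigned long score = pools[i].recent * pools[i].bias;
		if (pools[i].nvnodes > 0 && score < best) {
			best = score;
			victim = &pools[i];
		}
	}
	return victim;
}

int
main(void)
{
	struct vnpool pools[] = {
		{ "ffs", 400, 90, 2 },
		{ "lfs", 150, 10, 2 },	/* cold: likely victim */
		{ "nfs", 250, 40, 3 },
	};
	struct vnpool *v = reclaim_pick(pools, 3);

	printf("reclaim from: %s\n", v ? v->name : "(none)");
	return 0;
}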