From owner-freebsd-current Mon Aug 5 12:13:08 1996
Return-Path: owner-current
Received: (from root@localhost) by freefall.freebsd.org (8.7.5/8.7.3) id MAA12262 for current-outgoing; Mon, 5 Aug 1996 12:13:08 -0700 (PDT)
Received: from phaeton.artisoft.com (phaeton.Artisoft.COM [198.17.250.211]) by freefall.freebsd.org (8.7.5/8.7.3) with SMTP id MAA12256 for ; Mon, 5 Aug 1996 12:13:06 -0700 (PDT)
Received: (from terry@localhost) by phaeton.artisoft.com (8.6.11/8.6.9) id MAA11762; Mon, 5 Aug 1996 12:08:27 -0700
From: Terry Lambert
Message-Id: <199608051908.MAA11762@phaeton.artisoft.com>
Subject: Re: NFS Diskless Dispare...
To: michaelh@cet.co.jp (Michael Hancock)
Date: Mon, 5 Aug 1996 12:08:27 -0700 (MST)
Cc: dfr@render.com, terry@lambert.org, jkh@time.cdrom.com, tony@fit.qut.edu.au, freebsd-current@freebsd.org
In-Reply-To: from "Michael Hancock" at Aug 5, 96 07:42:19 pm
X-Mailer: ELM [version 2.4 PL24]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: owner-current@freebsd.org
X-Loop: FreeBSD.org
Precedence: bulk

> > But surely, even if you find a vnode from the same filesystem, you still
> > need to 'clean' it, i.e. invalidate buffers and re-associate with a
> > different inode+dev/filehandle/whatever.  I don't see what the gain for
> > per-fs vnode pools is.  I expect Terry will explain it to me now :-)
>
> I think the gain is the potential processor data caching of the vnode
> object itself.  Having a global vnode pool breaks this locality of
> reference.  But then again, since the caching effects are very temporal
> and the Intel cache is small, it's hard to say how much of an effect the
> per-fs pools would have.

It's not the Intel cache being saved, it's the buffer cache, in the
"valid buffers are in core but not reclaimable without a disk I/O
because the vnode has been disassociated from the underlying inode"
case.

The locality of reference issue is a good point.  That is why I was
suggesting that a directory name cache entry be treated as a reference
instance on the counting semaphore.  Since the graphs are complexly
connected (a sparse linear graph for the buffer cache vs. a top-fill
hierarchical traversal graph for the directory structure), there is a
need for a second cache; the ihash is bad for this because it is FS
specific, and because it is a linear locality model, just like the
buffer cache; the only difference is that the hash deals with the
sparseness.

I think this addresses your temporal argument as applied to the buffer
cache (instead of the processor cache, which I was not considering).
You want to avoid the reassociation penalty.  Since the buffers are
hung off the vnode, the ihash savings are minimal following a hit
(which will only occur after a dissociation).  In both cases, the good
data in core has had its references invalidated, and can't be recovered
without a disk I/O (hence my remarks about the ihash about four weeks
ago).


                                        Terry Lambert
                                        terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.
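
The reassociation penalty discussed above can be sketched in a few lines
of C.  This is a minimal, hypothetical illustration, not the actual
4.4BSD/FreeBSD kernel code: the struct members, attach_buf(), and
dissociate() below are invented for the example, standing in for the
real buffer lists hung off a vnode and for the effect of recycling a
vnode out of a global pool.

/*
 * Hypothetical sketch of the reassociation penalty: buffers are hung
 * off the vnode, so when a vnode is stolen from the global pool and
 * dissociated from its inode, perfectly good in-core data becomes
 * unreachable and the next access to that file must go back to disk.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct buf {                    /* one cached block of file data */
    struct buf *b_next;
    long        b_lblkno;       /* logical block number */
    char       *b_data;         /* block contents, still valid in core */
};

struct vnode {                  /* cut-down, illustrative vnode */
    void       *v_inode;        /* per-fs inode while associated, else NULL */
    struct buf *v_bufs;         /* buffers hung off this vnode */
};

/* Hang a (fake) cached block off the vnode. */
static void attach_buf(struct vnode *vp, long lblkno, const char *data)
{
    struct buf *bp = malloc(sizeof(*bp));
    bp->b_lblkno = lblkno;
    bp->b_data = strdup(data);
    bp->b_next = vp->v_bufs;
    vp->v_bufs = bp;
}

/*
 * Dissociate the vnode from its inode so it can be reused for another
 * file: every buffer hung off it is discarded, even though the data in
 * core is still valid.
 */
static void dissociate(struct vnode *vp)
{
    struct buf *bp, *next;

    for (bp = vp->v_bufs; bp != NULL; bp = next) {
        next = bp->b_next;
        free(bp->b_data);
        free(bp);
    }
    vp->v_bufs = NULL;
    vp->v_inode = NULL;
}

int main(void)
{
    int some_inode = 42;
    struct vnode vn = { &some_inode, NULL };

    attach_buf(&vn, 0, "block 0 contents");
    printf("before recycle: %d cached buffer(s)\n", vn.v_bufs ? 1 : 0);

    dissociate(&vn);            /* vnode recycled for some other file */

    /* A later lookup of the original file finds no buffers: disk I/O. */
    printf("after recycle:  %d cached buffer(s) -- must read from disk\n",
           vn.v_bufs ? 1 : 0);
    return 0;
}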