From owner-freebsd-current Thu Oct 23 13:12:20 1997
Return-Path:
Received: (from root@localhost)
	by hub.freebsd.org (8.8.7/8.8.7) id NAA24467
	for current-outgoing; Thu, 23 Oct 1997 13:12:20 -0700 (PDT)
	(envelope-from owner-freebsd-current)
Received: from pat.idi.ntnu.no (0@pat.idi.ntnu.no [129.241.103.5])
	by hub.freebsd.org (8.8.7/8.8.7) with ESMTP id NAA24461
	for ; Thu, 23 Oct 1997 13:12:16 -0700 (PDT)
	(envelope-from Tor.Egge@idi.ntnu.no)
Received: from idt.unit.no (tegge@presis.idi.ntnu.no [129.241.111.173])
	by pat.idi.ntnu.no (8.8.6/8.8.6) with ESMTP id WAA00277;
	Thu, 23 Oct 1997 22:12:07 +0200 (MET DST)
Message-Id: <199710232012.WAA00277@pat.idi.ntnu.no>
To: Tor.Egge@idi.ntnu.no
Cc: roberto@keltia.freenix.fr, current@FreeBSD.ORG
Subject: Re: nullfs & current UPDATE!
In-Reply-To: Your message of "Wed, 22 Oct 1997 18:15:13 +0200"
References: <199710221615.SAA17560@pat.idi.ntnu.no>
X-Mailer: Mew version 1.70 on Emacs 19.34.1
Mime-Version: 1.0
Content-Type: Text/Plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Date: Thu, 23 Oct 1997 20:12:07 +0000
From: Tor Egge
Sender: owner-freebsd-current@FreeBSD.ORG
X-Loop: FreeBSD.org
Precedence: bulk

I wrote:

> An unconditional call to vrecycle (with ap->a_vp as first argument) at
> the end of null_inactive (after VOP_UNLOCK) might be an alternate
> solution with fewer side effects.  That should cause an immediate
> vrele of the underlying vnode when VOP_INACTIVE is called, if the
> usecount reaches zero.

I'm currently using the following patch, which seems to work.
Index: sys/miscfs/nullfs/null_vnops.c
===================================================================
RCS file: /home/ncvs/src/sys/miscfs/nullfs/null_vnops.c,v
retrieving revision 1.25
diff -c -r1.25 null_vnops.c
*** null_vnops.c	1997/10/21 21:01:34	1.25
--- null_vnops.c	1997/10/23 18:58:30
***************
*** 184,189 ****
--- 184,191 ----
  #include
  #include
  #include
+ #include
+ #include
  
  static int null_bug_bypass = 0;	/* for debugging: enables bypass printf'ing */
  SYSCTL_INT(_debug, OID_AUTO, nullfs_bug_bypass, CTLFLAG_RW,
***************
*** 200,205 ****
--- 202,209 ----
  static int	null_setattr __P((struct vop_setattr_args *ap));
  static int	null_strategy __P((struct vop_strategy_args *ap));
  static int	null_unlock __P((struct vop_unlock_args *ap));
+ static int	null_rename __P((struct vop_rename_args *ap));
+ static int	null_remove __P((struct vop_remove_args *ap));
  
  /*
   * This is the 10-Apr-92 bypass routine.
***************
*** 533,548 ****
  	struct proc *a_p;
  } */ *ap;
  {
- 	struct vnode *vp = ap->a_vp;
- 	struct null_node *xp = VTONULL(vp);
- 	struct vnode *lowervp = xp->null_lowervp;
  	/*
  	 * Do nothing (and _don't_ bypass).
  	 * Wait to vrele lowervp until reclaim,
  	 * so that until then our null_node is in the
  	 * cache and reusable.
- 	 * We still have to tell the lower layer the vnode
- 	 * is now inactive though.
  	 *
  	 * NEEDSWORK: Someday, consider inactive'ing
  	 * the lowervp and then trying to reactivate it
--- 537,547 ----
***************
*** 550,557 ****
  	 * like they do in the name lookup cache code.
  	 * That's too much work for now.
  	 */
- 	VOP_INACTIVE(lowervp, ap->a_p);
  	VOP_UNLOCK(ap->a_vp, 0, ap->a_p);
  	return (0);
  }
--- 549,556 ----
  	 * like they do in the name lookup cache code.
  	 * That's too much work for now.
  	 */
  	VOP_UNLOCK(ap->a_vp, 0, ap->a_p);
+ 	vrecycle(ap->a_vp, (struct simplelock *) 0, ap->a_p);
  	return (0);
  }
***************
*** 580,585 ****
--- 579,626 ----
  }
  
  static int
+ null_rename(ap)
+ 	struct vop_rename_args /* {
+ 		struct vnodeop_desc *a_desc;
+ 		struct vnode *a_fdvp;
+ 		struct vnode *a_fvp;
+ 		struct componentname *a_fcnp;
+ 		struct vnode *a_tdvp;
+ 		struct vnode *a_tvp;
+ 		struct componentname *a_tcnp;
+ 	} */ *ap;
+ {
+ 	/*
+ 	 * XXX:
+ 	 * The rename system call calls vnode_pager_uncache on
+ 	 * the upper vnode. Propagate this to lower layers.
+ 	 */
+ 	if (ap->a_tvp)
+ 		(void) vnode_pager_uncache(NULLVPTOLOWERVP(ap->a_tvp),
+ 			ap->a_tcnp->cn_proc);
+ 	return null_bypass((struct vop_generic_args *) ap);
+ }
+ 
+ static int
+ null_remove(ap)
+ 	struct vop_remove_args /* {
+ 		struct vnodeop_desc *a_desc;
+ 		struct vnode *a_dvp;
+ 		struct vnode *a_vp;
+ 		struct componentname *a_cnp;
+ 	} */ *ap;
+ {
+ 	/*
+ 	 * XXX:
+ 	 * The unlink system call calls vnode_pager_uncache on
+ 	 * the upper vnode. Propagate this to lower layers.
+ 	 */
+ 	(void) vnode_pager_uncache(NULLVPTOLOWERVP(ap->a_vp),
+ 		ap->a_cnp->cn_proc);
+ 	return null_bypass((struct vop_generic_args *) ap);
+ }
+ 
+ static int
  null_print(ap)
  	struct vop_print_args /* {
  		struct vnode *a_vp;
***************
*** 654,659 ****
--- 695,702 ----
  	{ &vop_lookup_desc, (vop_t *) null_lookup },
  	{ &vop_print_desc, (vop_t *) null_print },
  	{ &vop_reclaim_desc, (vop_t *) null_reclaim },
+ 	{ &vop_remove_desc, (vop_t *) null_remove },
+ 	{ &vop_rename_desc, (vop_t *) null_rename },
  	{ &vop_setattr_desc, (vop_t *) null_setattr },
  	{ &vop_strategy_desc, (vop_t *) null_strategy },
  	{ &vop_unlock_desc, (vop_t *) null_unlock },

- Tor Egge