From owner-svn-src-head@freebsd.org Sun Jan 19 17:47:05 2020
Message-Id: <202001191747.00JHl5c7030795@repo.freebsd.org>
From: Mateusz Guzik
Date: Sun, 19 Jan 2020 17:47:05 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r356884 - head/sys/kern
X-SVN-Group: head
X-SVN-Commit-Author: mjg
X-SVN-Commit-Paths: head/sys/kern
X-SVN-Commit-Revision: 356884
X-SVN-Commit-Repository: base
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
List-Id: SVN commit messages for the src tree for head/-current

Author: mjg
Date: Sun Jan 19 17:47:04 2020
New Revision: 356884
URL: https://svnweb.freebsd.org/changeset/base/356884

Log:
  vfs: allow v_holdcnt to transition 0->1 without the interlock

  Since r356672 ("vfs: rework vnode list management") there is nothing to
  do apart from altering freevnodes count, but this much can be safely
  done based on the result of atomic_fetchadd.

  Reviewed by:	kib
  Tested by:	pho
  Differential Revision:	https://reviews.freebsd.org/D23186

Modified:
  head/sys/kern/vfs_subr.c

Modified: head/sys/kern/vfs_subr.c
==============================================================================
--- head/sys/kern/vfs_subr.c	Sun Jan 19 17:05:26 2020	(r356883)
+++ head/sys/kern/vfs_subr.c	Sun Jan 19 17:47:04 2020	(r356884)
@@ -2826,38 +2826,26 @@ v_decr_devcount(struct vnode *vp)
  * see doomed vnodes.  If inactive processing was delayed in
  * vput try to do it here.
  *
- * Both holdcnt and usecount can be manipulated using atomics without holding
- * any locks except in these cases which require the vnode interlock:
- * holdcnt: 1->0 and 0->1
- * usecount: 0->1
- *
- * usecount is permitted to transition 1->0 without the interlock because
- * vnode is kept live by holdcnt.
+ * usecount is manipulated using atomics without holding any locks,
+ * except when transitioning 0->1 in which case the interlock is held.
+
+ * holdcnt is manipulated using atomics without holding any locks,
+ * except when transitioning 1->0 in which case the interlock is held.
  */
-static enum vgetstate __always_inline
-_vget_prep(struct vnode *vp, bool interlock)
+enum vgetstate
+vget_prep(struct vnode *vp)
 {
 	enum vgetstate vs;
 
 	if (refcount_acquire_if_not_zero(&vp->v_usecount)) {
 		vs = VGET_USECOUNT;
 	} else {
-		if (interlock)
-			vholdl(vp);
-		else
-			vhold(vp);
+		vhold(vp);
 		vs = VGET_HOLDCNT;
 	}
 	return (vs);
 }
 
-enum vgetstate
-vget_prep(struct vnode *vp)
-{
-
-	return (_vget_prep(vp, false));
-}
-
 int
 vget(struct vnode *vp, int flags, struct thread *td)
 {
@@ -2865,7 +2853,7 @@ vget(struct vnode *vp, int flags, struct thread *td)
 
 	MPASS(td == curthread);
 
-	vs = _vget_prep(vp, (flags & LK_INTERLOCK) != 0);
+	vs = vget_prep(vp);
 	return (vget_finish(vp, flags, vs));
 }
 
@@ -3234,50 +3222,30 @@ vunref(struct vnode *vp)
 	vputx(vp, VPUTX_VUNREF);
 }
 
-/*
- * Increase the hold count and activate if this is the first reference.
- */
-static void
-vhold_activate(struct vnode *vp)
+void
+vhold(struct vnode *vp)
 {
 	struct vdbatch *vd;
+	int old;
 
-	ASSERT_VI_LOCKED(vp, __func__);
-	VNASSERT(vp->v_holdcnt == 0, vp,
-	    ("%s: wrong hold count", __func__));
-	VNASSERT(vp->v_op != NULL, vp,
-	    ("%s: vnode already reclaimed.", __func__));
+	CTR2(KTR_VFS, "%s: vp %p", __func__, vp);
+	old = atomic_fetchadd_int(&vp->v_holdcnt, 1);
+	VNASSERT(old >= 0, vp, ("%s: wrong hold count %d", __func__, old));
+	if (old != 0)
+		return;
 	critical_enter();
 	vd = DPCPU_PTR(vd);
 	vd->freevnodes--;
 	critical_exit();
-	refcount_acquire(&vp->v_holdcnt);
 }
 
 void
-vhold(struct vnode *vp)
-{
-
-	ASSERT_VI_UNLOCKED(vp, __func__);
-	CTR2(KTR_VFS, "%s: vp %p", __func__, vp);
-	if (refcount_acquire_if_not_zero(&vp->v_holdcnt))
-		return;
-	VI_LOCK(vp);
-	vholdl(vp);
-	VI_UNLOCK(vp);
-}
-
-void
 vholdl(struct vnode *vp)
 {
 
 	ASSERT_VI_LOCKED(vp, __func__);
 	CTR2(KTR_VFS, "%s: vp %p", __func__, vp);
-	if (vp->v_holdcnt > 0) {
-		refcount_acquire(&vp->v_holdcnt);
-		return;
-	}
-	vhold_activate(vp);
+	vhold(vp);
 }
 
 void
@@ -3417,8 +3385,6 @@ vdrop_deactivate(struct vnode *vp)
 	    ("vdrop: returning doomed vnode"));
 	VNASSERT(vp->v_op != NULL, vp,
 	    ("vdrop: vnode already reclaimed."));
-	VNASSERT(vp->v_holdcnt == 0, vp,
-	    ("vdrop: freeing when we shouldn't"));
 	VNASSERT((vp->v_iflag & VI_OWEINACT) == 0, vp,
 	    ("vnode with VI_OWEINACT set"));
 	VNASSERT((vp->v_iflag & VI_DEFINACT) == 0, vp,
@@ -3426,9 +3392,18 @@ vdrop_deactivate(struct vnode *vp)
 	if (vp->v_mflag & VMP_LAZYLIST) {
 		mp = vp->v_mount;
 		mtx_lock(&mp->mnt_listmtx);
-		vp->v_mflag &= ~VMP_LAZYLIST;
-		TAILQ_REMOVE(&mp->mnt_lazyvnodelist, vp, v_lazylist);
-		mp->mnt_lazyvnodelistsize--;
+		VNASSERT(vp->v_mflag & VMP_LAZYLIST, vp, ("lost VMP_LAZYLIST"));
+		/*
+		 * Don't remove the vnode from the lazy list if another thread
+		 * has increased the hold count. It may have re-enqueued the
+		 * vnode to the lazy list and is now responsible for its
+		 * removal.
+		 */
+		if (vp->v_holdcnt == 0) {
+			vp->v_mflag &= ~VMP_LAZYLIST;
+			TAILQ_REMOVE(&mp->mnt_lazyvnodelist, vp, v_lazylist);
+			mp->mnt_lazyvnodelistsize--;
+		}
 		mtx_unlock(&mp->mnt_listmtx);
 	}
 	vdbatch_enqueue(vp);