From: Mateusz Guzik <mjg@FreeBSD.org>
Date: Wed, 23 Sep 2020 10:44:49 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r366070 - head/sys/kern
Message-Id: <202009231044.08NAin7w012048@repo.freebsd.org>
X-SVN-Group: head
X-SVN-Commit-Author: mjg
X-SVN-Commit-Paths: head/sys/kern
X-SVN-Commit-Revision: 366070
X-SVN-Commit-Repository: base
Author: mjg
Date: Wed Sep 23 10:44:49 2020
New Revision: 366070
URL: https://svnweb.freebsd.org/changeset/base/366070

Log:
  cache: reimplement purgevfs to iterate vnodes instead of the entire hash

  The entire cache scan was a leftover from the old implementation. It is
  incredibly wasteful in the presence of several mount points and does not
  win much even for single ones.

Modified:
  head/sys/kern/vfs_cache.c

Modified: head/sys/kern/vfs_cache.c
==============================================================================
--- head/sys/kern/vfs_cache.c	Wed Sep 23 10:42:41 2020	(r366069)
+++ head/sys/kern/vfs_cache.c	Wed Sep 23 10:44:49 2020	(r366070)
@@ -491,20 +491,6 @@ static int vn_fullpath_dir(struct vnode *vp, struct vn
 
 static MALLOC_DEFINE(M_VFSCACHE, "vfscache", "VFS name cache entries");
 
-static int cache_yield;
-SYSCTL_INT(_vfs_cache, OID_AUTO, yield, CTLFLAG_RD, &cache_yield, 0,
-    "Number of times cache called yield");
-
-static void __noinline
-cache_maybe_yield(void)
-{
-
-	if (should_yield()) {
-		cache_yield++;
-		kern_yield(PRI_USER);
-	}
-}
-
 static inline void
 cache_assert_vlp_locked(struct mtx *vlp)
 {
@@ -1212,51 +1198,6 @@ cache_zap_locked_bucket(struct namecache *ncp, struct
 	return (cache_zap_unlocked_bucket(ncp, cnp, dvp, dvlp, vlp, hash, blp));
 }
 
-static int
-cache_zap_locked_bucket_kl(struct namecache *ncp, struct mtx *blp,
-    struct mtx **vlpp1, struct mtx **vlpp2)
-{
-	struct mtx *dvlp, *vlp;
-
-	cache_assert_bucket_locked(ncp);
-
-	dvlp = VP2VNODELOCK(ncp->nc_dvp);
-	vlp = NULL;
-	if (!(ncp->nc_flag & NCF_NEGATIVE))
-		vlp = VP2VNODELOCK(ncp->nc_vp);
-	cache_sort_vnodes(&dvlp, &vlp);
-
-	if (*vlpp1 == dvlp && *vlpp2 == vlp) {
-		cache_zap_locked(ncp);
-		cache_unlock_vnodes(dvlp, vlp);
-		*vlpp1 = NULL;
-		*vlpp2 = NULL;
-		return (0);
-	}
-
-	if (*vlpp1 != NULL)
-		mtx_unlock(*vlpp1);
-	if (*vlpp2 != NULL)
-		mtx_unlock(*vlpp2);
-	*vlpp1 = NULL;
-	*vlpp2 = NULL;
-
-	if (cache_trylock_vnodes(dvlp, vlp) == 0) {
-		cache_zap_locked(ncp);
-		cache_unlock_vnodes(dvlp, vlp);
-		return (0);
-	}
-
-	mtx_unlock(blp);
-	*vlpp1 = dvlp;
-	*vlpp2 = vlp;
-	if (*vlpp1 != NULL)
-		mtx_lock(*vlpp1);
-	mtx_lock(*vlpp2);
-	mtx_lock(blp);
-	return (EAGAIN);
-}
-
 static __noinline int
 cache_remove_cnp(struct vnode *dvp, struct componentname *cnp)
 {
@@ -2316,14 +2257,26 @@ retry:
 	}
 }
 
+/*
+ * Opportunistic check to see if there is anything to do.
+ */
+static bool
+cache_has_entries(struct vnode *vp)
+{
+
+	if (LIST_EMPTY(&vp->v_cache_src) && TAILQ_EMPTY(&vp->v_cache_dst) &&
+	    vp->v_cache_dd == NULL)
+		return (false);
+	return (true);
+}
+
 void
 cache_purge(struct vnode *vp)
 {
 	struct mtx *vlp;
 
 	SDT_PROBE1(vfs, namecache, purge, done, vp);
-	if (LIST_EMPTY(&vp->v_cache_src) && TAILQ_EMPTY(&vp->v_cache_dst) &&
-	    vp->v_cache_dd == NULL)
+	if (!cache_has_entries(vp))
 		return;
 	vlp = VP2VNODELOCK(vp);
 	mtx_lock(vlp);
@@ -2418,49 +2371,25 @@ cache_rename(struct vnode *fdvp, struct vnode *fvp, st
 void
 cache_purgevfs(struct mount *mp, bool force)
 {
-	TAILQ_HEAD(, namecache) ncps;
-	struct mtx *vlp1, *vlp2;
-	struct mtx *blp;
-	struct nchashhead *bucket;
-	struct namecache *ncp, *nnp;
-	u_long i, j, n_nchash;
-	int error;
+	struct vnode *vp, *mvp;
 
-	/* Scan hash tables for applicable entries */
 	SDT_PROBE1(vfs, namecache, purgevfs, done, mp);
 	if (!force && mp->mnt_nvnodelistsize <= ncpurgeminvnodes)
 		return;
-	TAILQ_INIT(&ncps);
-	n_nchash = nchash + 1;
-	vlp1 = vlp2 = NULL;
-	for (i = 0; i < numbucketlocks; i++) {
-		blp = (struct mtx *)&bucketlocks[i];
-		mtx_lock(blp);
-		for (j = i; j < n_nchash; j += numbucketlocks) {
-retry:
-			bucket = &nchashtbl[j];
-			CK_SLIST_FOREACH_SAFE(ncp, bucket, nc_hash, nnp) {
-				cache_assert_bucket_locked(ncp);
-				if (ncp->nc_dvp->v_mount != mp)
-					continue;
-				error = cache_zap_locked_bucket_kl(ncp, blp,
-				    &vlp1, &vlp2);
-				if (error != 0)
-					goto retry;
-				TAILQ_INSERT_HEAD(&ncps, ncp, nc_dst);
-			}
-		}
-		mtx_unlock(blp);
-		if (vlp1 == NULL && vlp2 == NULL)
-			cache_maybe_yield();
-	}
-	if (vlp1 != NULL)
-		mtx_unlock(vlp1);
-	if (vlp2 != NULL)
-		mtx_unlock(vlp2);
-	TAILQ_FOREACH_SAFE(ncp, &ncps, nc_dst, nnp) {
-		cache_free(ncp);
+	/*
+	 * Somewhat wasteful iteration over all vnodes. Would be better to
+	 * support filtering and avoid the interlock to begin with.
+	 */
+	MNT_VNODE_FOREACH_ALL(vp, mp, mvp) {
+		if (!cache_has_entries(vp)) {
+			VI_UNLOCK(vp);
+			continue;
+		}
+		vholdl(vp);
+		VI_UNLOCK(vp);
+		cache_purge(vp);
+		vdrop(vp);
+	}
 }