Date:      Sat, 8 Sep 2012 19:13:48 +0300
From:      Konstantin Belousov <kostikbel@gmail.com>
To:        Gleb Kurtsou <gleb.kurtsou@gmail.com>
Cc:        pho@freebsd.org, fs@freebsd.org
Subject:   Re: Nullfs shared lookup
Message-ID:  <20120908161348.GG33100@deviant.kiev.zoral.com.ua>
In-Reply-To: <20120908045921.GA1419@reks>
References:  <20120905091854.GD33100@deviant.kiev.zoral.com.ua> <20120908045921.GA1419@reks>



On Fri, Sep 07, 2012 at 09:59:21PM -0700, Gleb Kurtsou wrote:
> On (05/09/2012 12:18), Konstantin Belousov wrote:
> > I, together with Peter Holm, developed a patch to enable shared lookups
> > on nullfs mounts when the lower filesystem allows shared lookups. The lack
> > of shared lookup support in nullfs is quite visible on any VFS-intensive
> > workload that utilizes path translations. In particular, it was a complaint
> > at $dayjob which started me thinking about this issue.
> >
> > There are two problems which prevent direct translation of the lower
> > mount's shared lookup bit into the nullfs upper mount's bit:
> >
> > 1. When vfs_lookup() calls VOP_LOOKUP() for nullfs, which passes the
> > lookup operation to the lower fs, the resulting vnode is often only
> > shared-locked. Then null_nodeget() cannot instantiate a covering vnode
> > for the lower vnode, since insmntque1() and null_hashins() require an
> > exclusive lock on the lower vnode.
> >
> > The solution is straightforward: if the null hash fails to find a
> > pre-existing nullfs vnode for the lower vnode, the lower vnode lock is
> > upgraded.
> >
> > 2. (More serious.) Nullfs reclaims its vnodes on deactivation. The cause
> > is nullfs' inability to detect reclamation of the lower vnode.
> > Reclaiming a nullfs vnode at deactivation time prevents a reference
> > to the lower vnode from becoming stale.
> >
> > Unfortunately, this means that all lookups on nullfs need an exclusive
> > lock to instantiate the upper vnode, which is never cached.
> >
> > The solution which we propose is to add a VFS notification to the upper
> > filesystem about reclamation of the vnode in the lower filesystem. Now,
> > vgone() calls the new VFS op vfs_reclaim_lowervp() with an argument
> > lowervp, the vnode which is being reclaimed. It is possible to register
> > several reclamation event listeners, to correctly handle the case of
> > several nullfs mounts over the same directory.
> >
> > For a filesystem not having nullfs mounts over it, the added overhead
> > is a single mount interlock lock/unlock in the vnode reclamation path.
> >
> > Benchmarks consisting of up to 1K threads doing parallel stat(2) on the
> > same file demonstrate almost constant execution time, independent of the
> > number of running threads. Without the patch, the execution time between
> > a single-threaded run and a run with 1024 threads performing the same
> > total count of stat(2) calls differs by a factor of 6.
> >
> > A somewhat problematic detail, IMO, is that the nullfs reclamation
> > procedure calls vput() on the lowervp vnode, temporarily unlocking the
> > vnode being reclaimed. This seems to be fine for MPSAFE filesystems, but
> > non-MPSAFE code often puts a partially initialized vnode on some globally
> > visible list, and can later decide that the half-constructed vnode is
> > not needed. If a nullfs mount is created above such a filesystem, then
> > other threads might catch such an improperly initialized vnode. Instead
> > of trying to overcome this case, e.g. by recursing on the lower vnode
> > lock in null_reclaim_lowervp(), I decided to rely on the upcoming
> > removal of non-MPSAFE filesystems support.
> >
> > I think that unionfs can also benefit from this mechanism, but I did
> > not even look at unionfs.
> >
> > Patch is available at
> > http://people.freebsd.org/~kib/misc/nullfs_shared_lookup.1.patch
> > It survived stress2 torturing.
> >
> > Comments?
>
> I only had a quick glance at the patch, sorry if I missed something
> obvious.  How do we achieve propagation of rename/rm/rmdir to the upper
> level name cache?

We don't.

If you look at the nullfs vnode op table, you should note that nullfs
provides a bypass function for vop_lookup. To use the name cache, a
filesystem should set vop_lookup to vfs_cache_lookup and implement
vop_cachedlookup. See for instance UFS.

The cache avoidance was the reason why a working null_vptocnp() was a
high-priority item (together with the weird locking protocol for that
vop).
