Date: Sat, 01 Mar 2008 11:33:37 +0200
From: Andriy Gapon <avg@icyb.net.ua>
To: freebsd-arch@freebsd.org
Subject: multiple filesystems sharing/clobbering device vnode
Message-ID: <47C922F1.2050307@icyb.net.ua>
First, a little demonstration suggested by Bruce Evans:
[I hope you will continue reading after the reboot]
1. mount_cd9660 /dev/acd0 /mnt1
2. mount -r /dev/acd0 /mnt2   # -r is important
3. ls -l /mnt1

The issue can be laconically described as follows:
1. We do not disallow multiple read-only mounts of the same device
   (which could be done either on purpose or by accident).
2. All popular (on-disk) filesystems use/clobber the bufobj of the
   device's vnode, even for RO mounts; some (ufs) do that even if the
   mount fails.
3. There are no provisions for such shared access; each filesystem acts
   as if it were the exclusive owner of the vnode and its bufobj.

A small snippet of code that speaks for itself (the most interesting
lines are marked with XXX at the beginning):

int
g_vfs_open(struct vnode *vp, struct g_consumer **cpp, const char *fsname,
    int wr)
{
        struct g_geom *gp;
        struct g_provider *pp;
        struct g_consumer *cp;
        struct bufobj *bo;
        int vfslocked;
        int error;

        g_topology_assert();

        *cpp = NULL;
        pp = g_dev_getprovider(vp->v_rdev);
        if (pp == NULL)
                return (ENOENT);
        gp = g_new_geomf(&g_vfs_class, "%s.%s", fsname, pp->name);
        cp = g_new_consumer(gp);
        g_attach(cp, pp);
        error = g_access(cp, 1, wr, 1);
        if (error) {
                g_wither_geom(gp, ENXIO);
                return (error);
        }
        vfslocked = VFS_LOCK_GIANT(vp->v_mount);
        vnode_create_vobject(vp, pp->mediasize, curthread);
        VFS_UNLOCK_GIANT(vfslocked);
        *cpp = cp;
XXX     bo = &vp->v_bufobj;
XXX     bo->bo_ops = g_vfs_bufops;
XXX     bo->bo_private = cp;
XXX     bo->bo_bsize = pp->sectorsize;
        gp->softc = bo;
        return (error);
}

In addition to this, some filesystems (ufs) directly modify v_bufobj.

I have been pondering this issue for over a month now; I have some ideas,
but they are all wanting in one aspect or another. I would like to hear
the ideas and opinions of the people on this list.

P.S. For those who didn't actually run the test, here is a hand-copied
excerpt from the stack trace:
g_io_request
g_vfs_strategy
ffs_geom_strategy
cd9660_strategy
VOP_STRATEGY_APV
bufstrategy
breadn
bread
cd9660_readdir

-- 
Andriy Gapon
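To make the clobbering concrete, below is a minimal sketch of one possible
direction, not a proposed patch: a hypothetical wrapper (the name
g_vfs_open_exclusive and the exact check are assumptions for illustration)
that refuses to reuse a device vnode whose bufobj has already been pointed
at the g_vfs buf ops by an earlier mount, instead of silently overwriting
the fields marked XXX above. It relies only on the fields shown in the
snippet and assumes the usual geom_vfs declarations are visible; as with
g_vfs_open(), the caller is assumed to hold the GEOM topology lock.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/buf.h>
#include <sys/vnode.h>
#include <geom/geom.h>
#include <geom/geom_vfs.h>

int
g_vfs_open_exclusive(struct vnode *vp, struct g_consumer **cpp,
    const char *fsname, int wr)
{
        struct bufobj *bo;

        bo = &vp->v_bufobj;
        /*
         * If a previous g_vfs_open() already installed the g_vfs buf ops
         * on this device vnode, another filesystem owns the bufobj;
         * refuse instead of overwriting bo_ops/bo_private/bo_bsize.
         */
        if (bo->bo_ops == g_vfs_bufops && bo->bo_private != NULL)
                return (EBUSY);

        return (g_vfs_open(vp, cpp, fsname, wr));
}

Whether EBUSY is the right answer for a second RO mount, or whether the
consumers should instead somehow share the existing bufobj setup, is
exactly the policy question raised above.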