From: Rick Macklem
To: Mahmoud Al-Qudsi
Cc: freebsd-stable@freebsd.org
Date: Thu, 21 May 2015 08:29:31 -0400 (EDT)
Message-ID: <1600389691.41938009.1432211371089.JavaMail.root@uoguelph.ca>
Subject: Re: Status of NFS4.1 FS_RECLAIM in FreeBSD 10.1?
Mahmoud Al-Qudsi wrote:
> On May 20, 2015, at 8:57 PM, Rick Macklem wrote:
> > Only the global RECLAIM_COMPLETE is implemented. I'll be honest that
> > I don't even really understand what the "single fs reclaim_complete"
> > semantics are and, as such, it isn't implemented.
>
> Thanks for verifying that.
>
> > I think it is meant to be used when a file system is migrated from
> > one server to another (transferring the locks to the new server) or
> > something like that.
> > Migration/replication isn't supported. Maybe someday if I figure out
> > what the RFC expects the server to do for this case.
>
> I wasn't clear on whether this was lock reclaiming or block reclaiming.
> Thanks.
>
> >> I can mount and use NFSv3 shares just fine with ESXi from this same
> >> server, and can mount the same shares as NFSv4 from other clients
> >> (e.g. OS X) as well.
> >>
> > This is NFSv4.1 specific, so NFSv4.0 should work, I think. Or just
> > use NFSv3.
> >
> > rick

Btw, here's a snippet from RFC 5661 (around page 567) that I think
clarifies what the client should be doing on a mount:

   Whenever a client establishes a new client ID and before it does the
   first non-reclaim operation that obtains a lock, it MUST send a
   RECLAIM_COMPLETE with rca_one_fs set to FALSE, even if there are no
   locks to reclaim.  If non-reclaim locking operations are done before
   the RECLAIM_COMPLETE, an NFS4ERR_GRACE error will be returned.
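The RFC text above amounts to a small state machine on the server side: per client ID, non-reclaim lock operations fail with NFS4ERR_GRACE until a global RECLAIM_COMPLETE (rca_one_fs set to FALSE) has been received. Here is a toy Python model of that sequence; this is purely illustrative (class and method names are invented), not the FreeBSD nfsd implementation:

```python
# Toy model of the NFSv4.1 mount-time sequence quoted from RFC 5661:
# the client MUST send RECLAIM_COMPLETE with rca_one_fs=FALSE before
# its first non-reclaim locking operation, or the server answers
# NFS4ERR_GRACE. All names here are invented for illustration.

NFS4_OK = 0
NFS4ERR_GRACE = 10013  # error value from RFC 5661, section 15.1

class ToyServer:
    def __init__(self):
        # Track, per client ID, whether the global RECLAIM_COMPLETE
        # has been seen for that client.
        self.reclaim_complete = {}

    def establish_client_id(self, client_id):
        self.reclaim_complete[client_id] = False

    def reclaim_complete_op(self, client_id, rca_one_fs):
        # Only the global form (rca_one_fs == False) ends the client's
        # reclaim phase; the single-fs form (rca_one_fs == True) is the
        # case the FreeBSD server does not implement.
        if not rca_one_fs:
            self.reclaim_complete[client_id] = True
        return NFS4_OK

    def lock_op(self, client_id, reclaim=False):
        # Non-reclaim lock requests fail with NFS4ERR_GRACE until the
        # global RECLAIM_COMPLETE arrives.
        if not reclaim and not self.reclaim_complete[client_id]:
            return NFS4ERR_GRACE
        return NFS4_OK

server = ToyServer()
server.establish_client_id("client-1")
early = server.lock_op("client-1")  # before RECLAIM_COMPLETE: rejected
server.reclaim_complete_op("client-1", rca_one_fs=False)
later = server.lock_op("client-1")  # after RECLAIM_COMPLETE: allowed
print(early == NFS4ERR_GRACE, later == NFS4_OK)
```

A client that skips the RECLAIM_COMPLETE (or only ever sends the single-fs form) stays stuck behind the grace check, which matches the mount failure being discussed.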
It clearly states that rca_one_fs should be FALSE, which is what all
the clients I have tested against do.

rick

> For some reason, ESXi doesn't do NFSv4.0, only v3 or v4.1.
>
> I am using NFSv3 for now, but unless I'm mistaken, since FreeBSD
> supports neither "nohide" nor "crossmnt", there is no way for a
> single export(/import) to cross ZFS filesystem boundaries.
>
> I am using ZFS snapshots to manage virtual machine images; each
> machine has its own ZFS filesystem so I can snapshot and roll back
> individually. But this means that under NFSv3 (so far as I can
> tell), each "folder" (ZFS fs) must be mounted separately on the ESXi
> host. I can get around exporting them each individually with the
> -alldirs parameter, but client-side there does not seem to be a way
> of traversing ZFS filesystem mounts without explicitly mounting each
> and every one - a maintenance nightmare if there ever was one.
>
> The only thing I can think of would be unions for the top-level
> directory, but I'm very, very leery of the nullfs/unionfs modules,
> as they've been a source of system instability for us in the past
> (deadlocks, undetected lock inversions, etc.). That, and I would
> really rather have a maintenance nightmare than a hack.
>
> Would you have any other suggestions?
>
> Thanks,
>
> Mahmoud
>
> _______________________________________________
> freebsd-stable@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to
> "freebsd-stable-unsubscribe@freebsd.org"
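For reference, the per-filesystem NFSv3 setup Mahmoud describes typically looks something like the /etc/exports fragment below: one line per ZFS dataset, because an NFSv3 export does not cross into child filesystems. The paths and client address are hypothetical, invented for illustration:

```
# /etc/exports on the FreeBSD server -- one line per ZFS filesystem,
# since an NFSv3 export stops at dataset boundaries.
# Paths and the 192.0.2.10 client address are hypothetical examples.
/tank/vm/vm01 -alldirs -maproot=root 192.0.2.10
/tank/vm/vm02 -alldirs -maproot=root 192.0.2.10
/tank/vm/vm03 -alldirs -maproot=root 192.0.2.10
```

One way to keep at least the server side manageable is the inheritable ZFS sharenfs property (e.g. `zfs set sharenfs="-maproot=root 192.0.2.10" tank/vm`), which makes each child dataset export itself via the ZFS-managed exports file as datasets are created and destroyed; the client, however, still has to mount every filesystem individually, which is the nightmare being described.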