Date: Tue, 01 Jul 1997 21:32:38 +0200
From: Sebastian Lederer <lederer@bonn-online.com>
To: Terry Lambert <terry@lambert.org>
Cc: freebsd-hackers@FreeBSD.ORG
Subject: NFS locking, was: Re: NFS V3 is it stable?
Message-ID: <33B95B56.41C67EA6@bonn-online.com>
References: <199707011709.KAA18598@phaeton.artisoft.com>
Terry Lambert wrote:
> See my other posting; but basically the big obstacles are:

Let me see if I understand all the issues correctly:

> 1) The POSIX semantics make it difficult for rpc.lockd
>    to have only one file handle per file regardless of
>    the number of clients with the file open. This is
> [...]

So the rpc.lockd (on the server) would have to keep a list of all
active locks on a file and only close the file when all locks are
cleared.

> 2) The assertion of a lock can not immediately result in
>    a coalesce if the operation may be backed out. But
> [...]

This should only affect the client: if the lock fails on the server,
nothing happens there; only an NLM_DENIED RPC is sent back. The client
then has to deal with the mess, because it has already set the lock
locally.

> [...]
>    allow the client to recover lock state in the event of
>    a transient server failure (ie: the server is rebooted,
>    etc.).

For lock recovery, the lockd on the client would also keep a list of
all active locks and, in case of a server crash, would be notified by
rpc.statd and reissue all lock requests. If a lock request can't be
reissued, the lockd should send a SIGLOST signal to the processes
involved. Correct?

> 4) So that server locking works on all file systems, the
>    lock list must be hung off the vnode instead of the
>    inode; one consequence of this is that it drastically

And I thought that this was already the case...

> [...]
> Doug Rabson has the kernel patches for everything, minus the handle
> conversion call, and minus the POSIX semantic override. There *IS*
> a bug in the namei() code, which I was able to test everywhere but
> the NFS client (I only have one FreeBSD box at this location). If
> you are interested in helping locate this bug, I can send you a test
> framework for kernel memory leak detection, and my test set for
> the namei() buffers, specifically.

Sure, go ahead. I don't have a Sun or anything similar here for
testing against a "real" rpc.lockd, however.
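The server-side bookkeeping described above can be sketched roughly as
follows. This is a purely illustrative user-space simulation; the class
and method names are invented and do not come from any real rpc.lockd
source. The point is only the invariant: one open file per NFS file
handle, a list of all active locks across all clients, and the file is
closed only when the last lock on it is cleared.

```python
class FileLockState:
    """Hypothetical per-file state as a server-side rpc.lockd might keep it:
    a single open descriptor plus every active lock on that file."""

    def __init__(self, fd):
        self.fd = fd            # the one open descriptor for this file
        self.locks = []         # (client, pid, start, length) tuples

    def add(self, client, pid, start, length):
        # Record a granted lock; the file stays open as long as any
        # client still holds a lock on it.
        self.locks.append((client, pid, start, length))

    def release(self, client, pid):
        # Drop all locks held by this (client, pid) owner; close the
        # file only when no owner holds any lock on it any more.
        self.locks = [l for l in self.locks if l[:2] != (client, pid)]
        if not self.locks:
            # a real lockd would call os.close(self.fd) here
            self.fd = None
```

With two clients holding locks, releasing one client's lock leaves the
file open; only the second release closes it.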
I do have two FreeBSD machines connected via Ethernet, so I can do
some non-local testing.

As I have already pointed out, I would be willing to invest some time
in implementing the rpc.lockd. The main problems (from my point of
view) are:

Details of the locking protocol:

 * How are blocking locks implemented?
 * On which side are the locks coalesced? On the client's or on the
   server's rpc.lockd?
 * What is the cookie in the nlm_lockargs struct? (probably used by
   the client to match result messages to its requests)
 * What is the file handle in the nlm_lock struct? (seems to be a
   device/inode/generation number)
 * What is the owner handle in the nlm_lock struct? (the IP address
   of the client? a process id?)

Converting the NFS file handle into an open file: this seems to me
the most important point for the lockd implementation. Without it, I
can't actually lock the file.

Client-side locking: the lock requests must somehow be communicated
from the kernel to the local lockd, which then forwards them to the
server's lockd.

If I know all these details, it should be possible for me to complete
the rpc.lockd implementation. So if anybody has any knowledge of
these issues, please contact me; it would be greatly appreciated. And
of course, if someone else also wants to work on this, you are
welcome. It may still be possible that we end up with an at least
basically working NFS locking implementation :-)

Best regards,
Sebastian Lederer
lederer@bonn-online.com