Date:      Fri, 27 May 2022 23:56:32 +0000
From:      Rick Macklem <rmacklem@uoguelph.ca>
To:        Andreas Kempe <kempe@lysator.liu.se>
Cc:        "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject:   Re: FreeBSD 12.3/13.1 NFS client hang
Message-ID:  <YQBPR0101MB9742BDE2175F07CF23A7CD5ADDD89@YQBPR0101MB9742.CANPRD01.PROD.OUTLOOK.COM>
In-Reply-To: <YpFM2bSMscG4ekc9@shipon.lysator.liu.se>
References:  <YpEwxdGCouUUFHiE@shipon.lysator.liu.se> <YQBPR0101MB9742280313FC17543132A61CDDD89@YQBPR0101MB9742.CANPRD01.PROD.OUTLOOK.COM> <YpFM2bSMscG4ekc9@shipon.lysator.liu.se>

Andreas Kempe <kempe@lysator.liu.se> wrote:
> On Fri, May 27, 2022 at 08:59:57PM +0000, Rick Macklem wrote:
> > Andreas Kempe <kempe@lysator.liu.se> wrote:
> > > Hello everyone!
> > >
> > > I'm having issues with the NFS clients on FreeBSD 12.3 and 13.1
> > > systems hanging when using a CentOS 7 server.
> > First, make sure you are using hard mounts. "soft" or "intr" mounts
> > won't work and will mess up the session sooner or later. (A messed up
> > session could result in no free slots on the session and that will
> > wedge threads in nfsv4_sequencelookup() as you describe.)
> > (This is briefly described in the BUGS section of "man mount_nfs".)
> >
>
> I had totally missed that soft and interruptible mounts have these
> issues. I switched the FreeBSD machines to soft and intr on purpose
> to be able to fix hung mounts without having to restart the machine on
> NFS hangs. Since they are shared machines, it is an inconvenience for
> other users if one user causes a hang.
Usually, a "umount -N <mnt_path>" should dismount a hung mount=0A=
point.  It can take a couple of minutes to complete.=0A=
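
Just to make the option side concrete, a hard mount is simply what you
get when "soft" and "intr" are left out. A sketch of an /etc/fstab line
(the server name, export path and mount point are made up here, and I'm
assuming an NFSv4.1 mount as in this thread):

nfsserver:/export/home  /mnt/home  nfs  rw,hard,nfsv4,minorversion=1  0  0

and if such a mount does wedge, the forced dismount would be:

# umount -N /mnt/home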

> Switching our test machine back to hard mounts did prevent recursive
> grep from immediately causing the slot type hang again.
>
> > Do a:
> > # nfsstat -m
> > on the clients and look for "hard".
> >
> > Next, is there anything logged on the console for the 13.1 client(s)?
> > (13.1 has some diagnostics for things like a server replying with the
> >  wrong session slot#.)
> >
>
> The one thing we have seen logged are messages along the lines of:
> kernel: newnfs: server 'mail' error: fileid changed. fsid 4240eca6003a052a:0: expected fileid 0x22, got 0x2. (BROKEN NFS SERVER OR MIDDLEWARE)
It means that the server returned a different fileid number for the same
file, although it should never change.
There's a description in a comment in sys/fs/nfsclient/nfs_clport.c.
I doubt the broken middleware is still around anywhere. I never knew the
details, since the guy that told me about it was under NDA to the
company that sold it. It cached Getattr replies and would sometimes
return the wrong cached entry. I think it only worked for NFSv3, anyhow.

However, it does indicate something is seriously wrong, probably on the
server end.
(If you can capture packets when it gets logged, we could look at them
in Wireshark.)
--> I'm not sure if a soft mount could somehow cause this?

The diagnostics I was referring to would be things like "Wrong session"
or "freeing free slot".
It was these that identified the Amazon EFS bug I mention later.

> > Also, maybe I'm old fashioned, but I find "ps axHl" useful, since it
> > shows where all the processes are sleeping.
> > And "procstat -kk" covers all of the locks.
> >
>
> I don't know if it is a matter of being old fashioned as much as one
> of taste. :) In future dumps, I can provide both ps axHl and
> procstat -kk.
Ok. Let's see how things go with hard mounts.
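
As an aside, when the next hang does happen, it may help to grab
everything in one pass; something along these lines (the output file
names are just examples) should be enough for a first look:

# ps axHl > /var/tmp/ps-hang.txt
# procstat -kk -a > /var/tmp/procstat-hang.txt
# nfsstat -m > /var/tmp/nfsstat-hang.txt

That captures the sleep states, the kernel stacks and the actual mount
flags from the same moment.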

> > > Below are procstat kstack $PID invocations showing where the
> > > processes have hung. In the nfsv4_sequencelookup case, they seem
> > > to be hung waiting for nfsess_slots to have an available slot. In
> > > the second nfs_lock case, it seems the processes are stuck waiting
> > > on vnode locks.
> > >
> > > These issues seem to appear at random, but also if operations that
> > > open a lot of files or create a lot of file locks are used. An
> > > example that can often provoke a hang is performing a recursive
> > > grep through a large file hierarchy like the FreeBSD codebase.
> > >
> > > The NFS code is large and complicated so any advice is appreciated!
> > Yea. I'm the author and I don't know exactly what it all does;-)
> >
> > > Cordially,
> > > Andreas Kempe
> > >
> >
> > [...]
> >
> > Not very useful unless you have all the processes and their locks to
> > try and figure out what is holding the vnode locks.
> >
>
> Yes, I sent this mostly in the hope that it might be something that
> someone has seen before. I understand that more verbose information is
> needed to track down the lock contention.
There is PR#260011. It is similar and he was also using soft mounts,
although he is now trying hard mounts. Also, we now know that the Amazon
EFS server has a serious bug where it sometimes replies with the wrong
slotid.

> I'll switch our machines back to using hard mounts and try to get as
> much diagnostic information as possible when the next lockup happens.
>
> Do you have any good suggestions for tracking down the issue? I've
> been contemplating enabling WITNESS or building with debug information
> to be able to hook in the kernel debugger.
I don't think WITNESS or the kernel debugger will help.
Beyond what you get from "ps axHl", whatever goes wrong has already
happened by the time things hang.
If you can reproduce it for a hard mount, you could capture packets via:
# tcpdump -s 0 -w out.pcap host <nfs-server>
Tcpdump is useless at decoding NFS, but Wireshark can decode the
out.pcap quite nicely. I can look at the out.pcap or, if you look at it
yourself, start by looking for NFSv4-specific errors.
--> The client will usually log if it gets one of these. It will be an
error # > 10000.
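
If it helps, once the capture is open in Wireshark, a display filter is
usually the quickest way to find the failing RPCs. Something like the
following (the exact field name can differ between Wireshark versions,
so check it in the field picker) should show only the replies that
carry an error status:

nfs.status != 0

For NFSv4 the interesting ones are the codes above 10000, such as
NFS4ERR_BADSESSION.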

Good luck with it, rick

> Thank you very much for your reply!
> Cordially,
> Andreas Kempe
>
> > rick
> >
> >



