Date: Sun, 4 Apr 2021 23:12:49 +0000
From: Rick Macklem <rmacklem@uoguelph.ca>
To: "tuexen@freebsd.org" <tuexen@freebsd.org>
Cc: "Scheffenegger, Richard" <Richard.Scheffenegger@netapp.com>,
    Youssef GHORBAL <youssef.ghorbal@pasteur.fr>,
    "freebsd-net@freebsd.org" <freebsd-net@freebsd.org>
Subject: Re: NFS Mount Hangs
Message-ID: <YQXPR0101MB09688645194907BBAA6E7C7ADD789@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>
In-Reply-To: <2B189169-C0C9-4DE6-A01A-BE916F10BABA@freebsd.org>
References: <C643BB9C-6B61-4DAC-8CF9-CE04EA7292D0@tildenparkcapital.com>
    <3750001D-3F1C-4D9A-A9D9-98BCA6CA65A4@tildenparkcapital.com>
    <33693DE3-7FF8-4FAB-9A75-75576B88A566@tildenparkcapital.com>
    <D67AF317-D238-4EC0-8C7F-22D54AD5144C@pasteur.fr>
    <YQXPR0101MB09684AB7BEFA911213604467DD669@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>
    <C87066D3-BBF1-44E1-8398-E4EB6903B0F2@tildenparkcapital.com>
    <8E745920-1092-4312-B251-B49D11FE8028@pasteur.fr>
    <YQXPR0101MB0968C44C7C82A3EB64F384D0DD7B9@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>
    <DEF8564D-0FE9-4C2C-9F3B-9BCDD423377C@freebsd.org>
    <YQXPR0101MB0968E0A17D8BCACFAF132225DD7A9@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>
    <SN4PR0601MB3728E392BCA494EAD49605FE86789@SN4PR0601MB3728.namprd06.prod.outlook.com>
    <YQXPR0101MB09686B4F921B96DCAFEBF874DD789@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>
    <765CE1CD-6AAB-4BEF-97C6-C2A1F0FF4AC5@freebsd.org>
    <YQXPR0101MB096876B44F33BAD8991B62C8DD789@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>
    <2B189169-C0C9-4DE6-A01A-BE916F10BABA@freebsd.org>
tuexen@freebsd.org wrote:
>> On 4. Apr 2021, at 22:28, Rick Macklem <rmacklem@uoguelph.ca> wrote:
>>
>> Oops, yes the packet capture is on freefall (forgot to mention that;-).
>> You should be able to:
>> % fetch https://people.freebsd.org/~rmacklem/linuxtofreenfs.pcap
>>
>> Some useful packet #s are:
>> 1949 - partitioning starts
>> 2005 - partition healed
>> 2060 - last RST
>> 2067 - SYN -> gets going again
>>
>> This was taken at the Linux end. I have the FreeBSD end too, although I
>> don't think it tells you anything more.
>Hi Rick,
>
>I would like to look at the FreeBSD side, too.
fetch https://people.freebsd.org/~rmacklem/freetolinuxnfs.pcap

>Do you also know what
>the state of the TCP connection was when the SYN / ACK / RST game was
>going on?
Just ESTABLISHED while the battle goes on.
And it happens when the Send-Q is 0.
(If the Send-Q is not empty, it finds its way to CLOSED.)

If I wait long enough before healing the partition, it will
go to FIN_WAIT_1, and then if I plug it back in, it does not
do battle (at least not for long).

Btw, I have one running now that seems stuck really good.
It has been 20 minutes since I plugged the net cable back in.
(Unfortunately, I didn't have tcpdump running until after
I saw it was not progressing after healing.)
--> There is one difference. There was a 6 minute timeout
enabled on the server krpc for "no activity", which is
now disabled like it is for NFSv4.1 in freebsd-current.
I had forgotten to re-disable it.
So, when it does battle, it might have been the 6 minute
timeout, which would then do the soshutdown(..SHUT_WR)
which kept it from getting "stuck" forever.
--> This time I had to reboot the FreeBSD NFS server to
get the Linux client unstuck, so this one looked a lot
like what has been reported.
The pcap for this one, started after the network was plugged
back in and I noticed it was stuck for quite a while, is here:
fetch https://people.freebsd.org/~rmacklem/stuck.pcap

In it, there is just a bunch of RSTs followed by SYNs sent
from client->FreeBSD and FreeBSD just keeps sending
acks for the old segment back.
--> It looks like FreeBSD did the "RST, ACK" after the
krpc did a soshutdown(..SHUT_WR) on the socket,
for the one you've been looking at.
I'll test some more...
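If you just want to see that exchange without wading through the whole capture,
a plain tcpdump read filter should pull it out (this assumes the mount is on the
standard NFS port 2049; adjust as needed):

# show only the segments with RST or SYN set (the client's side of the battle)
tcpdump -n -r stuck.pcap 'tcp port 2049 and tcp[tcpflags] & (tcp-rst|tcp-syn) != 0'

# or dump everything on the NFS port to see the server's bare acks in between
tcpdump -n -r stuck.pcap 'tcp port 2049'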
>I would like to understand why the reestablishment of the connection
>did not work...
It is looking like it takes either a non-empty send-q or a
soshutdown(..SHUT_WR) to get the FreeBSD socket
out of ESTABLISHED, where it just ignores the RSTs and
SYN packets.

Thanks for looking at it, rick

Best regards
Michael
>
> Have fun with it, rick
>
>
> ________________________________________
> From: tuexen@freebsd.org <tuexen@freebsd.org>
> Sent: Sunday, April 4, 2021 12:41 PM
> To: Rick Macklem
> Cc: Scheffenegger, Richard; Youssef GHORBAL; freebsd-net@freebsd.org
> Subject: Re: NFS Mount Hangs
>
> CAUTION: This email originated from outside of the University of Guelph. Do not click links or open attachments unless you recognize the sender and know the content is safe. If in doubt, forward suspicious emails to IThelp@uoguelph.ca
>
>
>> On 4. Apr 2021, at 17:27, Rick Macklem <rmacklem@uoguelph.ca> wrote:
>>
>> Well, I'm going to cheat and top post, since this is related info. and
>> not really part of the discussion...
>>
>> I've been testing network partitioning between a Linux client (5.2 kernel)
>> and a FreeBSD-current NFS server. I have not gotten a solid hang, but
>> I have had the Linux client doing "battle" with the FreeBSD server for
>> several minutes after un-partitioning the connection.
>>
>> The battle basically consists of the Linux client sending an RST, followed
>> by a SYN.
>> The FreeBSD server ignores the RST and just replies with the same old ack.
>> --> This varies from "just a SYN" that succeeds to 100+ cycles of the above
>> over several minutes.
>>
>> I had thought that an RST was a "pretty heavy hammer", but FreeBSD seems
>> pretty good at ignoring it.
>>
>> A full packet capture of one of these is in /home/rmacklem/linuxtofreenfs.pcap
>> in case anyone wants to look at it.
> On freefall? I would like to take a look at it...
>
> Best regards
> Michael
>>
>> Here's a tcpdump snippet of the interesting part (see the *** comments):
>> 19:10:09.305775 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [P.], seq 202585:202749, ack 212293, win 29128, options [nop,nop,TS val 2073636037 ecr 2671204825], length 164: NFS reply xid 613153685 reply ok 160 getattr NON 4 ids 0/33554432 sz 0
>> 19:10:09.305850 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [.], ack 202749, win 501, options [nop,nop,TS val 2671204825 ecr 2073636037], length 0
>> *** Network is now partitioned...
>>
>> 19:10:09.407840 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 212293:212525, ack 202749, win 501, options [nop,nop,TS val 2671204927 ecr 2073636037], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
>> 19:10:09.615779 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 212293:212525, ack 202749, win 501, options [nop,nop,TS val 2671205135 ecr 2073636037], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
>> 19:10:09.823780 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 212293:212525, ack 202749, win 501, options [nop,nop,TS val 2671205343 ecr 2073636037], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
>> *** Lots of lines snipped.
>>
>>
>> 19:13:41.295783 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>> 19:13:42.319767 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>> 19:13:46.351966 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>> 19:13:47.375790 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>> 19:13:48.399786 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>> *** Network is now unpartitioned...
>>
>> 19:13:48.399990 ARP, Reply nfsv4-new3.home.rick is-at d4:be:d9:07:81:72 (oui Unknown), length 46
>> 19:13:48.400002 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 416692300, win 64240, options [mss 1460,sackOK,TS val 2671421871 ecr 0,nop,wscale 7], length 0
>> 19:13:48.400185 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 212293, win 29127, options [nop,nop,TS val 2073855137 ecr 2671204825], length 0
>> 19:13:48.400273 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [R], seq 964161458, win 0, length 0
>> 19:13:49.423833 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 416692300, win 64240, options [mss 1460,sackOK,TS val 2671424943 ecr 0,nop,wscale 7], length 0
>> 19:13:49.424056 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 212293, win 29127, options [nop,nop,TS val 2073856161 ecr 2671204825], length 0
>> *** This "battle" goes on for 223sec...
>> I snipped out 13 cycles of this "Linux sends an RST, followed by SYN"
>> "FreeBSD replies with same old ACK". In another test run I saw this
>> cycle continue non-stop for several minutes. This time, the Linux
>> client paused for a while (see ARPs below).
>>
>> 19:13:49.424101 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [R], seq 964161458, win 0, length 0
>> 19:13:53.455867 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 416692300, win 64240, options [mss 1460,sackOK,TS val 2671428975 ecr 0,nop,wscale 7], length 0
>> 19:13:53.455991 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 212293, win 29127, options [nop,nop,TS val 2073860193 ecr 2671204825], length 0
>> *** Snipped a bunch of stuff out, mostly ARPs, plus one more RST.
>>
>> 19:16:57.775780 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>> 19:16:57.775937 ARP, Reply nfsv4-new3.home.rick is-at d4:be:d9:07:81:72 (oui Unknown), length 46
>> 19:16:57.980240 ARP, Request who-has nfsv4-new3.home.rick tell 192.168.1.254, length 46
>> 19:16:58.555663 ARP, Request who-has nfsv4-new3.home.rick tell 192.168.1.254, length 46
>> 19:17:00.104701 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [F.], seq 202749, ack 212293, win 29128, options [nop,nop,TS val 2074046846 ecr 2671204825], length 0
>> 19:17:15.664354 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [F.], seq 202749, ack 212293, win 29128, options [nop,nop,TS val 2074062406 ecr 2671204825], length 0
>> 19:17:31.239246 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [R.], seq 202750, ack 212293, win 0, options [nop,nop,TS val 2074077981 ecr 2671204825], length 0
>> *** FreeBSD finally acknowledges the RST 38sec after Linux sent the last
>> of 13 (100+ for another test run).
>>
>> 19:17:51.535979 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 4247692373, win 64240, options [mss 1460,sackOK,TS val 2671667055 ecr 0,nop,wscale 7], length 0
>> 19:17:51.536130 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [S.], seq 661237469, ack 4247692374, win 65535, options [mss 1460,nop,wscale 6,sackOK,TS val 2074098278 ecr 2671667055], length 0
>> *** Now back in business...
>>
>> 19:17:51.536218 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [.], ack 1, win 502, options [nop,nop,TS val 2671667055 ecr 2074098278], length 0
>> 19:17:51.536295 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 1:233, ack 1, win 502, options [nop,nop,TS val 2671667056 ecr 2074098278], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
>> 19:17:51.536346 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 233:505, ack 1, win 502, options [nop,nop,TS val 2671667056 ecr 2074098278], length 272: NFS request xid 697039765 132 getattr fh 0,1/53
>> 19:17:51.536515 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 505, win 29128, options [nop,nop,TS val 2074098279 ecr 2671667056], length 0
>> 19:17:51.536553 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 505:641, ack 1, win 502, options [nop,nop,TS val 2671667056 ecr 2074098279], length 136: NFS request xid 730594197 132 getattr fh 0,1/53
>> 19:17:51.536562 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [P.], seq 1:49, ack 505, win 29128, options [nop,nop,TS val 2074098279 ecr 2671667056], length 48: NFS reply xid 697039765 reply ok 44 getattr ERROR: unk 10063
>>
>> This error 10063 after the partition heals is also "bad news". It indicates the Session
>> (which is supposed to maintain "exactly once" RPC semantics) is broken. I'll admit I
>> suspect a Linux client bug, but will be investigating further.
>>
>> So, hopefully TCP conversant folk can confirm if the above is correct behaviour
>> or if the RST should be ack'd sooner?
>>
>> I could also see this becoming a "forever" TCP battle for other versions of the Linux client.
>>
>> rick
>>
>>
>> ________________________________________
>> From: Scheffenegger, Richard <Richard.Scheffenegger@netapp.com>
>> Sent: Sunday, April 4, 2021 7:50 AM
>> To: Rick Macklem; tuexen@freebsd.org
>> Cc: Youssef GHORBAL; freebsd-net@freebsd.org
>> Subject: Re: NFS Mount Hangs
>>
>> CAUTION: This email originated from outside of the University of Guelph. Do not click links or open attachments unless you recognize the sender and know the content is safe. If in doubt, forward suspicious emails to IThelp@uoguelph.ca
>>
>>
>> For what it's worth, SUSE found two bugs in the Linux nf_conntrack (stateful firewall) and pfifo-fast scheduler, which could conspire to make TCP sessions hang forever.
>>
>> One is a missed update when the client is not using the noresvport mount option, which makes the firewall think RSTs are illegal (and drop them);
>>
>> The fast scheduler can run into an issue if only a single packet should be forwarded (note that this is not the default scheduler, but often recommended for perf, as it runs lockless and at lower CPU cost than pfq, the default). If no other/additional packet pushes out that last packet of a flow, it can become stuck forever...
>>
>> I can try getting the relevant bug info next week...
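As an aside, if the nf_conntrack bug is suspected, the firewall's view of the NFS
connection can be checked on the Linux client, assuming the conntrack-tools package
is installed (2049 is the standard NFS port; adjust if different):

# list the tracked state for connections to the NFS port
conntrack -L -p tcp --dport 2049

# or watch state transitions live while reproducing the partition
conntrack -E -p tcp --dport 2049

If the entry goes stale or disappears while the client keeps sending RSTs, that
would point at the conntrack problem described above.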
>>
>> ________________________________
>> From: owner-freebsd-net@freebsd.org <owner-freebsd-net@freebsd.org> on behalf of Rick Macklem <rmacklem@uoguelph.ca>
>> Sent: Friday, April 2, 2021 11:31:01 PM
>> To: tuexen@freebsd.org <tuexen@freebsd.org>
>> Cc: Youssef GHORBAL <youssef.ghorbal@pasteur.fr>; freebsd-net@freebsd.org <freebsd-net@freebsd.org>
>> Subject: Re: NFS Mount Hangs
>>
>> NetApp Security WARNING: This is an external email. Do not click links or open attachments unless you recognize the sender and know the content is safe.
>>
>>
>> tuexen@freebsd.org wrote:
>>>> On 2. Apr 2021, at 02:07, Rick Macklem <rmacklem@uoguelph.ca> wrote:
>>>>
>>>> I hope you don't mind a top post...
>>>> I've been testing network partitioning between the only Linux client
>>>> I have (5.2 kernel) and a FreeBSD server with the xprtdied.patch
>>>> (does soshutdown(..SHUT_WR) when it knows the socket is broken)
>>>> applied to it.
>>>>
>>>> I'm not enough of a TCP guy to know if this is useful, but here's what
>>>> I see...
>>>>
>>>> While partitioned:
>>>> On the FreeBSD server end, the socket either goes to CLOSED during
>>>> the network partition or stays ESTABLISHED.
>>> If it goes to CLOSED you called shutdown(, SHUT_WR) and the peer also
>>> sent a FIN, but you never called close() on the socket.
>>> If the socket stays in ESTABLISHED, there is no communication ongoing,
>>> I guess, and therefore the server does not even detect that the peer
>>> is not reachable.
>>>> On the Linux end, the socket seems to remain ESTABLISHED for a
>>>> little while, and then disappears.
>>> So how does Linux detect the peer is not reachable?
>> Well, here's what I see in a packet capture in the Linux client once
>> I partition it (just unplug the net cable):
>> - lots of retransmits of the same segment (with ACK) for 54sec
>> - then only ARP queries
>>
>> Once I plug the net cable back in:
>> - ARP works
>> - one more retransmit of the same segment
>> - receives RST from FreeBSD
>> ** So, is this now a "new" TCP connection, despite
>> using the same port#?
>> --> It matters for NFS, since "new connection"
>> implies "must retry all outstanding RPCs".
>> - sends SYN
>> - receives SYN, ACK from FreeBSD
>> --> connection starts working again
>> Always uses same port#.
>>
>> On the FreeBSD server end:
>> - receives the last retransmit of the segment (with ACK)
>> - sends RST
>> - receives SYN
>> - sends SYN, ACK
>>
>> I thought that there was no RST in the capture I looked at
>> yesterday, so I'm not sure if FreeBSD always sends an RST,
>> but the Linux client behaviour was the same. (Sent a SYN, etc.)
>> The socket disappears from the Linux "netstat -a" and I
>> suspect that happens after about 54sec, but I am not sure
>> about the timing.
>>
>>>>
>>>> After unpartitioning:
>>>> On the FreeBSD server end, you get another socket showing up at
>>>> the same port#
>>>> Active Internet connections (including servers)
>>>> Proto Recv-Q Send-Q Local Address        Foreign Address      (state)
>>>> tcp4       0      0 nfsv4-new3.nfsd      nfsv4-linux.678      ESTABLISHED
>>>> tcp4       0      0 nfsv4-new3.nfsd      nfsv4-linux.678      CLOSED
>>>>
>>>> The Linux client shows the same connection ESTABLISHED.
>> But disappears from "netstat -a" for a while during the partitioning.
>>
>>>> (The mount sometimes reports an error. I haven't looked at packet
>>>> traces to see if it retries RPCs or why the errors occur.)
>> I have now done so, as above.
>>
>>>> --> However I never get hangs.
>>>> Sometimes it goes to SYN_SENT for a while and the FreeBSD server
>>>> shows FIN_WAIT_1, but then both ends go to ESTABLISHED and the
>>>> mount starts working again.
>>>>
>>>> The most obvious thing is that the Linux client always keeps using
>>>> the same port#. (The FreeBSD client will use a different port# when
>>>> it does a TCP reconnect after no response from the NFS server for
>>>> a little while.)
>>>>
>>>> What do those TCP conversant folk think?
>>> I guess you are never calling close() on the socket, for which
>>> the connection state is CLOSED.
>> Ok, that makes sense. For this case the Linux client has not done a
>> BindConnectionToSession to re-assign the back channel.
>> I'll have to bug them about this. However, I'll bet they'll answer
>> that I have to tell them the back channel needs re-assignment
>> or something like that.
>>
>> I am pretty certain they are broken, in that the client needs to
>> retry all outstanding RPCs.
>>
>> For others, here's the long winded version of this that I just
>> put on the phabricator review:
>> In the server side kernel RPC, the socket (struct socket *) is in a
>> structure called SVCXPRT (normally pointed to by "xprt").
>> These structures are ref counted and the soclose() is done
>> when the ref. cnt goes to zero. My understanding is that
>> "struct socket *" is free'd by soclose() so this cannot be done
>> before the xprt ref. cnt goes to zero.
>>
>> For NFSv4.1/4.2 there is something called a back channel
>> which means that an "xprt" is used for server->client RPCs,
>> although the TCP connection is established by the client
>> to the server.
>> --> This back channel holds a ref cnt on "xprt" until the
>> client re-assigns it to a different TCP connection
>> via an operation called BindConnectionToSession
>> and the Linux client is not doing this soon enough,
>> it appears.
>>
>> So, the soclose() is delayed, which is why I think the
>> TCP connection gets stuck in CLOSE_WAIT and that is
>> why I've added the soshutdown(..SHUT_WR) calls,
>> which can happen before the client gets around to
>> re-assigning the back channel.
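For anyone wanting to check a server for this, the two stuck halves of the
connection are easy to spot from the command line (2049 is assumed to be the
nfsd port here):

# on the FreeBSD server: NFS connections sitting half-closed
netstat -an | egrep '\.2049 .*CLOSE_WAIT'

# on the Linux client: the matching other half
ss -tan state fin-wait-2 '( dport = :2049 )'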
>>
>> Thanks for your help with this Michael, rick
>>
>> Best regards
>> Michael
>>>
>>> rick
>>> ps: I can capture packets while doing this, if anyone has a use
>>> for them.
>>>
>>>
>>> ________________________________________
>>> From: owner-freebsd-net@freebsd.org <owner-freebsd-net@freebsd.org> on behalf of Youssef GHORBAL <youssef.ghorbal@pasteur.fr>
>>> Sent: Saturday, March 27, 2021 6:57 PM
>>> To: Jason Breitman
>>> Cc: Rick Macklem; freebsd-net@freebsd.org
>>> Subject: Re: NFS Mount Hangs
>>>
>>> CAUTION: This email originated from outside of the University of Guelph. Do not click links or open attachments unless you recognize the sender and know the content is safe. If in doubt, forward suspicious emails to IThelp@uoguelph.ca
>>>
>>>
>>> On 27 Mar 2021, at 13:20, Jason Breitman <jbreitman@tildenparkcapital.com> wrote:
>>>
>>> The issue happened again so we can say that disabling TSO and LRO on the NIC did not resolve this issue.
>>> # ifconfig lagg0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso
>>> # ifconfig lagg0
>>> lagg0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
>>>        options=8100b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWFILTER>
>>>
>>> We can also say that the sysctl settings did not resolve this issue.
>>>
>>> # sysctl net.inet.tcp.fast_finwait2_recycle=1
>>> net.inet.tcp.fast_finwait2_recycle: 0 -> 1
>>>
>>> # sysctl net.inet.tcp.finwait2_timeout=1000
>>> net.inet.tcp.finwait2_timeout: 60000 -> 1000
>>>
>>> I don't think those will do anything in your case since the FIN_WAIT2 sockets are on the client side and those sysctls are for BSD.
>>> By the way it seems that Linux automatically recycles TCP sessions in FIN_WAIT2 after 60 seconds (sysctl net.ipv4.tcp_fin_timeout)
>>>
>>> tcp_fin_timeout (integer; default: 60; since Linux 2.2)
>>>        This specifies how many seconds to wait for a final FIN
>>>        packet before the socket is forcibly closed. This is
>>>        strictly a violation of the TCP specification, but
>>>        required to prevent denial-of-service attacks. In Linux
>>>        2.2, the default value was 180.
>>>
>>> So I don't get why it gets stuck in the FIN_WAIT2 state anyway.
>>>
>>> You really need to have a packet capture during the outage (client and server side) so you'll get the over-the-wire chat and start speculating from there.
>>> No need to capture the beginning of the outage for now. All you have to do is run a tcpdump for 10 minutes or so when you notice a client stuck.
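Something along these lines should do for that capture (the interface name and
the pcap file name are placeholders; the host and port filter keeps the file small):

# run on either end once the hang is noticed, and let it go for ~10 minutes
tcpdump -i em0 -s 0 -w nfs-stuck.pcap host NFS.Server.IP.X and port 2049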
>>>
>>> * I have not rebooted the NFS Server nor have I restarted nfsd, but do not believe that is required as these settings are at the TCP level and I would expect new sessions to use the updated settings.
>>>
>>> The issue occurred after 5 days following a reboot of the client machines.
>>> I ran the capture information again to make use of the situation.
>>>
>>> #!/bin/sh
>>>
>>> while true
>>> do
>>>     /bin/date >> /tmp/nfs-hang.log
>>>     /bin/ps axHl | grep nfsd | grep -v grep >> /tmp/nfs-hang.log
>>>     /usr/bin/procstat -kk 2947 >> /tmp/nfs-hang.log
>>>     /usr/bin/procstat -kk 2944 >> /tmp/nfs-hang.log
>>>     /bin/sleep 60
>>> done
>>>
>>>
>>> On the NFS Server
>>> Active Internet connections (including servers)
>>> Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
>>> tcp4       0      0 NFS.Server.IP.X.2049   NFS.Client.IP.X.48286  CLOSE_WAIT
>>>
>>> On the NFS Client
>>> tcp        0      0 NFS.Client.IP.X:48286  NFS.Server.IP.X:2049   FIN_WAIT2
>>>
>>>
>>> You had also asked for the output below.
>>>
>>> # nfsstat -E -s
>>> BackChannelCtBindConnToSes
>>>             0          0
>>>
>>> # sysctl vfs.nfsd.request_space_throttle_count
>>> vfs.nfsd.request_space_throttle_count: 0
>>>
>>> I see that you are testing a patch and I look forward to seeing the results.
>>>
>>>
>>> Jason Breitman
>>>
>>>
>>> On Mar 21, 2021, at 6:21 PM, Rick Macklem <rmacklem@uoguelph.ca> wrote:
>>>
>>> Youssef GHORBAL <youssef.ghorbal@pasteur.fr> wrote:
>>>> Hi Jason,
>>>>
>>>>> On 17 Mar 2021, at 18:17, Jason Breitman <jbreitman@tildenparkcapital.com> wrote:
>>>>>
>>>>> Please review the details below and let me know if there is a setting that I should apply to my FreeBSD NFS Server or if there is a bug fix that I can apply to resolve my issue.
>>>>> I shared this information with the linux-nfs mailing list and they believe the issue is on the server side.
>>>>>
>>>>> Issue
>>>>> NFSv4 mounts periodically hang on the NFS Client.
>>>>>
>>>>> During this time, it is possible to manually mount from another NFS Server on the NFS Client having issues.
>>>>> Also, other NFS Clients are successfully mounting from the NFS Server in question.
>>>>> Rebooting the NFS Client appears to be the only solution.
>>>>
>>>> I had experienced a similar weird situation with periodically stuck Linux NFS clients mounting Isilon NFS servers (Isilon is FreeBSD based but they seem to have their own nfsd).
>>> Yes, my understanding is that Isilon uses a proprietary user space nfsd and
>>> not the kernel based RPC and nfsd in FreeBSD.
>>>
>>>> We've had better luck and we did manage to have packet captures on both sides during the issue. The gist of it goes as follows:
>>>>
>>>> - Data flows correctly between SERVER and the CLIENT
>>>> - At some point SERVER starts decreasing its TCP Receive Window until it reaches 0
>>>> - The client (eager to send data) can only ack data sent by SERVER.
>>>> - When SERVER was done sending data, the client starts sending TCP Window Probes hoping that the TCP Window opens again so it can flush its buffers.
>>>> - SERVER responds with a TCP Zero Window to those probes.
>>> Having the window size drop to zero is not necessarily incorrect.
>>> If the server is overloaded (has a backlog of NFS requests), it can stop doing
>>> soreceive() on the socket (so the socket rcv buffer can fill up and the TCP window
>>> closes). This results in "backpressure" to stop the NFS client from flooding the
>>> NFS server with requests.
>>> --> However, once the backlog is handled, the nfsd should start to soreceive()
>>> again and this should cause the window to open back up.
>>> --> Maybe this is broken in the socket/TCP code. I quickly got lost in
>>> tcp_output() when it decides what to do about the rcvwin.
>>>
>>>> - After 6 minutes (the NFS server default Idle timeout) SERVER gracefully closes the TCP connection, sending a FIN Packet (and still a TCP Window of 0)
>>> This probably does not happen for Jason's case, since the 6 minute timeout
>>> is disabled when the TCP connection is assigned as a backchannel (most likely
>>> the case for NFSv4.1).
>>>
>>>> - CLIENT ACKs that FIN.
>>>> - SERVER goes into FIN_WAIT_2 state.
>>>> - CLIENT closes its half of the socket and goes into LAST_ACK state.
>>>> - FIN is never sent by the client since there is still data in its SendQ and the receiver's TCP Window is still 0. At this stage the client starts sending TCP Window Probes again and again hoping that the server opens its TCP Window so it can flush its buffers and terminate its side of the socket.
>>>> - SERVER keeps responding with a TCP Zero Window to those probes.
>>>> => The last two steps go on and on for hours/days, freezing the NFS mount bound to that TCP session.
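For what it's worth, the zero-window half of that stalemate is easy to isolate in
a capture, since the raw TCP window field sits at offset 14 of the TCP header
(nfs-stuck.pcap is a placeholder file name; note that RSTs also carry a zero
window and will match as well):

# segments advertising a zero receive window (the server's replies to the probes)
tcpdump -n -r nfs-stuck.pcap 'tcp port 2049 and tcp[14:2] = 0'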
>>>>
>>>> If we had a situation where CLIENT was responsible for closing the TCP Window (and initiating the TCP FIN first) and the server wanted to send data, we'd end up in the same state as you, I think.
>>>>
>>>> We've never found the root cause of why the SERVER decided to close the TCP Window and no longer accept data; the fix on the Isilon part was to recycle the FIN_WAIT_2 sockets more aggressively (net.inet.tcp.fast_finwait2_recycle=1 & net.inet.tcp.finwait2_timeout=5000). Once the socket is recycled, at the next occurrence of a CLIENT TCP Window probe, SERVER sends a RST, triggering the teardown of the session on the client side, a new TCP handshake, etc. and traffic flows again (NFS starts responding).
>>>>
>>>> To avoid rebooting the client (and before the aggressive FIN_WAIT_2 recycling was implemented on the Isilon side) we've added a check script on the client that detects LAST_ACK sockets and, through an iptables rule, enforces a TCP RST. Something like: -A OUTPUT -p tcp -d $nfs_server_addr --sport $local_port -j REJECT --reject-with tcp-reset (the script removes this iptables rule as soon as the LAST_ACK disappears).
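A rough sketch of that kind of watchdog (not the original script), assuming
iptables and ss are available on the client; the server address and local port
are placeholders taken from the outputs earlier in this thread:

#!/bin/sh
# If the NFS socket is stuck in LAST_ACK, add an iptables rule that rejects the
# client's outgoing probes with a TCP RST, which tears the stuck session down
# locally; remove the rule again once the LAST_ACK socket is gone.
nfs_server_addr=NFS.Server.IP.X
local_port=48286

while true
do
    if ss -tan state last-ack | grep -q "$nfs_server_addr:2049"; then
        # -C checks whether the rule already exists before adding it
        iptables -C OUTPUT -p tcp -d $nfs_server_addr --sport $local_port -j REJECT --reject-with tcp-reset 2>/dev/null ||
        iptables -A OUTPUT -p tcp -d $nfs_server_addr --sport $local_port -j REJECT --reject-with tcp-reset
    else
        iptables -D OUTPUT -p tcp -d $nfs_server_addr --sport $local_port -j REJECT --reject-with tcp-reset 2>/dev/null
    fi
    sleep 60
done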
>>>>
>>>> The bottom line would be to have a packet capture during the outage (client and/or server side); it will show you at least the shape of the TCP exchange when NFS is stuck.
>>> Interesting story and good work w.r.t. sleuthing, Youssef, thanks.
>>>
>>> I looked at Jason's log and it shows everything is ok w.r.t the nfsd threads.
>>> (They're just waiting for RPC requests.)
>>> However, I do now think I know why the soclose() does not happen.
>>> When the TCP connection is assigned as a backchannel, that takes a reference
>>> cnt on the structure. This refcnt won't be released until the connection is
>>> replaced by a BindConnectionToSession operation from the client. But that won't
>>> happen until the client creates a new TCP connection.
>>> --> No refcnt release-->no refcnt of 0-->no soclose().
>>>
>>> I've created the attached patch (completely different from the previous one)
>>> that adds soshutdown(SHUT_WR) calls in the three places where the TCP
>>> connection is going away. This seems to get it past CLOSE_WAIT without a
>>> soclose().
>>> --> I know you are not comfortable with patching your server, but I do think
>>> this change will get the socket shutdown to complete.
>>>
>>> There are a couple more things you can check on the server...
>>> # nfsstat -E -s
>>> --> Look for the count under "BindConnToSes".
>>> --> If non-zero, backchannels have been assigned.
>>> # sysctl -a | fgrep request_space_throttle_count
>>> --> If non-zero, the server has been overloaded at some point.
>>>
>>> I think the attached patch might work around the problem.
>>> The code that should open up the receive window needs to be checked.
>>> I am also looking at enabling the 6 minute timeout when a backchannel is
>>> assigned.
>>>
>>> rick
>>>
>>> Youssef
>>>
>>> _______________________________________________
>>> freebsd-net@freebsd.org mailing list
>>> https://lists.freebsd.org/mailman/listinfo/freebsd-net
>>> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
>>> <xprtdied.patch>
>>>
>>> <nfs-hang.log.gz>