Date:      Sun, 4 Apr 2021 20:47:16 +0000
From:      Rick Macklem <rmacklem@uoguelph.ca>
To:        "Rodney W. Grimes" <freebsd-rwg@gndrsh.dnsmgr.net>
Cc:        "Scheffenegger, Richard" <Richard.Scheffenegger@netapp.com>, "tuexen@freebsd.org" <tuexen@freebsd.org>, Youssef GHORBAL <youssef.ghorbal@pasteur.fr>, "freebsd-net@freebsd.org" <freebsd-net@freebsd.org>
Subject:   Re: NFS Mount Hangs
Message-ID:  <YQXPR0101MB0968DD2B3FBF4476093885F0DD789@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>
In-Reply-To: <202104041727.134HRTbA097115@gndrsh.dnsmgr.net>
References:  <YQXPR0101MB09686B4F921B96DCAFEBF874DD789@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>, <202104041727.134HRTbA097115@gndrsh.dnsmgr.net>

Rodney W. Grimes wrote:
>And I'll follow the lead to top post, as I have been quietly following
>this thread, trying to only add when I think I have relevant input, and
>I think I do on a small point...
>
>Rick,
>        You're "unplugging" a cable to simulate network partitioning;
>in my experience this is a bad way to do that, as the host gets to
>see the link layer go down and knows it cannot send.
I unplug the server end and normally capture packets at the
Linux client end.
> I am actually
>puzzled that you see arp packets, but I guess those are getting
>picked off before the interface layer silently tosses them on
>the ground.  IIRC, due to this loss of link layer you may be
>masking some things that would occur in other situations, as often
>an error is returned to the application layer.  I.e., the ONLY packet
>you're likely to see onto an unplugged cable is "arp".
The FreeBSD server end, where I unplug, does not seem to notice
at the link level (thanks to the Intel net driver).
I do not even see a "loss of carrier" type message when I do it.

>I can suggest other means to partition, such as configuring a switch
>port in and out of the correct LAN/VLAN, or a physical switch in the TX
>pair to open it while leaving the RX pair intact so carrier is not lost.
My switch is just the NAT gateway the phone company provides.
I can log into it with a web browser, but have only done so once
in the 2.5 years since I got it.
It is also doing very important stuff during the testing, like streaming
The Mandalorian.

>Both of these simulate partitioning that is more realistic, AND do
>not have the side effect of allowing upper layers to eat the packets
>before bpf can grab them, or be told that partitioning has occurred.
Well, if others feel that something like the above would be useful, I might
try it.
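If I do try it, a firewall rule is probably the simplest way to get a partition that keeps the carrier up. A rough, untested sketch, assuming ipfw on the FreeBSD end (the rule numbers and client address are made up):

```shell
# Partition: silently drop all traffic to/from the Linux client,
# without touching link state (client address is a placeholder).
ipfw add 100 deny ip from any to 192.168.1.100
ipfw add 110 deny ip from 192.168.1.100 to any

# Heal the partition when the test is done:
ipfw delete 100 110
```

Unlike pulling the cable, the host never sees the link go down, so upper layers get no error returned.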

>Another side effect of unplugging a cable is that a host should
>immediately invalidate all ARP entries on that interface... hence
>you're getting into an "arp who-has" situation that should not even
>start for 5 minutes in the other failure modes.
The Linux client keeps spitting out arp queries, so that gets fixed
almost instantly when the cable gets plugged back in.

In general, the kernel RPC sees very little of what is going on.
Sometimes an EPIPE when it tries to use a socket after it has
closed down.

rick

Regards,
Rod

> Well, I'm going to cheat and top post, since this is related info. and
> not really part of the discussion...
>
> I've been testing network partitioning between a Linux client (5.2 kernel)
> and a FreeBSD-current NFS server. I have not gotten a solid hang, but
> I have had the Linux client doing "battle" with the FreeBSD server for
> several minutes after un-partitioning the connection.
>
> The battle basically consists of the Linux client sending an RST, followed
> by a SYN.
> The FreeBSD server ignores the RST and just replies with the same old ACK.
> --> This varies from "just a SYN" that succeeds to 100+ cycles of the above
>        over several minutes.
>
> I had thought that an RST was a "pretty heavy hammer", but FreeBSD seems
> pretty good at ignoring it.
>
> A full packet capture of one of these is in /home/rmacklem/linuxtofreenfs.pcap
> in case anyone wants to look at it.
>
> Here's a tcpdump snippet of the interesting part (see the *** comments):
> 19:10:09.305775 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [P.], seq 202585:202749, ack 212293, win 29128, options [nop,nop,TS val 2073636037 ecr 2671204825], length 164: NFS reply xid 613153685 reply ok 160 getattr NON 4 ids 0/33554432 sz 0
> 19:10:09.305850 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [.], ack 202749, win 501, options [nop,nop,TS val 2671204825 ecr 2073636037], length 0
> *** Network is now partitioned...
>
> 19:10:09.407840 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 212293:212525, ack 202749, win 501, options [nop,nop,TS val 2671204927 ecr 2073636037], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
> 19:10:09.615779 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 212293:212525, ack 202749, win 501, options [nop,nop,TS val 2671205135 ecr 2073636037], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
> 19:10:09.823780 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 212293:212525, ack 202749, win 501, options [nop,nop,TS val 2671205343 ecr 2073636037], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
> *** Lots of lines snipped.
>
>
> 19:13:41.295783 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
> 19:13:42.319767 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
> 19:13:46.351966 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
> 19:13:47.375790 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
> 19:13:48.399786 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
> *** Network is now unpartitioned...
>
> 19:13:48.399990 ARP, Reply nfsv4-new3.home.rick is-at d4:be:d9:07:81:72 (oui Unknown), length 46
> 19:13:48.400002 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 416692300, win 64240, options [mss 1460,sackOK,TS val 2671421871 ecr 0,nop,wscale 7], length 0
> 19:13:48.400185 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 212293, win 29127, options [nop,nop,TS val 2073855137 ecr 2671204825], length 0
> 19:13:48.400273 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [R], seq 964161458, win 0, length 0
> 19:13:49.423833 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 416692300, win 64240, options [mss 1460,sackOK,TS val 2671424943 ecr 0,nop,wscale 7], length 0
> 19:13:49.424056 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 212293, win 29127, options [nop,nop,TS val 2073856161 ecr 2671204825], length 0
> *** This "battle" goes on for 223sec...
>     I snipped out 13 cycles of this "Linux sends an RST, followed by SYN" /
>     "FreeBSD replies with same old ACK" exchange. In another test run I saw this
>     cycle continue non-stop for several minutes. This time, the Linux
>     client paused for a while (see ARPs below).
>
> 19:13:49.424101 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [R], seq 964161458, win 0, length 0
> 19:13:53.455867 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 416692300, win 64240, options [mss 1460,sackOK,TS val 2671428975 ecr 0,nop,wscale 7], length 0
> 19:13:53.455991 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 212293, win 29127, options [nop,nop,TS val 2073860193 ecr 2671204825], length 0
> *** Snipped a bunch of stuff out, mostly ARPs, plus one more RST.
>
> 19:16:57.775780 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
> 19:16:57.775937 ARP, Reply nfsv4-new3.home.rick is-at d4:be:d9:07:81:72 (oui Unknown), length 46
> 19:16:57.980240 ARP, Request who-has nfsv4-new3.home.rick tell 192.168.1.254, length 46
> 19:16:58.555663 ARP, Request who-has nfsv4-new3.home.rick tell 192.168.1.254, length 46
> 19:17:00.104701 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [F.], seq 202749, ack 212293, win 29128, options [nop,nop,TS val 2074046846 ecr 2671204825], length 0
> 19:17:15.664354 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [F.], seq 202749, ack 212293, win 29128, options [nop,nop,TS val 2074062406 ecr 2671204825], length 0
> 19:17:31.239246 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [R.], seq 202750, ack 212293, win 0, options [nop,nop,TS val 2074077981 ecr 2671204825], length 0
> *** FreeBSD finally acknowledges the RST 38sec after Linux sent the last
>     of 13 (100+ for another test run).
>
> 19:17:51.535979 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 4247692373, win 64240, options [mss 1460,sackOK,TS val 2671667055 ecr 0,nop,wscale 7], length 0
> 19:17:51.536130 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [S.], seq 661237469, ack 4247692374, win 65535, options [mss 1460,nop,wscale 6,sackOK,TS val 2074098278 ecr 2671667055], length 0
> *** Now back in business...
>
> 19:17:51.536218 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [.], ack 1, win 502, options [nop,nop,TS val 2671667055 ecr 2074098278], length 0
> 19:17:51.536295 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 1:233, ack 1, win 502, options [nop,nop,TS val 2671667056 ecr 2074098278], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
> 19:17:51.536346 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 233:505, ack 1, win 502, options [nop,nop,TS val 2671667056 ecr 2074098278], length 272: NFS request xid 697039765 132 getattr fh 0,1/53
> 19:17:51.536515 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 505, win 29128, options [nop,nop,TS val 2074098279 ecr 2671667056], length 0
> 19:17:51.536553 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 505:641, ack 1, win 502, options [nop,nop,TS val 2671667056 ecr 2074098279], length 136: NFS request xid 730594197 132 getattr fh 0,1/53
> 19:17:51.536562 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [P.], seq 1:49, ack 505, win 29128, options [nop,nop,TS val 2074098279 ecr 2671667056], length 48: NFS reply xid 697039765 reply ok 44 getattr ERROR: unk 10063
>
> This error 10063 after the partition heals is also "bad news". It indicates the Session
> (which is supposed to maintain "exactly once" RPC semantics) is broken. I'll admit I
> suspect a Linux client bug, but will be investigating further.
>
> So, hopefully TCP-conversant folk can confirm whether the above is correct behaviour
> or whether the RST should be ack'd sooner?
>
> I could also see this becoming a "forever" TCP battle for other versions of the Linux client.
>
> rick
>
>
> ________________________________________
> From: Scheffenegger, Richard <Richard.Scheffenegger@netapp.com>
> Sent: Sunday, April 4, 2021 7:50 AM
> To: Rick Macklem; tuexen@freebsd.org
> Cc: Youssef GHORBAL; freebsd-net@freebsd.org
> Subject: Re: NFS Mount Hangs
>
> CAUTION: This email originated from outside of the University of Guelph. Do not click links or open attachments unless you recognize the sender and know the content is safe. If in doubt, forward suspicious emails to IThelp@uoguelph.ca
>
>
> For what it's worth, SUSE found two bugs in the Linux nf_conntrack (stateful firewall) and in the pfifo-fast scheduler, which could conspire to make TCP sessions hang forever.
>
> One is a missed update when the client is not using the noresvport mount option, which makes the firewall think RSTs are illegal (and drop them).
>
> The fast scheduler can run into an issue if only a single packet should be forwarded (note that this is not the default scheduler, but often recommended for perf, as it runs lockless and at lower CPU cost than pfq (the default)). If no other/additional packet pushes out that last packet of a flow, it can become stuck forever...
>
> I can try getting the relevant bug info next week...
>
> ________________________________
> From: owner-freebsd-net@freebsd.org <owner-freebsd-net@freebsd.org> on behalf of Rick Macklem <rmacklem@uoguelph.ca>
> Sent: Friday, April 2, 2021 11:31:01 PM
> To: tuexen@freebsd.org <tuexen@freebsd.org>
> Cc: Youssef GHORBAL <youssef.ghorbal@pasteur.fr>; freebsd-net@freebsd.org <freebsd-net@freebsd.org>
> Subject: Re: NFS Mount Hangs
>
> NetApp Security WARNING: This is an external email. Do not click links or open attachments unless you recognize the sender and know the content is safe.
>
>
> tuexen@freebsd.org wrote:
> >> On 2. Apr 2021, at 02:07, Rick Macklem <rmacklem@uoguelph.ca> wrote:
> >>
> >> I hope you don't mind a top post...
> >> I've been testing network partitioning between the only Linux client
> >> I have (5.2 kernel) and a FreeBSD server with the xprtdied.patch
> >> (does soshutdown(..SHUT_WR) when it knows the socket is broken)
> >> applied to it.
> >>
> >> I'm not enough of a TCP guy to know if this is useful, but here's what
> >> I see...
> >>
> >> While partitioned:
> >> On the FreeBSD server end, the socket either goes to CLOSED during
> >> the network partition or stays ESTABLISHED.
> >If it goes to CLOSED you called shutdown(, SHUT_WR) and the peer also
> >sent a FIN, but you never called close() on the socket.
> >If the socket stays in ESTABLISHED, there is no communication ongoing,
> >I guess, and therefore the server does not even detect that the peer
> >is not reachable.
> >> On the Linux end, the socket seems to remain ESTABLISHED for a
> >> little while, and then disappears.
> >So how does Linux detect the peer is not reachable?
> Well, here's what I see in a packet capture at the Linux client once
> I partition it (just unplug the net cable):
> - lots of retransmits of the same segment (with ACK) for 54sec
> - then only ARP queries
>
> Once I plug the net cable back in:
> - ARP works
> - one more retransmit of the same segment
> - receives RST from FreeBSD
> ** So, is this now a "new" TCP connection, despite
>     using the same port#?
>     --> It matters for NFS, since "new connection"
>            implies "must retry all outstanding RPCs".
> - sends SYN
> - receives SYN, ACK from FreeBSD
> --> connection starts working again
> Always uses the same port#.
>
> On the FreeBSD server end:
> - receives the last retransmit of the segment (with ACK)
> - sends RST
> - receives SYN
> - sends SYN, ACK
>
> I thought that there was no RST in the capture I looked at
> yesterday, so I'm not sure if FreeBSD always sends an RST,
> but the Linux client behaviour was the same. (Sent a SYN, etc.)
> The socket disappears from the Linux "netstat -a" and I
> suspect that happens after about 54sec, but I am not sure
> about the timing.
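For anyone who wants to watch the same thing, a capture like the ones above can be grabbed on the client with something like the following (the interface and file names are placeholders to adjust for your setup):

```shell
# Capture the NFS TCP traffic plus ARP around the partition event;
# -s 0 records full frames so the NFS XIDs decode in the output.
tcpdump -i eth0 -s 0 -w nfs-partition.pcap \
    'host nfsv4-new3.home.rick and (port 2049 or arp)'
```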
>
> >>
> >> After unpartitioning:
> >> On the FreeBSD server end, you get another socket showing up at
> >> the same port#:
> >> Active Internet connections (including servers)
> >> Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
> >> tcp4       0      0 nfsv4-new3.nfsd        nfsv4-linux.678        ESTABLISHED
> >> tcp4       0      0 nfsv4-new3.nfsd        nfsv4-linux.678        CLOSED
> >>
> >> The Linux client shows the same connection ESTABLISHED.
> But it disappears from "netstat -a" for a while during the partitioning.
>
> >> (The mount sometimes reports an error. I haven't looked at packet
> >> traces to see if it retries RPCs or why the errors occur.)
> I have now done so, as above.
>
> >> --> However I never get hangs.
> >> Sometimes it goes to SYN_SENT for a while and the FreeBSD server
> >> shows FIN_WAIT_1, but then both ends go to ESTABLISHED and the
> >> mount starts working again.
> >>
> >> The most obvious thing is that the Linux client always keeps using
> >> the same port#. (The FreeBSD client will use a different port# when
> >> it does a TCP reconnect after no response from the NFS server for
> >> a little while.)
> >>
> >> What do those TCP-conversant folk think?
> >I guess you are never calling close() on the socket for which
> >the connection state is CLOSED.
> Ok, that makes sense. For this case the Linux client has not done a
> BindConnectionToSession to re-assign the back channel.
> I'll have to bug them about this. However, I'll bet they'll answer
> that I have to tell them the back channel needs re-assignment
> or something like that.
>
> I am pretty certain they are broken, in that the client needs to
> retry all outstanding RPCs.
> For others, here's the long-winded version of this that I just
> put on the phabricator review:
>   In the server side kernel RPC, the socket (struct socket *) is in a
>   structure called SVCXPRT (normally pointed to by "xprt").
>   These structures are ref counted and the soclose() is done
>   when the ref. cnt goes to zero. My understanding is that
>   "struct socket *" is free'd by soclose() so this cannot be done
>   before the xprt ref. cnt goes to zero.
>
>   For NFSv4.1/4.2 there is something called a back channel,
>   which means that an "xprt" is used for server->client RPCs,
>   although the TCP connection is established by the client
>   to the server.
>   --> This back channel holds a ref cnt on "xprt" until the
>       client re-assigns it to a different TCP connection
>       via an operation called BindConnectionToSession,
>       and the Linux client is not doing this soon enough,
>       it appears.
>
>   So, the soclose() is delayed, which is why I think the
>   TCP connection gets stuck in CLOSE_WAIT and that is
>   why I've added the soshutdown(..SHUT_WR) calls,
>   which can happen before the client gets around to
>   re-assigning the back channel.
>
> Thanks for your help with this Michael, rick
>
> Best regards
> Michael
> >
> > rick
> > ps: I can capture packets while doing this, if anyone has a use
> >      for them.
> >
> > ________________________________________
> > From: owner-freebsd-net@freebsd.org <owner-freebsd-net@freebsd.org> on behalf of Youssef GHORBAL <youssef.ghorbal@pasteur.fr>
> > Sent: Saturday, March 27, 2021 6:57 PM
> > To: Jason Breitman
> > Cc: Rick Macklem; freebsd-net@freebsd.org
> > Subject: Re: NFS Mount Hangs
> >
> >
> >
> > On 27 Mar 2021, at 13:20, Jason Breitman <jbreitman@tildenparkcapital.com> wrote:
> >
> > The issue happened again, so we can say that disabling TSO and LRO on the NIC did not resolve this issue.
> > # ifconfig lagg0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso
> > # ifconfig lagg0
> > lagg0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
> >        options=8100b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWFILTER>
> >
> > We can also say that the sysctl settings did not resolve this issue.
> >
> > # sysctl net.inet.tcp.fast_finwait2_recycle=1
> > net.inet.tcp.fast_finwait2_recycle: 0 -> 1
> >
> > # sysctl net.inet.tcp.finwait2_timeout=1000
> > net.inet.tcp.finwait2_timeout: 60000 -> 1000
> >
> > I don't think those will do anything in your case, since the FIN_WAIT2 sockets are on the client side and those sysctls are for BSD.
> > By the way, it seems that Linux automatically recycles TCP sessions in FIN_WAIT2 after 60 seconds (sysctl net.ipv4.tcp_fin_timeout):
> >
> > tcp_fin_timeout (integer; default: 60; since Linux 2.2)
> >              This specifies how many seconds to wait for a final FIN
> >              packet before the socket is forcibly closed.  This is
> >              strictly a violation of the TCP specification, but
> >              required to prevent denial-of-service attacks.  In Linux
> >              2.2, the default value was 180.
> >
> > So I don't get why it gets stuck in the FIN_WAIT2 state anyway.
> >
> > You really need to have a packet capture during the outage (client and server side) so you'll get the over-the-wire chat and can start speculating from there.
> > No need to capture the beginning of the outage for now. All you have to do is run a tcpdump for 10 minutes or so when you notice a client stuck.
> >
> > * I have not rebooted the NFS Server nor have I restarted nfsd, but do not believe that is required, as these settings are at the TCP level and I would expect new sessions to use the updated settings.
> >
> > The issue occurred after 5 days following a reboot of the client machines.
> > I ran the capture information again to make use of the situation.
> >
> > #!/bin/sh
> >
> > while true
> > do
> >  /bin/date >> /tmp/nfs-hang.log
> >  /bin/ps axHl | grep nfsd | grep -v grep >> /tmp/nfs-hang.log
> >  /usr/bin/procstat -kk 2947 >> /tmp/nfs-hang.log
> >  /usr/bin/procstat -kk 2944 >> /tmp/nfs-hang.log
> >  /bin/sleep 60
> > done
> >
> >
> > On the NFS Server:
> > Active Internet connections (including servers)
> > Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
> > tcp4       0      0 NFS.Server.IP.X.2049      NFS.Client.IP.X.48286     CLOSE_WAIT
> >
> > On the NFS Client:
> > tcp        0      0 NFS.Client.IP.X:48286      NFS.Server.IP.X:2049       FIN_WAIT2
> >
> >
> > You had also asked for the output below.
> >
> > # nfsstat -E -s
> > BackChannelCt  BindConnToSes
> >             0              0
> >
> > # sysctl vfs.nfsd.request_space_throttle_count
> > vfs.nfsd.request_space_throttle_count: 0
> >
> > I see that you are testing a patch and I look forward to seeing the results.
> >
> >
> > Jason Breitman
> >
> >
> > On Mar 21, 2021, at 6:21 PM, Rick Macklem <rmacklem@uoguelph.ca> wrote:
> >
> > Youssef GHORBAL <youssef.ghorbal@pasteur.fr> wrote:
> >> Hi Jason,
> >>
> >>> On 17 Mar 2021, at 18:17, Jason Breitman <jbreitman@tildenparkcapital.com> wrote:
> >>>
> >>> Please review the details below and let me know if there is a setting that I should apply to my FreeBSD NFS Server or if there is a bug fix that I can apply to resolve my issue.
> >>> I shared this information with the linux-nfs mailing list and they believe the issue is on the server side.
> >>>
> >>> Issue
> >>> NFSv4 mounts periodically hang on the NFS Client.
> >>>
> >>> During this time, it is possible to manually mount from another NFS Server on the NFS Client having issues.
> >>> Also, other NFS Clients are successfully mounting from the NFS Server in question.
> >>> Rebooting the NFS Client appears to be the only solution.
> >>
> >> I had experienced a similar weird situation with periodically stuck Linux NFS clients mounting Isilon NFS servers (Isilon is FreeBSD based but they seem to have their own nfsd).
> > Yes, my understanding is that Isilon uses a proprietary user space nfsd and
> > not the kernel based RPC and nfsd in FreeBSD.
> >
> >> We've had better luck and we did manage to have packet captures on both sides during the issue. The gist of it goes as follows:
> >>
> >> - Data flows correctly between SERVER and the CLIENT.
> >> - At some point SERVER starts decreasing its TCP Receive Window until it reaches 0.
> >> - The client (eager to send data) can only ack data sent by SERVER.
> >> - When SERVER was done sending data, the client starts sending TCP Window Probes hoping that the TCP Window opens again so it can flush its buffers.
> >> - SERVER responds with a TCP Zero Window to those probes.
> > Having the window size drop to zero is not necessarily incorrect.
> > If the server is overloaded (has a backlog of NFS requests), it can stop doing
> > soreceive() on the socket (so the socket rcv buffer can fill up and the TCP window
> > closes). This results in "backpressure" to stop the NFS client from flooding the
> > NFS server with requests.
> > --> However, once the backlog is handled, the nfsd should start to soreceive()
> > again and this should cause the window to open back up.
> > --> Maybe this is broken in the socket/TCP code. I quickly got lost in
> > tcp_output() when it decides what to do about the rcvwin.
> >
> >> - After 6 minutes (the NFS server default Idle timeout) SERVER gracefully closes the TCP connection, sending a FIN Packet (and still a TCP Window of 0).
> > This probably does not happen for Jason's case, since the 6 minute timeout
> > is disabled when the TCP connection is assigned as a backchannel (most likely
> > the case for NFSv4.1).
> >
> >> - CLIENT ACKs that FIN.
> >> - SERVER goes into FIN_WAIT_2 state.
> >> - CLIENT closes its half of the socket and goes into LAST_ACK state.
> >> - FIN is never sent by the client, since there is still data in its SendQ and the receiver's TCP Window is still 0. At this stage the client starts sending TCP Window Probes again and again, hoping that the server opens its TCP Window so it can flush its buffers and terminate its side of the socket.
> >> - SERVER keeps responding with a TCP Zero Window to those probes.
> >> => The last two steps go on and on for hours/days, freezing the NFS mount bound to that TCP session.
> >>
> >> If we had a situation where CLIENT was responsible for closing the TCP Window (and initiating the TCP FIN first) and the server wanted to send data, we'd end up in the same state as you, I think.
> >>
> >> We've never found the root cause of why the SERVER decided to close the TCP Window and no longer accept data; the fix on the Isilon part was to recycle the FIN_WAIT_2 sockets more aggressively (net.inet.tcp.fast_finwait2_recycle=1 & net.inet.tcp.finwait2_timeout=5000). Once the socket is recycled, at the next occurrence of a CLIENT TCP Window probe, SERVER sends an RST, triggering the teardown of the session on the client side, a new TCP handshake, etc., and traffic flows again (NFS starts responding).
> >>
> >> To avoid rebooting the client (and before the aggressive FIN_WAIT_2 recycling was implemented on the Isilon side) we've added a check script on the client that detects LAST_ACK sockets on the client and through an iptables rule enforces a TCP RST. Something like: -A OUTPUT -p tcp -d $nfs_server_addr --sport $local_port -j REJECT --reject-with tcp-reset (the script removes this iptables rule as soon as the LAST_ACK disappears).
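A minimal sketch of the detection half of such a check script might look like this (untested; the server address is a placeholder, and the real script would also remove the rule again once LAST_ACK disappears):

```shell
#!/bin/sh
# Sketch of the LAST_ACK watchdog described above (placeholder
# address; adjust nfs_server_addr for the real server).
nfs_server_addr="192.0.2.10"

# Succeeds when stdin (e.g. "netstat -tn" output) contains a
# LAST_ACK socket to the NFS server.
has_last_ack() {
    grep "$nfs_server_addr" | grep -q "LAST_ACK"
}

# Real use would poll and toggle the RST rule, e.g.:
#   if netstat -tn | has_last_ack; then
#       iptables -A OUTPUT -p tcp -d "$nfs_server_addr" \
#           -j REJECT --reject-with tcp-reset
#   fi
```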
> >>
> >> The bottom line would be to have a packet capture during the outage (client and/or server side); it will show you at least the shape of the TCP exchange when NFS is stuck.
> > Interesting story and good work w.r.t. sleuthing, Youssef, thanks.
> >
> > I looked at Jason's log and it shows everything is ok w.r.t. the nfsd threads.
> > (They're just waiting for RPC requests.)
> > However, I do now think I know why the soclose() does not happen.
> > When the TCP connection is assigned as a backchannel, that takes a reference
> > cnt on the structure. This refcnt won't be released until the connection is
> > replaced by a BindConnectionToSession operation from the client. But that won't
> > happen until the client creates a new TCP connection.
> > --> No refcnt release --> no refcnt of 0 --> no soclose().
> >
> > I've created the attached patch (completely different from the previous one)
> > that adds soshutdown(SHUT_WR) calls in the three places where the TCP
> > connection is going away. This seems to get it past CLOSE_WAIT without a
> > soclose().
> > --> I know you are not comfortable with patching your server, but I do think
> > this change will get the socket shutdown to complete.
> >
> > There are a couple more things you can check on the server...
> > # nfsstat -E -s
> > --> Look for the count under "BindConnToSes".
> > --> If non-zero, backchannels have been assigned.
> > # sysctl -a | fgrep request_space_throttle_count
> > --> If non-zero, the server has been overloaded at some point.
> >
> > I think the attached patch might work around the problem.
> > The code that should open up the receive window needs to be checked.
> > I am also looking at enabling the 6 minute timeout when a backchannel is
> > assigned.
> >
> > rick
> >
> > Youssef
> >
> > _______________________________________________
> > freebsd-net@freebsd.org mailing list
> > https://lists.freebsd.org/mailman/listinfo/freebsd-net
> > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
> > <xprtdied.patch>
> >
> > <nfs-hang.log.gz>
> >
>

--
Rod Grimes                                                 rgrimes@freebsd.org


