Date:      Fri, 2 Apr 2021 21:31:01 +0000
From:      Rick Macklem <rmacklem@uoguelph.ca>
To:        "tuexen@freebsd.org" <tuexen@freebsd.org>
Cc:        Youssef GHORBAL <youssef.ghorbal@pasteur.fr>, Jason Breitman <jbreitman@tildenparkcapital.com>, "freebsd-net@freebsd.org" <freebsd-net@freebsd.org>
Subject:   Re: NFS Mount Hangs
Message-ID:  <YQXPR0101MB0968E0A17D8BCACFAF132225DD7A9@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>
In-Reply-To: <DEF8564D-0FE9-4C2C-9F3B-9BCDD423377C@freebsd.org>
References:  <C643BB9C-6B61-4DAC-8CF9-CE04EA7292D0@tildenparkcapital.com> <3750001D-3F1C-4D9A-A9D9-98BCA6CA65A4@tildenparkcapital.com> <33693DE3-7FF8-4FAB-9A75-75576B88A566@tildenparkcapital.com> <D67AF317-D238-4EC0-8C7F-22D54AD5144C@pasteur.fr> <YQXPR0101MB09684AB7BEFA911213604467DD669@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM> <C87066D3-BBF1-44E1-8398-E4EB6903B0F2@tildenparkcapital.com> <8E745920-1092-4312-B251-B49D11FE8028@pasteur.fr> <YQXPR0101MB0968C44C7C82A3EB64F384D0DD7B9@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>, <DEF8564D-0FE9-4C2C-9F3B-9BCDD423377C@freebsd.org>

tuexen@freebsd.org wrote:
>> On 2. Apr 2021, at 02:07, Rick Macklem <rmacklem@uoguelph.ca> wrote:
>>
>> I hope you don't mind a top post...
>> I've been testing network partitioning between the only Linux client
>> I have (5.2 kernel) and a FreeBSD server with the xprtdied.patch
>> (does soshutdown(..SHUT_WR) when it knows the socket is broken)
>> applied to it.
>>
>> I'm not enough of a TCP guy to know if this is useful, but here's what
>> I see...
>>
>> While partitioned:
>> On the FreeBSD server end, the socket either goes to CLOSED during
>> the network partition or stays ESTABLISHED.
>If it goes to CLOSED you called shutdown(, SHUT_WR) and the peer also
>sent a FIN, but you never called close() on the socket.
>If the socket stays in ESTABLISHED, there is no communication ongoing,
>I guess, and therefore the server does not even detect that the peer
>is not reachable.
>> On the Linux end, the socket seems to remain ESTABLISHED for a
>> little while, and then disappears.
>So how does Linux detect the peer is not reachable?
Well, here's what I see in a packet capture in the Linux client once
I partition it (just unplug the net cable):
- lots of retransmits of the same segment (with ACK) for 54sec
- then only ARP queries

Once I plug the net cable back in:
- ARP works
- one more retransmit of the same segment
- receives RST from FreeBSD
** So, is this now a "new" TCP connection, despite
    using the same port#?
    --> It matters for NFS, since "new connection"
           implies "must retry all outstanding RPCs".
- sends SYN
- receives SYN, ACK from FreeBSD
--> connection starts working again
Always uses the same port#.

On the FreeBSD server end:
- receives the last retransmit of the segment (with ACK)
- sends RST
- receives SYN
- sends SYN, ACK

I thought that there was no RST in the capture I looked at
yesterday, so I'm not sure if FreeBSD always sends an RST,
but the Linux client behaviour was the same. (Sent a SYN, etc.)
The socket disappears from the Linux "netstat -a" and I
suspect that happens after about 54sec, but I am not sure
about the timing.
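
(If anyone wants to reproduce this kind of capture on the Linux client,
something like the following should do it; "eth0" and "nfs-server" are
placeholders for the local interface and the server's name:)

# Capture the NFS traffic while unplugging/replugging the net cable.
tcpdump -i eth0 -s 0 -w /tmp/nfs-partition.pcap host nfs-server and port 2049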

>>
>> After unpartitioning:
>> On the FreeBSD server end, you get another socket showing up at
>> the same port#
>> Active Internet connections (including servers)
>> Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
>> tcp4       0      0 nfsv4-new3.nfsd        nfsv4-linux.678        ESTABLISHED
>> tcp4       0      0 nfsv4-new3.nfsd        nfsv4-linux.678        CLOSED
>>
>> The Linux client shows the same connection ESTABLISHED.
But it disappears from "netstat -a" for a while during the partitioning.

>> (The mount sometimes reports an error. I haven't looked at packet
>> traces to see if it retries RPCs or why the errors occur.)
I have now done so, as above.

>> --> However I never get hangs.
>> Sometimes it goes to SYN_SENT for a while and the FreeBSD server
>> shows FIN_WAIT_1, but then both ends go to ESTABLISHED and the
>> mount starts working again.
>>
>> The most obvious thing is that the Linux client always keeps using
>> the same port#. (The FreeBSD client will use a different port# when
>> it does a TCP reconnect after no response from the NFS server for
>> a little while.)
>>
>> What do those TCP conversants think?
>I guess you are never calling close() on the socket for which
>the connection state is CLOSED.
Ok, that makes sense. For this case the Linux client has not done a
BindConnectionToSession to re-assign the back channel.
I'll have to bug them about this. However, I'll bet they'll answer
that I have to tell them the back channel needs re-assignment
or something like that.

I am pretty certain they are broken, in that the client needs to
retry all outstanding RPCs.

For others, here's the long-winded version of this that I just
put on the phabricator review:
  In the server side kernel RPC, the socket (struct socket *) is in a
  structure called SVCXPRT (normally pointed to by "xprt").
  These structures are ref counted and the soclose() is done
  when the ref. cnt goes to zero. My understanding is that
  "struct socket *" is free'd by soclose() so this cannot be done
  before the xprt ref. cnt goes to zero.

  For NFSv4.1/4.2 there is something called a back channel,
  which means that an "xprt" is used for server->client RPCs,
  although the TCP connection is established by the client
  to the server.
  --> This back channel holds a ref cnt on "xprt" until the
      client re-assigns it to a different TCP connection
      via an operation called BindConnectionToSession,
      and the Linux client is not doing this soon enough,
      it appears.

  So, the soclose() is delayed, which is why I think the
  TCP connection gets stuck in CLOSE_WAIT and that is
  why I've added the soshutdown(..SHUT_WR) calls,
  which can happen before the client gets around to
  re-assigning the back channel.
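
(For anyone who wants to watch for this on the FreeBSD server, a rough
sketch follows; the nfsstat column layout may differ slightly and the
interval is arbitrary, so treat it as a starting point, not a recipe:)

#!/bin/sh
# Log nfsd TCP connections stuck in CLOSE_WAIT plus the back channel
# bind counts, once a minute.
while true
do
        date >> /tmp/nfs-closewait.log
        netstat -an | grep '\.2049 ' | grep CLOSE_WAIT >> /tmp/nfs-closewait.log
        nfsstat -E -s | grep -A 1 BindConnToSes >> /tmp/nfs-closewait.log
        sleep 60
done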

Thanks for your help with this Michael, rick

Best regards
Michael
>
> rick
> ps: I can capture packets while doing this, if anyone has a use
>      for them.
>
> ________________________________________
> From: owner-freebsd-net@freebsd.org <owner-freebsd-net@freebsd.org> on behalf of Youssef GHORBAL <youssef.ghorbal@pasteur.fr>
> Sent: Saturday, March 27, 2021 6:57 PM
> To: Jason Breitman
> Cc: Rick Macklem; freebsd-net@freebsd.org
> Subject: Re: NFS Mount Hangs
>
> On 27 Mar 2021, at 13:20, Jason Breitman <jbreitman@tildenparkcapital.com> wrote:
>
> The issue happened again so we can say that disabling TSO and LRO on the NIC did not resolve this issue.
> # ifconfig lagg0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso
> # ifconfig lagg0
> lagg0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
>        options=8100b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWFILTER>
>
> We can also say that the sysctl settings did not resolve this issue.
>
> # sysctl net.inet.tcp.fast_finwait2_recycle=1
> net.inet.tcp.fast_finwait2_recycle: 0 -> 1
>
> # sysctl net.inet.tcp.finwait2_timeout=1000
> net.inet.tcp.finwait2_timeout: 60000 -> 1000
>
> I don't think those will do anything in your case since the FIN_WAIT2 sockets are on the client side and those sysctls are for BSD.
> By the way, it seems that Linux automatically recycles TCP sessions in FIN_WAIT2 after 60 seconds (sysctl net.ipv4.tcp_fin_timeout):
>
> tcp_fin_timeout (integer; default: 60; since Linux 2.2)
>              This specifies how many seconds to wait for a final FIN
>              packet before the socket is forcibly closed.  This is
>              strictly a violation of the TCP specification, but
>              required to prevent denial-of-service attacks.  In Linux
>              2.2, the default value was 180.
>
> So I don't get why it gets stuck in the FIN_WAIT2 state anyway.
>
> You really need to have a packet capture during the outage (client and server side) so you'll get the over-the-wire chat and can start speculating from there.
> No need to capture the beginning of the outage for now. All you have to do is run a tcpdump for 10 minutes or so when you notice a client stuck.
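>
> For example, a ring-buffer capture on the server keeps only the most recent
> traffic on disk until a client gets stuck (the interface name, client
> address, file size and file count below are placeholders/approximate):
>
> # Rotate through 6 files of ~100 MB each, overwriting the oldest, so the
> # minutes around a hang are always available.
> tcpdump -i em0 -s 0 -C 100 -W 6 -w /var/tmp/nfs-hang.pcap \
>     host NFS.Client.IP.X and port 2049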
>
> * I have not rebooted the NFS Server nor have I restarted nfsd, but do not believe that is required as these settings are at the TCP level and I would expect new sessions to use the updated settings.
>
> The issue occurred after 5 days following a reboot of the client machines.
> I ran the capture information again to make use of the situation.
>
> #!/bin/sh
>
> while true
> do
>  /bin/date >> /tmp/nfs-hang.log
>  /bin/ps axHl | grep nfsd | grep -v grep >> /tmp/nfs-hang.log
>  /usr/bin/procstat -kk 2947 >> /tmp/nfs-hang.log
>  /usr/bin/procstat -kk 2944 >> /tmp/nfs-hang.log
>  /bin/sleep 60
> done
>
> On the NFS Server
> Active Internet connections (including servers)
> Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
> tcp4       0      0 NFS.Server.IP.X.2049      NFS.Client.IP.X.48286     CLOSE_WAIT
>
> On the NFS Client
> tcp        0      0 NFS.Client.IP.X:48286      NFS.Server.IP.X:2049       FIN_WAIT2
>
> You had also asked for the output below.
>
> # nfsstat -E -s
> BackChannelCt BindConnToSes
>             0             0
>
> # sysctl vfs.nfsd.request_space_throttle_count
> vfs.nfsd.request_space_throttle_count: 0
>
> I see that you are testing a patch and I look forward to seeing the results.
>
> Jason Breitman
>
> On Mar 21, 2021, at 6:21 PM, Rick Macklem <rmacklem@uoguelph.ca> wrote:
>
> Youssef GHORBAL <youssef.ghorbal@pasteur.fr> wrote:
>> Hi Jason,
>>
>>> On 17 Mar 2021, at 18:17, Jason Breitman <jbreitman@tildenparkcapital.com> wrote:
>>>
>>> Please review the details below and let me know if there is a setting that I should apply to my FreeBSD NFS Server or if there is a bug fix that I can apply to resolve my issue.
>>> I shared this information with the linux-nfs mailing list and they believe the issue is on the server side.
>>>
>>> Issue
>>> NFSv4 mounts periodically hang on the NFS Client.
>>>
>>> During this time, it is possible to manually mount from another NFS Server on the NFS Client having issues.
>>> Also, other NFS Clients are successfully mounting from the NFS Server in question.
>>> Rebooting the NFS Client appears to be the only solution.
>>
>> I had experienced a similar weird situation with periodically stuck Linux NFS clients mounting Isilon NFS servers (Isilon is FreeBSD based but they seem to have their own nfsd).
> Yes, my understanding is that Isilon uses a proprietary user space nfsd and
> not the kernel based RPC and nfsd in FreeBSD.
>
>> We've had better luck and we did manage to have packet captures on both sides during the issue. The gist of it goes as follows:
>>
>> - Data flows correctly between SERVER and the CLIENT
>> - At some point SERVER starts decreasing its TCP Receive Window until it reaches 0
>> - The client (eager to send data) can only ack data sent by SERVER.
>> - When SERVER was done sending data, the client starts sending TCP Window Probes hoping that the TCP Window opens again so it can flush its buffers.
>> - SERVER responds with a TCP Zero Window to those probes.
> Having the window size drop to zero is not necessarily incorrect.
> If the server is overloaded (has a backlog of NFS requests), it can stop doing
> soreceive() on the socket (so the socket rcv buffer can fill up and the TCP window
> closes). This results in "backpressure" to stop the NFS client from flooding the
> NFS server with requests.
> --> However, once the backlog is handled, the nfsd should start to soreceive()
> again and this should cause the window to open back up.
> --> Maybe this is broken in the socket/TCP code. I quickly got lost in
> tcp_output() when it decides what to do about the rcvwin.
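>
> (Aside: if you want to find these in a capture, the TCP window field is
> bytes 14-15 of the TCP header, so a read filter like the one below pulls
> out the zero-window segments; the capture file name is a placeholder.)
>
> # Show segments advertising a zero receive window, ignoring RSTs
> # (which always carry a zero window).
> tcpdump -r nfs-hang.pcap 'tcp[14:2] = 0 and tcp[tcpflags] & tcp-rst = 0'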
>
>> - After 6 minutes (the NFS server default Idle timeout) SERVER gracefully closes the TCP connection, sending a FIN packet (and still a TCP Window of 0).
> This probably does not happen for Jason's case, since the 6 minute timeout
> is disabled when the TCP connection is assigned as a backchannel (most likely
> the case for NFSv4.1).
>
>> - CLIENT ACKs that FIN.
>> - SERVER goes into FIN_WAIT_2 state.
>> - CLIENT closes its half of the socket and goes into LAST_ACK state.
>> - FIN is never sent by the client since there is still data in its SendQ and the receiver's TCP Window is still 0. At this stage the client starts sending TCP Window Probes again and again, hoping that the server opens its TCP Window so it can flush its buffers and terminate its side of the socket.
>> - SERVER keeps responding with a TCP Zero Window to those probes.
>> => The last two steps go on and on for hours/days, freezing the NFS mount bound to that TCP session.
>>
>> If we had a situation where CLIENT was responsible for closing the TCP Window (and initiating the TCP FIN first) and the server wanting to send data, we'd end up in the same state as you, I think.
>>
>> We've never found the root cause of why the SERVER decided to close the TCP Window and no longer accept data; the fix on the Isilon part was to recycle the FIN_WAIT_2 sockets more aggressively (net.inet.tcp.fast_finwait2_recycle=1 & net.inet.tcp.finwait2_timeout=5000). Once the socket is recycled, at the next occurrence of a CLIENT TCP Window probe, SERVER sends a RST, triggering the teardown of the session on the client side, a new TCP handshake, etc., and traffic flows again (NFS starts responding).
>>
>> To avoid rebooting the client (and before the aggressive FIN_WAIT_2 recycling was implemented on the Isilon side) we've added a check script on the client that detects LAST_ACK sockets and, through an iptables rule, enforces a TCP RST. Something like: -A OUTPUT -p tcp -d $nfs_server_addr --sport $local_port -j REJECT --reject-with tcp-reset (the script removes this iptables rule as soon as the LAST_ACK disappears). A sketch of such a script is below.
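>>
>> (Rough sketch of that idea; the LAST_ACK detection, variable names and
>> timings are approximate and from memory, so adapt before using it:)
>>
>> #!/bin/sh
>> # If a socket to the NFS server is stuck in LAST_ACK, add an iptables rule
>> # that answers further output on that source port with a TCP RST, then
>> # remove the rule once the LAST_ACK socket is gone.
>> nfs_server_addr="NFS.Server.IP.X"
>> while true; do
>>     port=$(netstat -tan | awk -v s="$nfs_server_addr" \
>>         '$6 == "LAST_ACK" && index($5, s) { split($4, a, ":"); print a[2]; exit }')
>>     if [ -n "$port" ]; then
>>         iptables -A OUTPUT -p tcp -d "$nfs_server_addr" --sport "$port" \
>>             -j REJECT --reject-with tcp-reset
>>         while netstat -tan | grep ":$port " | grep -q LAST_ACK; do
>>             sleep 5
>>         done
>>         iptables -D OUTPUT -p tcp -d "$nfs_server_addr" --sport "$port" \
>>             -j REJECT --reject-with tcp-reset
>>     fi
>>     sleep 60
>> done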
>>
>> The bottom line would be to have a packet capture during the outage (client and/or server side); it will show you at least the shape of the TCP exchange when NFS is stuck.
> Interesting story and good work w.r.t. sleuthing, Youssef, thanks.
>
> I looked at Jason's log and it shows everything is ok w.r.t. the nfsd threads.
> (They're just waiting for RPC requests.)
> However, I do now think I know why the soclose() does not happen.
> When the TCP connection is assigned as a backchannel, that takes a reference
> cnt on the structure. This refcnt won't be released until the connection is
> replaced by a BindConnectionToSession operation from the client. But that won't
> happen until the client creates a new TCP connection.
> --> No refcnt release-->no refcnt of 0-->no soclose().
>
> I've created the attached patch (completely different from the previous one)
> that adds soshutdown(SHUT_WR) calls in the three places where the TCP
> connection is going away. This seems to get it past CLOSE_WAIT without a
> soclose().
> --> I know you are not comfortable with patching your server, but I do think
> this change will get the socket shutdown to complete.
>
> There are a couple more things you can check on the server...
> # nfsstat -E -s
> --> Look for the count under "BindConnToSes".
> --> If non-zero, backchannels have been assigned.
> # sysctl -a | fgrep request_space_throttle_count
> --> If non-zero, the server has been overloaded at some point.
>
> I think the attached patch might work around the problem.
> The code that should open up the receive window needs to be checked.
> I am also looking at enabling the 6 minute timeout when a backchannel is
> assigned.
>
> rick
>
> Youssef
>
> _______________________________________________
> freebsd-net@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-net
> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
> <xprtdied.patch>
>
> <nfs-hang.log.gz>
>


