Date: Sat, 27 Mar 2021 08:20:07 -0400
From: Jason Breitman <jbreitman@tildenparkcapital.com>
To: Rick Macklem <rmacklem@uoguelph.ca>
Cc: Youssef GHORBAL <youssef.ghorbal@pasteur.fr>, "freebsd-net@freebsd.org" <freebsd-net@freebsd.org>
Subject: Re: NFS Mount Hangs
Message-ID: <C87066D3-BBF1-44E1-8398-E4EB6903B0F2@tildenparkcapital.com>
In-Reply-To: <YQXPR0101MB09684AB7BEFA911213604467DD669@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>
References: <C643BB9C-6B61-4DAC-8CF9-CE04EA7292D0@tildenparkcapital.com> <3750001D-3F1C-4D9A-A9D9-98BCA6CA65A4@tildenparkcapital.com> <33693DE3-7FF8-4FAB-9A75-75576B88A566@tildenparkcapital.com> <D67AF317-D238-4EC0-8C7F-22D54AD5144C@pasteur.fr> <YQXPR0101MB09684AB7BEFA911213604467DD669@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>
The issue happened again, so we can say that disabling TSO and LRO on the NIC did not resolve this issue.

# ifconfig lagg0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso
# ifconfig lagg0
lagg0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
	options=8100b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWFILTER>

We can also say that the sysctl settings did not resolve this issue.

# sysctl net.inet.tcp.fast_finwait2_recycle=1
net.inet.tcp.fast_finwait2_recycle: 0 -> 1
# sysctl net.inet.tcp.finwait2_timeout=1000
net.inet.tcp.finwait2_timeout: 60000 -> 1000

* I have not rebooted the NFS Server nor have I restarted nfsd, but I do not believe that is required, as these settings are at the TCP level and I would expect new sessions to use the updated settings.

The issue occurred 5 days after a reboot of the client machines.
I ran the capture script again to make use of the situation.

#!/bin/sh

while true
do
	/bin/date >> /tmp/nfs-hang.log
	/bin/ps axHl | grep nfsd | grep -v grep >> /tmp/nfs-hang.log
	/usr/bin/procstat -kk 2947 >> /tmp/nfs-hang.log
	/usr/bin/procstat -kk 2944 >> /tmp/nfs-hang.log
	/bin/sleep 60
done

On the NFS Server:
Active Internet connections (including servers)
Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
tcp4       0      0 NFS.Server.IP.X.2049   NFS.Client.IP.X.48286  CLOSE_WAIT

On the NFS Client:
tcp        0      0 NFS.Client.IP.X:48286  NFS.Server.IP.X:2049   FIN_WAIT2

You had also asked for the output below.
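A small filter over the netstat output can flag the stuck connection pair automatically on the next occurrence. This is a minimal sketch, not part of the original capture script; it keys on the two states seen during the hang (CLOSE_WAIT on the server, FIN_WAIT2/FIN_WAIT_2 on the client):

```shell
# Sketch: flag sockets stuck in the TCP states seen during the hang.
# Feed it `netstat -an` output, e.g.:  netstat -an | check_stuck
check_stuck() {
    awk '$1 ~ /^tcp/ && ($NF == "CLOSE_WAIT" || $NF == "FIN_WAIT_2" || $NF == "FIN_WAIT2") {
        print "stuck:", $4, "->", $5, $NF
    }'
}

# demo on canned lines shaped like the netstat output above
printf '%s\n' \
  'tcp4  0  0  NFS.Server.IP.X.2049  NFS.Client.IP.X.48286  CLOSE_WAIT' \
  'tcp4  0  0  NFS.Server.IP.X.22    NFS.Client.IP.X.51000  ESTABLISHED' \
  | check_stuck
# prints: stuck: NFS.Server.IP.X.2049 -> NFS.Client.IP.X.48286 CLOSE_WAIT
```

Run from the capture loop, the filter's output could be appended to /tmp/nfs-hang.log alongside the procstat samples.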
# nfsstat -E -s
BackChannelCt  BindConnToSes
            0              0
# sysctl vfs.nfsd.request_space_throttle_count
vfs.nfsd.request_space_throttle_count: 0

I see that you are testing a patch and I look forward to seeing the results.

Jason Breitman


On Mar 21, 2021, at 6:21 PM, Rick Macklem <rmacklem@uoguelph.ca> wrote:

Youssef GHORBAL <youssef.ghorbal@pasteur.fr> wrote:
>Hi Jason,
>
>> On 17 Mar 2021, at 18:17, Jason Breitman <jbreitman@tildenparkcapital.com> wrote:
>>
>> Please review the details below and let me know if there is a setting that I should apply to my FreeBSD NFS Server or if there is a bug fix that I can apply to resolve my issue.
>> I shared this information with the linux-nfs mailing list and they believe the issue is on the server side.
>>
>> Issue
>> NFSv4 mounts periodically hang on the NFS Client.
>>
>> During this time, it is possible to manually mount from another NFS Server on the NFS Client having issues.
>> Also, other NFS Clients are successfully mounting from the NFS Server in question.
>> Rebooting the NFS Client appears to be the only solution.
>
>I had experienced a similar weird situation with periodically stuck Linux NFS clients mounting Isilon NFS servers (Isilon is FreeBSD based but they seem to have their own nfsd).

Yes, my understanding is that Isilon uses a proprietary user space nfsd and
not the kernel based RPC and nfsd in FreeBSD.

>We've had better luck and we did manage to have packet captures on both sides during the issue. The gist of it goes as follows:
>
>- Data flows correctly between SERVER and CLIENT.
>- At some point SERVER starts decreasing its TCP Receive Window until it reaches 0.
>- The client (eager to send data) can only ack data sent by SERVER.
>- When SERVER is done sending data, the client starts sending TCP Window Probes, hoping that the TCP Window opens again so it can flush its buffers.
>- SERVER responds with a TCP Zero Window to those probes.
Having the window size drop to zero is not necessarily incorrect.
If the server is overloaded (has a backlog of NFS requests), it can stop doing
soreceive() on the socket (so the socket rcv buffer can fill up and the TCP window
closes). This results in "backpressure" to stop the NFS client from flooding the
NFS server with requests.
--> However, once the backlog is handled, the nfsd should start to soreceive()
again and this should cause the window to open back up.
--> Maybe this is broken in the socket/TCP code. I quickly got lost in
tcp_output() when it decides what to do about the rcvwin.

>- After 6 minutes (the NFS server default idle timeout) SERVER gracefully closes the TCP connection, sending a FIN packet (and still a TCP Window of 0).

This probably does not happen for Jason's case, since the 6 minute timeout
is disabled when the TCP connection is assigned as a backchannel (most likely
the case for NFSv4.1).

>- CLIENT ACKs that FIN.
>- SERVER goes into FIN_WAIT_2 state.
>- CLIENT closes its half of the socket and goes into LAST_ACK state.
>- FIN is never sent by the client, since there is still data in its SendQ and the receiver's TCP Window is still 0. At this stage the client starts sending TCP Window Probes again and again, hoping that the server opens its TCP Window so it can flush its buffers and terminate its side of the socket.
>- SERVER keeps responding with a TCP Zero Window to those probes.
>=> The last two steps go on and on for hours/days, freezing the NFS mount bound to that TCP session.
>
>If we had a situation where CLIENT was responsible for closing the TCP Window (and initiating the TCP FIN first) and the server wanted to send data, we'd end up in the same state as you, I think.
>
>We've never found the root cause of why the SERVER decided to close the TCP Window and no longer accept data. The fix on the Isilon side was to recycle the FIN_WAIT_2 sockets more aggressively (net.inet.tcp.fast_finwait2_recycle=1 & net.inet.tcp.finwait2_timeout=5000). Once the socket is recycled, at the next occurrence of a CLIENT TCP Window probe, SERVER sends a RST, triggering the teardown of the session on the client side, a new TCP handshake, etc., and traffic flows again (NFS starts responding).
>
>To avoid rebooting the client (and before the aggressive FIN_WAIT_2 recycling was implemented on the Isilon side) we added a check script on the client that detects LAST_ACK sockets on the client and, through an iptables rule, enforces a TCP RST. Something like: -A OUTPUT -p tcp -d $nfs_server_addr --sport $local_port -j REJECT --reject-with tcp-reset (the script removes this iptables rule as soon as the LAST_ACK disappears).
>
>The bottom line would be to have a packet capture during the outage (client and/or server side); it will show you at least the shape of the TCP exchange when NFS is stuck.

Interesting story and good work w.r.t. sleuthing, Youssef, thanks.
I looked at Jason's log and it shows everything is OK w.r.t. the nfsd threads.
(They're just waiting for RPC requests.)
However, I do now think I know why the soclose() does not happen.
When the TCP connection is assigned as a backchannel, that takes a reference
count on the structure. This refcnt won't be released until the connection is
replaced by a BindConnectionToSession operation from the client. But that won't
happen until the client creates a new TCP connection.
--> No refcnt release --> no refcnt of 0 --> no soclose().

I've created the attached patch (completely different from the previous one)
that adds soshutdown(SHUT_WR) calls in the three places where the TCP
connection is going away. This seems to get it past CLOSE_WAIT without a
soclose().
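The LAST_ACK check script Youssef describes above might look roughly like the sketch below. The parsing helper and its argument are assumed names (nothing here is from the original script), and the Linux netstat column layout is assumed; the iptables rule itself is the one quoted in the message and would need root to apply.

```shell
# Sketch of the client-side workaround described above (assumed names).
# last_ack_ports extracts the local ports of sockets stuck in LAST_ACK
# toward the NFS server from `netstat -ant`-style input
# (Linux columns assumed: $4 local addr, $5 foreign addr, $6 state).
last_ack_ports() {
    awk -v srv="$1" '$6 == "LAST_ACK" && index($5, srv) {
        n = split($4, a, ":"); print a[n]
    }'
}

# For each stuck port, the rule from the message would be added (root only):
#   iptables -A OUTPUT -p tcp -d "$nfs_server_addr" --sport "$port" \
#       -j REJECT --reject-with tcp-reset
# and deleted again (same arguments with -D) once the LAST_ACK
# socket disappears.

# demo on a canned netstat-style line
printf 'tcp 0 0 10.0.0.2:48286 10.0.0.1:2049 LAST_ACK\n' | last_ack_ports 10.0.0.1
# prints: 48286
```

In practice the helper would be fed live output (`netstat -ant | last_ack_ports "$nfs_server_addr"`) from a cron job or loop, as Youssef's description implies.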
--> I know you are not comfortable with patching your server, but I do think
this change will get the socket shutdown to complete.

There are a couple more things you can check on the server...

# nfsstat -E -s
--> Look for the count under "BindConnToSes".
--> If non-zero, backchannels have been assigned.

# sysctl -a | fgrep request_space_throttle_count
--> If non-zero, the server has been overloaded at some point.

I think the attached patch might work around the problem.
The code that should open up the receive window needs to be checked.
I am also looking at enabling the 6 minute timeout when a backchannel is
assigned.

rick

Youssef

_______________________________________________
freebsd-net@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"

<xprtdied.patch>