From: Jason Breitman <jbreitman@tildenparkcapital.com>
Subject: Re: NFS Mount Hangs
Date: Sat, 27 Mar 2021 08:20:07 -0400
To: Rick Macklem
Cc: Youssef GHORBAL, freebsd-net@freebsd.org
The issue happened again, so we can say that disabling TSO and LRO on the NIC
did not resolve this issue.

# ifconfig lagg0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso
# ifconfig lagg0
lagg0: flags=8943 metric 0 mtu 1500
	options=8100b8

We can also say that the sysctl settings did not resolve this issue.

# sysctl net.inet.tcp.fast_finwait2_recycle=1
net.inet.tcp.fast_finwait2_recycle: 0 -> 1
# sysctl net.inet.tcp.finwait2_timeout=1000
net.inet.tcp.finwait2_timeout: 60000 -> 1000

* I have not rebooted the NFS Server nor have I restarted nfsd, but I do not
believe that is required, as these settings are at the TCP level and I would
expect new sessions to use the updated settings.

The issue occurred 5 days after a reboot of the client machines.
I ran the capture script again to make use of the situation.

#!/bin/sh
while true
do
/bin/date >> /tmp/nfs-hang.log
/bin/ps axHl | grep nfsd | grep -v grep >> /tmp/nfs-hang.log
/usr/bin/procstat -kk 2947 >> /tmp/nfs-hang.log
/usr/bin/procstat -kk 2944 >> /tmp/nfs-hang.log
/bin/sleep 60
done

On the NFS Server
Active Internet connections (including servers)
Proto Recv-Q Send-Q Local Address          Foreign Address         (state)
tcp4       0      0 NFS.Server.IP.X.2049   NFS.Client.IP.X.48286   CLOSE_WAIT

On the NFS Client
tcp        0      0 NFS.Client.IP.X:48286  NFS.Server.IP.X:2049    FIN_WAIT2
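A variant of the capture loop above that also records any NFS (port 2049)
session stuck in CLOSE_WAIT or FIN_WAIT_2 on each pass could look like the
sketch below (the nfsd PIDs 2947 and 2944 come from the script above and are
machine-specific):

#!/bin/sh
# Log nfsd thread stacks plus the TCP state of NFS sessions, once a minute.
while true
do
    /bin/date >> /tmp/nfs-hang.log
    # The [n]fsd trick avoids matching the grep process itself.
    /bin/ps axHl | grep '[n]fsd' >> /tmp/nfs-hang.log
    /usr/bin/procstat -kk 2947 >> /tmp/nfs-hang.log
    /usr/bin/procstat -kk 2944 >> /tmp/nfs-hang.log
    # Record any port 2049 sessions sitting in CLOSE_WAIT or FIN_WAIT_2.
    netstat -an | egrep '\.2049 .*(CLOSE_WAIT|FIN_WAIT_2)' >> /tmp/nfs-hang.log
    /bin/sleep 60
done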
You had also asked for the output below.

# nfsstat -E -s
BackChannelCt BindConnToSes
            0             0

# sysctl vfs.nfsd.request_space_throttle_count
vfs.nfsd.request_space_throttle_count: 0

I see that you are testing a patch and I look forward to seeing the results.

Jason Breitman


On Mar 21, 2021, at 6:21 PM, Rick Macklem wrote:

Youssef GHORBAL wrote:
>Hi Jason,
>
>> On 17 Mar 2021, at 18:17, Jason Breitman wrote:
>>
>> Please review the details below and let me know if there is a setting that
>> I should apply to my FreeBSD NFS Server or if there is a bug fix that I
>> can apply to resolve my issue.
>> I shared this information with the linux-nfs mailing list and they believe
>> the issue is on the server side.
>>
>> Issue
>> NFSv4 mounts periodically hang on the NFS Client.
>>
>> During this time, it is possible to manually mount from another NFS Server
>> on the NFS Client having issues.
>> Also, other NFS Clients are successfully mounting from the NFS Server in
>> question.
>> Rebooting the NFS Client appears to be the only solution.
>
>I had experienced a similar weird situation with periodically stuck Linux NFS clients
>mounting Isilon NFS servers (Isilon is FreeBSD based but they seem to have their
>own nfsd)
Yes, my understanding is that Isilon uses a proprietary user space nfsd and
not the kernel based RPC and nfsd in FreeBSD.

>We've had better luck and we did manage to have packet captures on both sides
>during the issue. The gist of it goes as follows:
>
>- Data flows correctly between SERVER and the CLIENT
>- At some point SERVER starts decreasing its TCP Receive Window until it reaches 0
>- The client (eager to send data) can only ack data sent by SERVER.
>- When SERVER was done sending data, the client starts sending TCP Window
>Probes hoping that the TCP Window opens again so it can flush its buffers.
>- SERVER responds with a TCP Zero Window to those probes.
Having the window size drop to zero is not necessarily incorrect.
If the server is overloaded (has a backlog of NFS requests), it can stop doing
soreceive() on the socket (so the socket rcv buffer can fill up and the TCP window
closes). This results in "backpressure" to stop the NFS client from flooding the
NFS server with requests.
--> However, once the backlog is handled, the nfsd should start to soreceive()
    again and this should cause the window to open back up.
--> Maybe this is broken in the socket/TCP code. I quickly got lost in
    tcp_output() when it decides what to do about the rcvwin.

>- After 6 minutes (the NFS server default Idle timeout) SERVER gracefully closes the
>TCP connection sending a FIN Packet (and still a TCP Window 0)
This probably does not happen for Jason's case, since the 6 minute timeout
is disabled when the TCP connection is assigned as a backchannel (most likely
the case for NFSv4.1).

>- CLIENT ACKs that FIN.
>- SERVER goes in FIN_WAIT_2 state
>- CLIENT closes its half of the socket and goes in LAST_ACK state.
>- FIN is never sent by the client since there is still data in its SendQ and receiver TCP
>Window is still 0. At this stage the client starts sending TCP Window Probes again
>and again hoping that the server opens its TCP Window so it can flush its buffers
>and terminate its side of the socket.
>- SERVER keeps responding with a TCP Zero Window to those probes.
>=> The last two steps go on and on for hours/days freezing the NFS mount bound
>to that TCP session.
>
>If we had a situation where CLIENT was responsible for closing the TCP Window (and
>initiating the TCP FIN first) and server wanting to send data we'll end up in the same
>state as you I think.
>
>We've never had the root cause of why the SERVER decided to close the TCP
>Window and no longer accept data; the fix on the Isilon part was to recycle more
>aggressively the FIN_WAIT_2 sockets (net.inet.tcp.fast_finwait2_recycle=1 &
>net.inet.tcp.finwait2_timeout=5000). Once the socket recycled and at the next
>occurrence of CLIENT TCP Window probe, SERVER sends a RST, triggering the
>teardown of the session on the client side, a new TCP handshake, etc. and traffic
>flows again (NFS starts responding)
>
>To avoid rebooting the client (and before the aggressive FIN_WAIT_2 was
>implemented on the Isilon side) we've added a check script on the client that detects
>LAST_ACK sockets on the client and through an iptables rule enforces a TCP RST,
>Something like: -A OUTPUT -p tcp -d $nfs_server_addr --sport $local_port -j REJECT
>--reject-with tcp-reset (the script removes this iptables rule as soon as the LAST_ACK
>disappears)
>
>The bottom line would be to have a packet capture during the outage (client and/or
>server side); it will show you at least the shape of the TCP exchange when NFS is
>stuck.
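[Youssef's check script itself was not posted, but on a Linux client the
approach he describes might look roughly like the sketch below. The server
address is a placeholder, and parsing the stuck local port out of netstat is
an assumption about how his script found it.]

#!/bin/sh
# Detect an NFS socket stuck in LAST_ACK and force a reset, as described
# above. Runs on the Linux NFS client; nfs_server_addr is a placeholder.
nfs_server_addr="NFS.Server.IP.X"

# Pull the local port of a LAST_ACK socket whose peer is the NFS server.
local_port=$(netstat -tn | awk -v srv="$nfs_server_addr" \
    '$5 ~ srv && $6 == "LAST_ACK" { split($4, a, ":"); print a[2]; exit }')

if [ -n "$local_port" ]; then
    # Reject further output on that flow with a TCP RST, which tears down
    # the stuck session locally.
    iptables -A OUTPUT -p tcp -d "$nfs_server_addr" --sport "$local_port" \
        -j REJECT --reject-with tcp-reset
    # Wait for the LAST_ACK socket to disappear, then remove the rule.
    while netstat -tn | grep "$nfs_server_addr" | grep -q LAST_ACK; do
        sleep 1
    done
    iptables -D OUTPUT -p tcp -d "$nfs_server_addr" --sport "$local_port" \
        -j REJECT --reject-with tcp-reset
fi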
Interesting story and good work w.r.t. sleuthing, Youssef, thanks.
I looked at Jason's log and it shows everything is ok w.r.t. the nfsd threads.
(They're just waiting for RPC requests.)

However, I do now think I know why the soclose() does not happen.
When the TCP connection is assigned as a backchannel, that takes a reference
cnt on the structure. This refcnt won't be released until the connection is
replaced by a BindConnectionToSession operation from the client. But that won't
happen until the client creates a new TCP connection.
--> No refcnt release --> no refcnt of 0 --> no soclose().

I've created the attached patch (completely different from the previous one)
that adds soshutdown(SHUT_WR) calls in the three places where the TCP
connection is going away. This seems to get it past CLOSE_WAIT without a
soclose().
--> I know you are not comfortable with patching your server, but I do think
    this change will get the socket shutdown to complete.

There are a couple more things you can check on the server...
# nfsstat -E -s
--> Look for the count under "BindConnToSes".
--> If non-zero, backchannels have been assigned.
# sysctl -a | fgrep request_space_throttle_count
--> If non-zero, the server has been overloaded at some point.

I think the attached patch might work around the problem.
The code that should open up the receive window needs to be checked.
I am also looking at enabling the 6 minute timeout when a backchannel is
assigned.

rick
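[A closing note on the packet capture Youssef recommends: the zero-window
exchange he describes can be isolated with a narrow filter so a multi-day
capture stays small. A minimal sketch, assuming the NFS traffic rides on
lagg0 as in the ifconfig output above; the interface and output path would
differ per system.]

# Record only NFS (port 2049) segments that advertise a zero TCP receive
# window; the window field occupies bytes 14-15 of the TCP header.
tcpdump -i lagg0 -n -w /tmp/nfs-zerowin.pcap 'port 2049 and tcp[14:2] = 0'

[Dropping the window-size test and capturing plain 'port 2049' during an
outage would also record the client's window probes, at the cost of a
larger file.]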