Date:      Fri, 2 Apr 2021 18:15:16 +0200
From:      tuexen@freebsd.org
To:        Rick Macklem <rmacklem@uoguelph.ca>
Cc:        Youssef GHORBAL <youssef.ghorbal@pasteur.fr>, Jason Breitman <jbreitman@tildenparkcapital.com>, "freebsd-net@freebsd.org" <freebsd-net@freebsd.org>
Subject:   Re: NFS Mount Hangs
Message-ID:  <DEF8564D-0FE9-4C2C-9F3B-9BCDD423377C@freebsd.org>
In-Reply-To: <YQXPR0101MB0968C44C7C82A3EB64F384D0DD7B9@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>
References:  <C643BB9C-6B61-4DAC-8CF9-CE04EA7292D0@tildenparkcapital.com> <3750001D-3F1C-4D9A-A9D9-98BCA6CA65A4@tildenparkcapital.com> <33693DE3-7FF8-4FAB-9A75-75576B88A566@tildenparkcapital.com> <D67AF317-D238-4EC0-8C7F-22D54AD5144C@pasteur.fr> <YQXPR0101MB09684AB7BEFA911213604467DD669@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM> <C87066D3-BBF1-44E1-8398-E4EB6903B0F2@tildenparkcapital.com> <8E745920-1092-4312-B251-B49D11FE8028@pasteur.fr> <YQXPR0101MB0968C44C7C82A3EB64F384D0DD7B9@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>

> On 2. Apr 2021, at 02:07, Rick Macklem <rmacklem@uoguelph.ca> wrote:
>
> I hope you don't mind a top post...
> I've been testing network partitioning between the only Linux client
> I have (5.2 kernel) and a FreeBSD server with the xprtdied.patch
> (does soshutdown(..SHUT_WR) when it knows the socket is broken)
> applied to it.
>
> I'm not enough of a TCP guy to know if this is useful, but here's what
> I see...
>
> While partitioned:
> On the FreeBSD server end, the socket either goes to CLOSED during
> the network partition or stays ESTABLISHED.
If it goes to CLOSED, you called shutdown(..., SHUT_WR) and the peer also
sent a FIN, but you never called close() on the socket.
If the socket stays in ESTABLISHED, I guess there is no communication
ongoing, and therefore the server does not even detect that the peer
is not reachable.
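For readers less familiar with the sockets API, here is a minimal userland
sketch of that sequence (plain C with a placeholder address and port, not the
kernel soshutdown() path the patch uses): shutdown(SHUT_WR) sends our FIN,
reading until EOF consumes the peer's FIN, and because close() is never
called the socket lingers and eventually shows up in netstat as CLOSED.

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	struct sockaddr_in sin;
	char buf[512];
	int s = socket(AF_INET, SOCK_STREAM, 0);

	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_port = htons(2049);                      /* placeholder port */
	inet_pton(AF_INET, "192.0.2.10", &sin.sin_addr); /* placeholder address */
	if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) == -1)
		return (1);

	shutdown(s, SHUT_WR);                  /* send our FIN: FIN_WAIT_1/2 */

	while (read(s, buf, sizeof(buf)) > 0)  /* the peer's FIN appears as EOF */
		;

	/*
	 * Both FINs have been exchanged, so TCP is done, but close(s) is
	 * deliberately never called; netstat keeps listing the socket and
	 * it ends up reported as CLOSED.
	 */
	for (;;)
		sleep(60);
}
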
> On the Linux end, the socket seems to remain ESTABLISHED for a
> little while, and then disappears.
So how does Linux detect the peer is not reachable?
>
> After unpartitioning:
> On the FreeBSD server end, you get another socket showing up at
> the same port#
> Active Internet connections (including servers)
> Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
> tcp4       0      0 nfsv4-new3.nfsd        nfsv4-linux.678        ESTABLISHED
> tcp4       0      0 nfsv4-new3.nfsd        nfsv4-linux.678        CLOSED
>
> The Linux client shows the same connection ESTABLISHED.
> (The mount sometimes reports an error. I haven't looked at packet
> traces to see if it retries RPCs or why the errors occur.)
> --> However I never get hangs.
> Sometimes it goes to SYN_SENT for a while and the FreeBSD server
> shows FIN_WAIT_1, but then both ends go to ESTABLISHED and the
> mount starts working again.
>
> The most obvious thing is that the Linux client always keeps using
> the same port#. (The FreeBSD client will use a different port# when
> it does a TCP reconnect after no response from the NFS server for
> a little while.)
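> As an illustration only (not the Linux sunrpc code), reconnecting from the
> same local port means the client has to bind() that port explicitly before
> connect(); without the bind() the kernel picks a fresh ephemeral port,
> which is what the FreeBSD reconnect ends up doing. Addresses and ports
> below are placeholders:
>
> #include <sys/types.h>
> #include <sys/socket.h>
> #include <netinet/in.h>
> #include <arpa/inet.h>
> #include <string.h>
> #include <unistd.h>
>
> static int
> reconnect_same_port(in_port_t local_port)
> {
>     struct sockaddr_in local, server;
>     int one = 1;
>     int s = socket(AF_INET, SOCK_STREAM, 0);
>
>     /* Allow rebinding while the previous connection still lingers. */
>     setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
>
>     memset(&local, 0, sizeof(local));
>     local.sin_family = AF_INET;
>     local.sin_port = htons(local_port);   /* e.g. 678, as in the netstat above */
>     local.sin_addr.s_addr = htonl(INADDR_ANY);
>     if (bind(s, (struct sockaddr *)&local, sizeof(local)) == -1) {
>         close(s);
>         return (-1);
>     }
>
>     memset(&server, 0, sizeof(server));
>     server.sin_family = AF_INET;
>     server.sin_port = htons(2049);                       /* nfsd */
>     inet_pton(AF_INET, "192.0.2.10", &server.sin_addr);  /* placeholder */
>     if (connect(s, (struct sockaddr *)&server, sizeof(server)) == -1) {
>         close(s);
>         return (-1);
>     }
>     return (s);
> }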
>
> What do those of you who are TCP conversant think?
I guess you are never calling close() on the socket for which the
connection state is CLOSED.

Best regards
Michael
>
> rick
> ps: I can capture packets while doing this, if anyone has a use
>      for them.
>
> ________________________________________
> From: owner-freebsd-net@freebsd.org <owner-freebsd-net@freebsd.org> on behalf of Youssef GHORBAL <youssef.ghorbal@pasteur.fr>
> Sent: Saturday, March 27, 2021 6:57 PM
> To: Jason Breitman
> Cc: Rick Macklem; freebsd-net@freebsd.org
> Subject: Re: NFS Mount Hangs
>
> On 27 Mar 2021, at 13:20, Jason Breitman <jbreitman@tildenparkcapital.com> wrote:
>
> The issue happened again so we can say that disabling TSO and LRO on the NIC did not resolve this issue.
> # ifconfig lagg0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso
> # ifconfig lagg0
> lagg0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
>        options=8100b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWFILTER>
>
> We can also say that the sysctl settings did not resolve this issue.
>
> # sysctl net.inet.tcp.fast_finwait2_recycle=1
> net.inet.tcp.fast_finwait2_recycle: 0 -> 1
>
> # sysctl net.inet.tcp.finwait2_timeout=1000
> net.inet.tcp.finwait2_timeout: 60000 -> 1000
>
> I don't think those will do anything in your case since the FIN_WAIT2 sockets are on the client side and those sysctls are for FreeBSD.
> By the way, it seems that Linux automatically recycles TCP sessions in FIN_WAIT2 after 60 seconds (sysctl net.ipv4.tcp_fin_timeout):
>
> tcp_fin_timeout (integer; default: 60; since Linux 2.2)
>              This specifies how many seconds to wait for a final FIN
>              packet before the socket is forcibly closed.  This is
>              strictly a violation of the TCP specification, but
>              required to prevent denial-of-service attacks.  In Linux
>              2.2, the default value was 180.
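>
> A quick way to confirm what value the client is actually running with
> (just a sketch, equivalent to `cat /proc/sys/net/ipv4/tcp_fin_timeout`):
>
> #include <stdio.h>
>
> int
> main(void)
> {
>     FILE *f = fopen("/proc/sys/net/ipv4/tcp_fin_timeout", "r");
>     int secs;
>
>     if (f == NULL || fscanf(f, "%d", &secs) != 1)
>         return (1);
>     printf("FIN_WAIT2 timeout: %d seconds\n", secs);
>     fclose(f);
>     return (0);
> }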
>
> So I don't get why it gets stuck in the FIN_WAIT2 state anyway.
>
> You really need to have a packet capture during the outage (client and server side) so you'll get the over-the-wire chat and can start speculating from there.
> No need to capture the beginning of the outage for now. All you have to do is run a tcpdump for 10 minutes or so when you notice a client is stuck.
>
> * I have not rebooted the NFS Server nor have I restarted nfsd, but I do not believe that is required as these settings are at the TCP level and I would expect new sessions to use the updated settings.
>
> The issue occurred 5 days after a reboot of the client machines.
> I ran the capture information again to make use of the situation.
>
> #!/bin/sh
>
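> # Every 60 seconds, append the nfsd process/thread list and the
> # in-kernel stacks (procstat -kk) of the nfsd pids 2947 and 2944
> # to /tmp/nfs-hang.log.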
> while true
> do
>  /bin/date >> /tmp/nfs-hang.log
>  /bin/ps axHl | grep nfsd | grep -v grep >> /tmp/nfs-hang.log
>  /usr/bin/procstat -kk 2947 >> /tmp/nfs-hang.log
>  /usr/bin/procstat -kk 2944 >> /tmp/nfs-hang.log
>  /bin/sleep 60
> done
>
> On the NFS Server
> Active Internet connections (including servers)
> Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
> tcp4       0      0 NFS.Server.IP.X.2049   NFS.Client.IP.X.48286  CLOSE_WAIT
>
> On the NFS Client
> tcp        0      0 NFS.Client.IP.X:48286  NFS.Server.IP.X:2049   FIN_WAIT2
>
> You had also asked for the output below.
>
> # nfsstat -E -s
> BackChannelCt  BindConnToSes
>             0              0
>
> # sysctl vfs.nfsd.request_space_throttle_count
> vfs.nfsd.request_space_throttle_count: 0
>
> I see that you are testing a patch and I look forward to seeing the results.
>
> Jason Breitman
>
> On Mar 21, 2021, at 6:21 PM, Rick Macklem <rmacklem@uoguelph.ca> wrote:
>
> Youssef GHORBAL <youssef.ghorbal@pasteur.fr> wrote:
>> Hi Jason,
>>
>>> On 17 Mar 2021, at 18:17, Jason Breitman <jbreitman@tildenparkcapital.com> wrote:
>>>
>>> Please review the details below and let me know if there is a setting that I should apply to my FreeBSD NFS Server or if there is a bug fix that I can apply to resolve my issue.
>>> I shared this information with the linux-nfs mailing list and they believe the issue is on the server side.
>>>
>>> Issue
>>> NFSv4 mounts periodically hang on the NFS Client.
>>>
>>> During this time, it is possible to manually mount from another NFS Server on the NFS Client having issues.
>>> Also, other NFS Clients are successfully mounting from the NFS Server in question.
>>> Rebooting the NFS Client appears to be the only solution.
>>
>> I had experienced a similar weird situation with periodically stuck Linux NFS clients mounting Isilon NFS servers (Isilon is FreeBSD based but they seem to have their own nfsd).
> Yes, my understanding is that Isilon uses a proprietary user space nfsd and
> not the kernel based RPC and nfsd in FreeBSD.
>
>> We've had better luck and we did manage to have packet captures on both sides during the issue. The gist of it goes as follows:
>>
>> - Data flows correctly between SERVER and the CLIENT.
>> - At some point SERVER starts decreasing its TCP Receive Window until it reaches 0.
>> - The client (eager to send data) can only ack data sent by SERVER.
>> - When SERVER was done sending data, the client starts sending TCP Window Probes hoping that the TCP Window opens again so it can flush its buffers.
>> - SERVER responds with a TCP Zero Window to those probes.
> Having the window size drop to zero is not necessarily incorrect.
> If the server is overloaded (has a backlog of NFS requests), it can stop doing
> soreceive() on the socket (so the socket rcv buffer can fill up and the TCP window
> closes). This results in "backpressure" to stop the NFS client from flooding the
> NFS server with requests.
> --> However, once the backlog is handled, the nfsd should start to soreceive()
>     again and this should cause the window to open back up.
> --> Maybe this is broken in the socket/TCP code. I quickly got lost in
>     tcp_output() when it decides what to do about the rcvwin.
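>
> For anyone who wants to see the zero-window behaviour outside of nfsd, a
> small userland sketch (placeholder port, not the kernel code) shows the
> same effect: a receiver that stops reading lets its socket receive buffer
> fill, the advertised window drops to 0, and the sender is left probing it.
>
> #include <sys/types.h>
> #include <sys/socket.h>
> #include <netinet/in.h>
> #include <string.h>
> #include <unistd.h>
>
> int
> main(void)
> {
>     struct sockaddr_in sin;
>     char buf[4096];
>     int lfd, cfd;
>
>     lfd = socket(AF_INET, SOCK_STREAM, 0);
>     memset(&sin, 0, sizeof(sin));
>     sin.sin_family = AF_INET;
>     sin.sin_port = htons(12049);             /* placeholder port */
>     sin.sin_addr.s_addr = htonl(INADDR_ANY);
>     bind(lfd, (struct sockaddr *)&sin, sizeof(sin));
>     listen(lfd, 1);
>     cfd = accept(lfd, NULL, NULL);
>
>     /*
>      * "Overloaded" phase: stop reading.  The peer can send only until
>      * our receive buffer fills; after that tcpdump on the peer shows
>      * zero-window advertisements and window probes, and netstat here
>      * shows a large Recv-Q.
>      */
>     sleep(600);
>
>     /*
>      * "Backlog handled" phase: read again.  Draining the buffer is
>      * what should make TCP open the window back up.
>      */
>     while (read(cfd, buf, sizeof(buf)) > 0)
>         ;
>     close(cfd);
>     close(lfd);
>     return (0);
> }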
>
>> - After 6 minutes (the NFS server default Idle timeout) SERVER gracefully closes the TCP connection, sending a FIN packet (and still a TCP Window of 0).
> This probably does not happen for Jason's case, since the 6 minute timeout
> is disabled when the TCP connection is assigned as a backchannel (most likely
> the case for NFSv4.1).
>
>> - CLIENT ACKs that FIN.
>> - SERVER goes into the FIN_WAIT_2 state.
>> - CLIENT closes its half of the socket and goes into the LAST_ACK state.
>> - A FIN is never sent by the client since there is still data in its SendQ and the receiver's TCP Window is still 0. At this stage the client starts sending TCP Window Probes again and again, hoping that the server opens its TCP Window so it can flush its buffers and terminate its side of the socket.
>> - SERVER keeps responding with a TCP Zero Window to those probes.
>> => The last two steps go on and on for hours/days, freezing the NFS mount bound to that TCP session.
>>
>> If we had a situation where CLIENT was responsible for closing the TCP Window (and initiating the TCP FIN first) and the server wanted to send data, we would end up in the same state as you, I think.
>>
>> We've never found the root cause of why the SERVER decided to close the TCP Window and no longer accept data; the fix on the Isilon part was to recycle the FIN_WAIT_2 sockets more aggressively (net.inet.tcp.fast_finwait2_recycle=1 & net.inet.tcp.finwait2_timeout=5000). Once the socket is recycled, at the next occurrence of a CLIENT TCP Window probe, SERVER sends a RST, triggering the teardown of the session on the client side, a new TCP handshake, etc., and traffic flows again (NFS starts responding).
>>
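> Just as a sketch, those two knobs can also be set programmatically on
> FreeBSD with sysctlbyname(), using the same values as the Isilon fix above
> (requires root, same effect as the sysctl commands earlier in the thread):
>
> #include <sys/types.h>
> #include <sys/sysctl.h>
> #include <stddef.h>
>
> int
> main(void)
> {
>     int one = 1, ms = 5000;
>
>     /* Recycle FIN_WAIT_2 sockets aggressively, after 5000 ms. */
>     if (sysctlbyname("net.inet.tcp.fast_finwait2_recycle", NULL, NULL,
>         &one, sizeof(one)) == -1)
>         return (1);
>     if (sysctlbyname("net.inet.tcp.finwait2_timeout", NULL, NULL,
>         &ms, sizeof(ms)) == -1)
>         return (1);
>     return (0);
> }
>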
>> To avoid rebooting the client (and before the aggressive FIN_WAIT_2 recycling was implemented on the Isilon side) we added a check script on the client that detects LAST_ACK sockets on the client and, through an iptables rule, enforces a TCP RST. Something like: -A OUTPUT -p tcp -d $nfs_server_addr --sport $local_port -j REJECT --reject-with tcp-reset (the script removes this iptables rule as soon as the LAST_ACK disappears).
>>
>> The bottom line would be to have a packet capture during the outage (client and/or server side); it will show you at least the shape of the TCP exchange when NFS is stuck.
> Interesting story and good work w.r.t. sleuthing, Youssef, thanks.
>
> I looked at Jason's log and it shows everything is ok w.r.t. the nfsd threads.
> (They're just waiting for RPC requests.)
> However, I do now think I know why the soclose() does not happen.
> When the TCP connection is assigned as a backchannel, that takes a reference
> count on the structure. This refcnt won't be released until the connection is
> replaced by a BindConnectionToSession operation from the client. But that won't
> happen until the client creates a new TCP connection.
> --> No refcnt release --> no refcnt of 0 --> no soclose().
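>
> Illustrative only (the names are made up, this is not the sys/rpc code),
> the refcount pattern described above looks roughly like this; while the
> backchannel still holds its reference, the release that would close the
> socket can never run:
>
> #include <stdio.h>
>
> struct conn {
>     int refcnt;     /* the real code protects this with a lock */
>     int sockfd;     /* stands in for the struct socket pointer */
> };
>
> static void
> conn_release(struct conn *cp)
> {
>     if (--cp->refcnt == 0) {
>         /* the soclose() equivalent: only reached at refcnt 0 */
>         printf("closing fd %d\n", cp->sockfd);
>     }
> }
>
> int
> main(void)
> {
>     /* one reference for the xprt, one for the backchannel assignment */
>     struct conn c = { .refcnt = 2, .sockfd = 42 };
>
>     conn_release(&c);   /* xprt goes away: 2 -> 1, no close            */
>     conn_release(&c);   /* only happens once BindConnectionToSession   */
>                         /* replaces the backchannel; until then the    */
>                         /* socket is never closed                      */
>     return (0);
> }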
>
> I've created the attached patch (completely different from the previous one)
> that adds soshutdown(SHUT_WR) calls in the three places where the TCP
> connection is going away. This seems to get it past CLOSE_WAIT without a
> soclose().
> --> I know you are not comfortable with patching your server, but I do think
>     this change will get the socket shutdown to complete.
>
> There are a couple more things you can check on the server...
> # nfsstat -E -s
> --> Look for the count under "BindConnToSes".
> --> If non-zero, backchannels have been assigned
> # sysctl -a | fgrep request_space_throttle_count
> --> If non-zero, the server has been overloaded at some point.
>
> I think the attached patch might work around the problem.
> The code that should open up the receive window needs to be checked.
> I am also looking at enabling the 6 minute timeout when a backchannel is
> assigned.
>
> rick
>
> Youssef
>
> _______________________________________________
> freebsd-net@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-net
> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
> <xprtdied.patch>
>
> <nfs-hang.log.gz>
>
> _______________________________________________
> freebsd-net@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-net
> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"



