Date:      Tue, 6 Apr 2021 09:12:40 +0200
From:      tuexen@freebsd.org
To:        Rick Macklem <rmacklem@uoguelph.ca>
Cc:        "Scheffenegger, Richard" <Richard.Scheffenegger@netapp.com>, Youssef GHORBAL <youssef.ghorbal@pasteur.fr>, "freebsd-net@freebsd.org" <freebsd-net@freebsd.org>
Subject:   Re: NFS Mount Hangs
Message-ID:  <E4A51EAD-8F9A-49BB-8852-F9D61BDD9EA4@freebsd.org>
In-Reply-To: <YQXPR0101MB096826445C85921C8F6410A2DD779@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>
References:  <C643BB9C-6B61-4DAC-8CF9-CE04EA7292D0@tildenparkcapital.com> <3750001D-3F1C-4D9A-A9D9-98BCA6CA65A4@tildenparkcapital.com> <33693DE3-7FF8-4FAB-9A75-75576B88A566@tildenparkcapital.com> <D67AF317-D238-4EC0-8C7F-22D54AD5144C@pasteur.fr> <YQXPR0101MB09684AB7BEFA911213604467DD669@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM> <C87066D3-BBF1-44E1-8398-E4EB6903B0F2@tildenparkcapital.com> <8E745920-1092-4312-B251-B49D11FE8028@pasteur.fr> <YQXPR0101MB0968C44C7C82A3EB64F384D0DD7B9@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM> <DEF8564D-0FE9-4C2C-9F3B-9BCDD423377C@freebsd.org> <YQXPR0101MB0968E0A17D8BCACFAF132225DD7A9@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM> <SN4PR0601MB3728E392BCA494EAD49605FE86789@SN4PR0601MB3728.namprd06.prod.outlook.com> <YQXPR0101MB09686B4F921B96DCAFEBF874DD789@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM> <765CE1CD-6AAB-4BEF-97C6-C2A1F0FF4AC5@freebsd.org> <YQXPR0101MB096876B44F33BAD8991B62C8DD789@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM> <2B189169-C0C9-4DE6-A01A-BE916F10BABA@freebsd.org> <YQXPR0101MB09688645194907BBAA6E7C7ADD789@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM> <BF5D23D3-5DBD-4E29-9C6B-F4CCDC205353@freebsd.org> <YQXPR0101MB096826445C85921C8F6410A2DD779@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>

> On 6. Apr 2021, at 01:24, Rick Macklem <rmacklem@uoguelph.ca> wrote:
>
> tuexen@freebsd.org wrote:
> [stuff snipped]
>> OK. What is the FreeBSD version you are using?
> main Dec. 23, 2020.
>
>>
>> It seems that the TCP connection on the FreeBSD side is still alive;
>> Linux has decided to start a new TCP connection using the old
>> port numbers. So it sends a SYN. The response is a challenge ACK
>> and Linux responds with an RST. This looks good so far. However,
>> FreeBSD should accept the RST and kill the TCP connection. The
>> next SYN from the Linux side would establish a new TCP connection.
>>
>> So I'm wondering why the RST is not accepted. I made the timestamp
>> checking stricter but introduced a bug where RST segments without
>> timestamps were ignored. This was fixed.
>>
>> Introduced in main on 2020/11/09:
>> https://svnweb.freebsd.org/changeset/base/367530
>> Introduced in stable/12 on 2020/11/30:
>> https://svnweb.freebsd.org/changeset/base/36818
>> Fix in main on 2021/01/13:
>> https://cgit.FreeBSD.org/src/commit/?id=cc3c34859eab1b317d0f38731355b53f7d978c97
>> Fix in stable/12 on 2021/01/24:
>> https://cgit.FreeBSD.org/src/commit/?id=d05d908d6d3c85479c84c707f931148439ae826b
>>
>> Are you using a version which is affected by this bug?
> I was. Now I've applied the patch.
> Bad news: it did not fix the problem.
> It still gets into an endless "ignore RST" state and stays ESTABLISHED when
> the Send-Q is empty.
OK. Let us focus on this case.

Could you:
1. sudo sysctl net.inet.tcp.log_debug=1
2. repeat the situation where RSTs are ignored.
3. check if there is some output on the console (/var/log/messages).
4. Either provide the output or let me know that there is none.
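
For reference, the whole sequence might look like this (the sysctl and log file are as stated above; everything else is just one convenient way to do it):

  # turn on TCP debug logging
  sudo sysctl net.inet.tcp.log_debug=1
  # ... reproduce the ignored-RST situation ...
  # then look for TCP debug output
  grep -i tcp /var/log/messages
  # turn logging off again afterwards
  sudo sysctl net.inet.tcp.log_debug=0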

Best regards
Michael
>
> If the Send-Q is non-empty when I partition, it recovers fine,
> sometimes not even needing to see an RST.
>
> rick
> ps: If you think there might be other recent changes that matter,
>    just say the word and I'll upgrade to the bits du jour.
>
> rick
>
> Best regards
> Michael
>>
>> If I wait long enough before healing the partition, it will
>> go to FIN_WAIT_1, and then if I plug it back in, it does not
>> do battle (at least not for long).
>>
>> Btw, I have one running now that seems stuck really good.
>> It has been 20 minutes since I plugged the net cable back in.
>> (Unfortunately, I didn't have tcpdump running until after
>> I saw it was not progressing after healing.)
>> --> There is one difference. There was a 6-minute timeout
>>      enabled on the server krpc for "no activity", which is
>>      now disabled like it is for NFSv4.1 in freebsd-current.
>>      I had forgotten to re-disable it.
>> So, when it did do battle, it might have been the 6-minute
>> timeout, which would then do the soshutdown(..SHUT_WR),
>> which kept it from getting "stuck" forever.
>> --> This time I had to reboot the FreeBSD NFS server to
>>    get the Linux client unstuck, so this one looked a lot
>>    like what has been reported.
>> The pcap for this one, started after the network was plugged
>> back in and I noticed it was stuck for quite a while, is here:
>> fetch https://people.freebsd.org/~rmacklem/stuck.pcap
>>
>> In it, there is just a bunch of RSTs followed by SYNs sent
>> from client->FreeBSD, and FreeBSD just keeps sending
>> ACKs for the old segment back.
>> --> It looks like FreeBSD did the "RST, ACK" after the
>>      krpc did a soshutdown(..SHUT_WR) on the socket,
>>      for the one you've been looking at.
>> I'll test some more...
>>
>>> I would like to understand why the reestablishment of the connection
>>> did not work...
>> It is looking like it takes either a non-empty send-q or a
>> soshutdown(..SHUT_WR) to get the FreeBSD socket
>> out of ESTABLISHED, where it just ignores the RSTs and
>> SYN packets.
>>
>> Thanks for looking at it, rick
>>
>> Best regards
>> Michael
>>>
>>> Have fun with it, rick
>>>
>>>
>>> ________________________________________
>>> From: tuexen@freebsd.org <tuexen@freebsd.org>
>>> Sent: Sunday, April 4, 2021 12:41 PM
>>> To: Rick Macklem
>>> Cc: Scheffenegger, Richard; Youssef GHORBAL; freebsd-net@freebsd.org
>>> Subject: Re: NFS Mount Hangs
>>>
>>>
>>>> On 4. Apr 2021, at 17:27, Rick Macklem <rmacklem@uoguelph.ca> wrote:
>>>>
>>>> Well, I'm going to cheat and top post, since this is related info and
>>>> not really part of the discussion...
>>>>
>>>> I've been testing network partitioning between a Linux client (5.2 kernel)
>>>> and a FreeBSD-current NFS server. I have not gotten a solid hang, but
>>>> I have had the Linux client doing "battle" with the FreeBSD server for
>>>> several minutes after un-partitioning the connection.
>>>>
>>>> The battle basically consists of the Linux client sending an RST, followed
>>>> by a SYN.
>>>> The FreeBSD server ignores the RST and just replies with the same old ACK.
>>>> --> This varies from "just a SYN" that succeeds to 100+ cycles of the above
>>>>    over several minutes.
>>>>
>>>> I had thought that an RST was a "pretty heavy hammer", but FreeBSD seems
>>>> pretty good at ignoring it.
>>>>
>>>> A full packet capture of one of these is in /home/rmacklem/linuxtofreenfs.pcap
>>>> in case anyone wants to look at it.
>>> On freefall? I would like to take a look at it...
>>>
>>> Best regards
>>> Michael
>>>>
>>>> Here's a tcpdump snippet of the interesting part (see the *** comments):
>>>> 19:10:09.305775 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [P.], seq 202585:202749, ack 212293, win 29128, options [nop,nop,TS val 2073636037 ecr 2671204825], length 164: NFS reply xid 613153685 reply ok 160 getattr NON 4 ids 0/33554432 sz 0
>>>> 19:10:09.305850 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [.], ack 202749, win 501, options [nop,nop,TS val 2671204825 ecr 2073636037], length 0
>>>> *** Network is now partitioned...
>>>>
>>>> 19:10:09.407840 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 212293:212525, ack 202749, win 501, options [nop,nop,TS val 2671204927 ecr 2073636037], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
>>>> 19:10:09.615779 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 212293:212525, ack 202749, win 501, options [nop,nop,TS val 2671205135 ecr 2073636037], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
>>>> 19:10:09.823780 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 212293:212525, ack 202749, win 501, options [nop,nop,TS val 2671205343 ecr 2073636037], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
>>>> *** Lots of lines snipped.
>>>>
>>>>
>>>> 19:13:41.295783 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>>>> 19:13:42.319767 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>>>> 19:13:46.351966 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>>>> 19:13:47.375790 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>>>> 19:13:48.399786 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>>>> *** Network is now unpartitioned...
>>>>
>>>> 19:13:48.399990 ARP, Reply nfsv4-new3.home.rick is-at d4:be:d9:07:81:72 (oui Unknown), length 46
>>>> 19:13:48.400002 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 416692300, win 64240, options [mss 1460,sackOK,TS val 2671421871 ecr 0,nop,wscale 7], length 0
>>>> 19:13:48.400185 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 212293, win 29127, options [nop,nop,TS val 2073855137 ecr 2671204825], length 0
>>>> 19:13:48.400273 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [R], seq 964161458, win 0, length 0
>>>> 19:13:49.423833 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 416692300, win 64240, options [mss 1460,sackOK,TS val 2671424943 ecr 0,nop,wscale 7], length 0
>>>> 19:13:49.424056 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 212293, win 29127, options [nop,nop,TS val 2073856161 ecr 2671204825], length 0
>>>> *** This "battle" goes on for 223 sec...
>>>> I snipped out 13 cycles of this "Linux sends an RST, followed by SYN",
>>>> "FreeBSD replies with same old ACK". In another test run I saw this
>>>> cycle continue non-stop for several minutes. This time, the Linux
>>>> client paused for a while (see ARPs below).
>>>>
>>>> 19:13:49.424101 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [R], seq 964161458, win 0, length 0
>>>> 19:13:53.455867 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 416692300, win 64240, options [mss 1460,sackOK,TS val 2671428975 ecr 0,nop,wscale 7], length 0
>>>> 19:13:53.455991 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 212293, win 29127, options [nop,nop,TS val 2073860193 ecr 2671204825], length 0
>>>> *** Snipped a bunch of stuff out, mostly ARPs, plus one more RST.
>>>>
>>>> 19:16:57.775780 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>>>> 19:16:57.775937 ARP, Reply nfsv4-new3.home.rick is-at d4:be:d9:07:81:72 (oui Unknown), length 46
>>>> 19:16:57.980240 ARP, Request who-has nfsv4-new3.home.rick tell 192.168.1.254, length 46
>>>> 19:16:58.555663 ARP, Request who-has nfsv4-new3.home.rick tell 192.168.1.254, length 46
>>>> 19:17:00.104701 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [F.], seq 202749, ack 212293, win 29128, options [nop,nop,TS val 2074046846 ecr 2671204825], length 0
>>>> 19:17:15.664354 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [F.], seq 202749, ack 212293, win 29128, options [nop,nop,TS val 2074062406 ecr 2671204825], length 0
>>>> 19:17:31.239246 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [R.], seq 202750, ack 212293, win 0, options [nop,nop,TS val 2074077981 ecr 2671204825], length 0
>>>> *** FreeBSD finally acknowledges the RST 38 sec after Linux sent the last
>>>> of 13 (100+ for another test run).
>>>>
>>>> 19:17:51.535979 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 4247692373, win 64240, options [mss 1460,sackOK,TS val 2671667055 ecr 0,nop,wscale 7], length 0
>>>> 19:17:51.536130 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [S.], seq 661237469, ack 4247692374, win 65535, options [mss 1460,nop,wscale 6,sackOK,TS val 2074098278 ecr 2671667055], length 0
>>>> *** Now back in business...
>>>>
>>>> 19:17:51.536218 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [.], ack 1, win 502, options [nop,nop,TS val 2671667055 ecr 2074098278], length 0
>>>> 19:17:51.536295 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 1:233, ack 1, win 502, options [nop,nop,TS val 2671667056 ecr 2074098278], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
>>>> 19:17:51.536346 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 233:505, ack 1, win 502, options [nop,nop,TS val 2671667056 ecr 2074098278], length 272: NFS request xid 697039765 132 getattr fh 0,1/53
>>>> 19:17:51.536515 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 505, win 29128, options [nop,nop,TS val 2074098279 ecr 2671667056], length 0
>>>> 19:17:51.536553 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 505:641, ack 1, win 502, options [nop,nop,TS val 2671667056 ecr 2074098279], length 136: NFS request xid 730594197 132 getattr fh 0,1/53
>>>> 19:17:51.536562 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [P.], seq 1:49, ack 505, win 29128, options [nop,nop,TS val 2074098279 ecr 2671667056], length 48: NFS reply xid 697039765 reply ok 44 getattr ERROR: unk 10063
>>>>
>>>> This error 10063 (NFS4ERR_SEQ_MISORDERED) after the partition heals is
>>>> also "bad news". It indicates the Session (which is supposed to maintain
>>>> "exactly once" RPC semantics) is broken. I'll admit I suspect a Linux
>>>> client bug, but will be investigating further.
>>>>
>>>> So, hopefully TCP conversant folk can confirm whether the above is correct behaviour
>>>> or if the RST should be ack'd sooner?
>>>>
>>>> I could also see this becoming a "forever" TCP battle for other versions of the Linux client.
>>>>
>>>> rick
>>>>
>>>>
>>>> ________________________________________
>>>> From: Scheffenegger, Richard <Richard.Scheffenegger@netapp.com>
>>>> Sent: Sunday, April 4, 2021 7:50 AM
>>>> To: Rick Macklem; tuexen@freebsd.org
>>>> Cc: Youssef GHORBAL; freebsd-net@freebsd.org
>>>> Subject: Re: NFS Mount Hangs
>>>>
>>>>
>>>>
>>>> For what it's worth, SUSE found two bugs in the Linux nf_conntrack (stateful firewall) and the pfifo-fast scheduler, which could conspire to make TCP sessions hang forever.
>>>>
>>>> One is a missed update when the client is not using the noresvport mount option, which makes the firewall think RSTs are illegal (and drop them).
>>>>
>>>> The fast scheduler can run into an issue if only a single packet should be forwarded (note that this is not the default scheduler, but it is often recommended for performance, as it runs lockless and at lower CPU cost than pfq, the default). If no other/additional packet pushes out that last packet of a flow, it can become stuck forever...
>>>>
>>>> I can try getting the relevant bug info next week...
>>>>
>>>> ________________________________
>>>> From: owner-freebsd-net@freebsd.org <owner-freebsd-net@freebsd.org> on behalf of Rick Macklem <rmacklem@uoguelph.ca>
>>>> Sent: Friday, April 2, 2021 11:31:01 PM
>>>> To: tuexen@freebsd.org <tuexen@freebsd.org>
>>>> Cc: Youssef GHORBAL <youssef.ghorbal@pasteur.fr>; freebsd-net@freebsd.org <freebsd-net@freebsd.org>
>>>> Subject: Re: NFS Mount Hangs
>>>>
>>>>
>>>>
>>>>
>>>> tuexen@freebsd.org wrote:
>>>>>> On 2. Apr 2021, at 02:07, Rick Macklem <rmacklem@uoguelph.ca> wrote:
>>>>>>
>>>>>> I hope you don't mind a top post...
>>>>>> I've been testing network partitioning between the only Linux client
>>>>>> I have (5.2 kernel) and a FreeBSD server with the xprtdied.patch
>>>>>> (does soshutdown(..SHUT_WR) when it knows the socket is broken)
>>>>>> applied to it.
>>>>>>
>>>>>> I'm not enough of a TCP guy to know if this is useful, but here's what
>>>>>> I see...
>>>>>>
>>>>>> While partitioned:
>>>>>> On the FreeBSD server end, the socket either goes to CLOSED during
>>>>>> the network partition or stays ESTABLISHED.
>>>>> If it goes to CLOSED you called shutdown(, SHUT_WR) and the peer also
>>>>> sent a FIN, but you never called close() on the socket.
>>>>> If the socket stays in ESTABLISHED, there is no communication ongoing,
>>>>> I guess, and therefore the server does not even detect that the peer
>>>>> is not reachable.
>>>>>> On the Linux end, the socket seems to remain ESTABLISHED for a
>>>>>> little while, and then disappears.
>>>>> So how does Linux detect the peer is not reachable?
>>>> Well, here's what I see in a packet capture on the Linux client once
>>>> I partition it (just unplug the net cable):
>>>> - lots of retransmits of the same segment (with ACK) for 54 sec
>>>> - then only ARP queries
>>>>
>>>> Once I plug the net cable back in:
>>>> - ARP works
>>>> - one more retransmit of the same segment
>>>> - receives RST from FreeBSD
>>>> ** So, is this now a "new" TCP connection, despite
>>>> using the same port#?
>>>> --> It matters for NFS, since "new connection"
>>>>        implies "must retry all outstanding RPCs".
>>>> - sends SYN
>>>> - receives SYN, ACK from FreeBSD
>>>> --> connection starts working again
>>>> Always uses the same port#.
>>>>
>>>> On the FreeBSD server end:
>>>> - receives the last retransmit of the segment (with ACK)
>>>> - sends RST
>>>> - receives SYN
>>>> - sends SYN, ACK
>>>>
>>>> I thought that there was no RST in the capture I looked at
>>>> yesterday, so I'm not sure if FreeBSD always sends an RST,
>>>> but the Linux client behaviour was the same. (Sent a SYN, etc.)
>>>> The socket disappears from the Linux "netstat -a" and I
>>>> suspect that happens after about 54 sec, but I am not sure
>>>> about the timing.
>>>>
>>>>>>
>>>>>> After unpartitioning:
>>>>>> On the FreeBSD server end, you get another socket showing up at
>>>>>> the same port#
>>>>>> Active Internet connections (including servers)
>>>>>> Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
>>>>>> tcp4       0      0 nfsv4-new3.nfsd        nfsv4-linux.678        ESTABLISHED
>>>>>> tcp4       0      0 nfsv4-new3.nfsd        nfsv4-linux.678        CLOSED
>>>>>>
>>>>>> The Linux client shows the same connection ESTABLISHED.
>>>> But it disappears from "netstat -a" for a while during the partitioning.
>>>>
>>>>>> (The mount sometimes reports an error. I haven't looked at packet
>>>>>> traces to see if it retries RPCs or why the errors occur.)
>>>> I have now done so, as above.
>>>>
>>>>>> --> However I never get hangs.
>>>>>> Sometimes it goes to SYN_SENT for a while and the FreeBSD server
>>>>>> shows FIN_WAIT_1, but then both ends go to ESTABLISHED and the
>>>>>> mount starts working again.
>>>>>>
>>>>>> The most obvious thing is that the Linux client always keeps using
>>>>>> the same port#. (The FreeBSD client will use a different port# when
>>>>>> it does a TCP reconnect after no response from the NFS server for
>>>>>> a little while.)
>>>>>>
>>>>>> What do those TCP conversant folk think?
>>>>> I guess you are never calling close() on the socket for which
>>>>> the connection state is CLOSED.
>>>> Ok, that makes sense. For this case the Linux client has not done a
>>>> BindConnectionToSession to re-assign the back channel.
>>>> I'll have to bug them about this. However, I'll bet they'll answer
>>>> that I have to tell them the back channel needs re-assignment
>>>> or something like that.
>>>>
>>>> I am pretty certain they are broken, in that the client needs to
>>>> retry all outstanding RPCs.
>>>>
>>>> For others, here's the long-winded version of this that I just
>>>> put on the phabricator review:
>>>> In the server side kernel RPC, the socket (struct socket *) is in a
>>>> structure called SVCXPRT (normally pointed to by "xprt").
>>>> These structures are ref counted and the soclose() is done
>>>> when the ref cnt goes to zero. My understanding is that
>>>> "struct socket *" is free'd by soclose(), so this cannot be done
>>>> before the xprt ref cnt goes to zero.
>>>>
>>>> For NFSv4.1/4.2 there is something called a back channel,
>>>> which means that an "xprt" is used for server->client RPCs,
>>>> although the TCP connection is established by the client
>>>> to the server.
>>>> --> This back channel holds a ref cnt on "xprt" until the
>>>>     client re-assigns it to a different TCP connection
>>>>     via an operation called BindConnectionToSession,
>>>>     and the Linux client is not doing this soon enough,
>>>>     it appears.
>>>>
>>>> So, the soclose() is delayed, which is why I think the
>>>> TCP connection gets stuck in CLOSE_WAIT, and that is
>>>> why I've added the soshutdown(..SHUT_WR) calls,
>>>> which can happen before the client gets around to
>>>> re-assigning the back channel. (A sketch of the pattern follows.)
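>>>>
>>>> Roughly, the pattern is this (a simplified sketch, not the actual
>>>> krpc code; field and helper names are illustrative, and the usual
>>>> sys/rpc and socket headers are assumed):
>>>>
>>>> /* soclose() frees the struct socket, so it may only run once the
>>>>  * last reference (possibly held by the back channel) is gone. */
>>>> static void
>>>> xprt_release(SVCXPRT *xprt)
>>>> {
>>>>     if (refcount_release(&xprt->xp_refs)) {
>>>>         soclose(xprt->xp_socket);
>>>>         free(xprt, M_RPC);
>>>>     }
>>>> }
>>>>
>>>> /* When the connection is known to be dead, a shutdown for writing
>>>>  * can be done right away: it sends a FIN and lets TCP progress past
>>>>  * CLOSE_WAIT, even though soclose() must still wait for the refcnt
>>>>  * release that BindConnectionToSession eventually triggers. */
>>>> static void
>>>> xprt_conn_is_dead(SVCXPRT *xprt)
>>>> {
>>>>     soshutdown(xprt->xp_socket, SHUT_WR);
>>>>     xprt_release(xprt);    /* drop this caller's reference */
>>>> }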
>>>>
>>>> Thanks for your help with this Michael, rick
>>>>
>>>> Best regards
>>>> Michael
>>>>>
>>>>> rick
>>>>> ps: I can capture packets while doing this, if anyone has a use
>>>>>  for them.
>>>>>
>>>>> ________________________________________
>>>>> From: owner-freebsd-net@freebsd.org <owner-freebsd-net@freebsd.org> on behalf of Youssef GHORBAL <youssef.ghorbal@pasteur.fr>
>>>>> Sent: Saturday, March 27, 2021 6:57 PM
>>>>> To: Jason Breitman
>>>>> Cc: Rick Macklem; freebsd-net@freebsd.org
>>>>> Subject: Re: NFS Mount Hangs
>>>>>
>>>>>
>>>>> On 27 Mar 2021, at 13:20, Jason Breitman <jbreitman@tildenparkcapital.com> wrote:
>>>>>
>>>>> The issue happened again so we can say that disabling TSO and LRO on the NIC did not resolve this issue.
>>>>> # ifconfig lagg0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso
>>>>> # ifconfig lagg0
>>>>> lagg0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
>>>>>     options=8100b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWFILTER>
>>>>>
>>>>>=20
>>>>> We can also say that the sysctl settings did not resolve this =
issue.
>>>>>=20
>>>>> # sysctl net.inet.tcp.fast_finwait2_recycle=3D1
>>>>> net.inet.tcp.fast_finwait2_recycle: 0 -> 1
>>>>>=20
>>>>> # sysctl net.inet.tcp.finwait2_timeout=3D1000
>>>>> net.inet.tcp.finwait2_timeout: 60000 -> 1000
>>>>>=20
>>>>> I don=E2=80=99t think those will do anything in your case since =
the FIN_WAIT2 are on the client side and those sysctls are for BSD.
>>>>> By the way it seems that Linux recycles automatically TCP sessions =
in FIN_WAIT2 after 60 seconds (sysctl net.ipv4.tcp_fin_timeout)
>>>>>
>>>>> tcp_fin_timeout (integer; default: 60; since Linux 2.2)
>>>>>          This specifies how many seconds to wait for a final FIN
>>>>>          packet before the socket is forcibly closed.  This is
>>>>>          strictly a violation of the TCP specification, but
>>>>>          required to prevent denial-of-service attacks.  In Linux
>>>>>          2.2, the default value was 180.
>>>>>
>>>>> So I don't get why it gets stuck in the FIN_WAIT2 state anyway.
>>>>>
>>>>> You really need to have a packet capture during the outage (client and server side) so you'll get the over-the-wire chat and can start speculating from there.
>>>>> No need to capture the beginning of the outage for now. All you have to do is run a tcpdump for 10 minutes or so when you notice a client stuck.
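>>>>> Something along these lines would do (the interface name here is an assumption; 2049 is the standard NFS port):
>>>>>
>>>>> # full-packet capture of the stuck conversation; ^C after ~10 minutes
>>>>> tcpdump -i em0 -s 0 -w /tmp/nfs-outage.pcap host NFS.Server.IP.X and port 2049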
>>>>>
>>>>> * I have not rebooted the NFS Server nor have I restarted nfsd, but I do not believe that is required as these settings are at the TCP level and I would expect new sessions to use the updated settings.
>>>>>
>>>>> The issue occurred after 5 days following a reboot of the client machines.
>>>>> I ran the capture information again to make use of the situation.
>>>>>
>>>>> #!/bin/sh
>>>>>
>>>>> # Log the date, the nfsd process state, and the nfsd kernel stacks once a minute.
>>>>> while true
>>>>> do
>>>>> /bin/date >> /tmp/nfs-hang.log
>>>>> /bin/ps axHl | grep nfsd | grep -v grep >> /tmp/nfs-hang.log
>>>>> /usr/bin/procstat -kk 2947 >> /tmp/nfs-hang.log
>>>>> /usr/bin/procstat -kk 2944 >> /tmp/nfs-hang.log
>>>>> /bin/sleep 60
>>>>> done
>>>>>
>>>>>
>>>>> On the NFS Server
>>>>> Active Internet connections (including servers)
>>>>> Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
>>>>> tcp4       0      0 NFS.Server.IP.X.2049   NFS.Client.IP.X.48286  CLOSE_WAIT
>>>>>
>>>>> On the NFS Client
>>>>> tcp        0      0 NFS.Client.IP.X:48286  NFS.Server.IP.X:2049   FIN_WAIT2
>>>>>
>>>>>
>>>>>
>>>>> You had also asked for the output below.
>>>>>
>>>>> # nfsstat -E -s
>>>>> BackChannelCt  BindConnToSes
>>>>>        0            0
>>>>>
>>>>> # sysctl vfs.nfsd.request_space_throttle_count
>>>>> vfs.nfsd.request_space_throttle_count: 0
>>>>>
>>>>> I see that you are testing a patch and I look forward to seeing the results.
>>>>>
>>>>> Jason Breitman
>>>>>
>>>>> On Mar 21, 2021, at 6:21 PM, Rick Macklem <rmacklem@uoguelph.ca> wrote:
>>>>>
>>>>> Youssef GHORBAL <youssef.ghorbal@pasteur.fr> wrote:
>>>>>> Hi Jason,
>>>>>>
>>>>>>> On 17 Mar 2021, at 18:17, Jason Breitman <jbreitman@tildenparkcapital.com> wrote:
>>>>>>>
>>>>>>> Please review the details below and let me know if there is a setting that I should apply to my FreeBSD NFS Server or if there is a bug fix that I can apply to resolve my issue.
>>>>>>> I shared this information with the linux-nfs mailing list and they believe the issue is on the server side.
>>>>>>>
>>>>>>> Issue
>>>>>>> NFSv4 mounts periodically hang on the NFS Client.
>>>>>>>
>>>>>>> During this time, it is possible to manually mount from another NFS Server on the NFS Client having issues.
>>>>>>> Also, other NFS Clients are successfully mounting from the NFS Server in question.
>>>>>>> Rebooting the NFS Client appears to be the only solution.
>>>>>>
>>>>>> I had experienced a similar weird situation with periodically stuck Linux NFS clients mounting Isilon NFS servers (Isilon is FreeBSD based but they seem to have their own nfsd).
>>>>> Yes, my understanding is that Isilon uses a proprietary user space nfsd and
>>>>> not the kernel based RPC and nfsd in FreeBSD.
>>>>>
>>>>>> We've had better luck and we did manage to have packet captures on both sides during the issue. The gist of it goes as follows:
>>>>>>
>>>>>> - Data flows correctly between SERVER and the CLIENT
>>>>>> - At some point SERVER starts decreasing its TCP Receive Window until it reaches 0
>>>>>> - The client (eager to send data) can only ack data sent by SERVER.
>>>>>> - When SERVER was done sending data, the client starts sending TCP Window Probes hoping that the TCP Window opens again so it can flush its buffers.
>>>>>> - SERVER responds with a TCP Zero Window to those probes.
>>>>> Having the window size drop to zero is not necessarily incorrect.
>>>>> If the server is overloaded (has a backlog of NFS requests), it can stop doing
>>>>> soreceive() on the socket (so the socket rcv buffer can fill up and the TCP window
>>>>> closes). This results in "backpressure" to stop the NFS client from flooding the
>>>>> NFS server with requests.
>>>>> --> However, once the backlog is handled, the nfsd should start to soreceive()
>>>>> again and this should cause the window to open back up.
>>>>> --> Maybe this is broken in the socket/TCP code. I quickly got lost in
>>>>> tcp_output() when it decides what to do about the rcvwin.
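>>>>> A rough way to watch for this from userspace on the server, with standard tools (the interface name here is an assumption):
>>>>> # Recv-Q staying full while the window is closed:
>>>>> netstat -an | grep "\.2049"
>>>>> # zero-window segments on the wire; bytes 14-15 of the TCP header hold the window:
>>>>> tcpdump -n -i em0 'tcp port 2049 and tcp[14:2] = 0'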
>>>>>
>>>>>> - After 6 minutes (the NFS server default idle timeout) SERVER gracefully closes the TCP connection, sending a FIN Packet (and still a TCP Window of 0)
>>>>> This probably does not happen for Jason's case, since the 6-minute timeout
>>>>> is disabled when the TCP connection is assigned as a backchannel (most likely
>>>>> the case for NFSv4.1).
>>>>>
>>>>>> - CLIENT ACKs that FIN.
>>>>>> - SERVER goes into the FIN_WAIT_2 state.
>>>>>> - CLIENT closes its half of the socket and goes into the LAST_ACK state.
>>>>>> - FIN is never sent by the client since there is still data in its SendQ and the receiver's TCP Window is still 0. At this stage the client starts sending TCP Window Probes again and again, hoping that the server opens its TCP Window so it can flush its buffers and terminate its side of the socket.
>>>>>> - SERVER keeps responding with a TCP Zero Window to those probes.
>>>>>> => The last two steps go on and on for hours/days, freezing the NFS mount bound to that TCP session.
>>>>>>
>>>>>> If we had a situation where CLIENT was responsible for closing the TCP Window (and initiating the TCP FIN first) and the server wanted to send data, we'd end up in the same state as you, I think.
>>>>>>
>>>>>> We've never had the root cause of why the SERVER decided to close the TCP Window and no longer accept data; the fix on the Isilon part was to recycle the FIN_WAIT_2 sockets more aggressively (net.inet.tcp.fast_finwait2_recycle=1 & net.inet.tcp.finwait2_timeout=5000). Once the socket is recycled, at the next occurrence of a CLIENT TCP Window probe, SERVER sends a RST, triggering the teardown of the session on the client side, a new TCP handshake, etc., and traffic flows again (NFS starts responding).
>>>>>>
>>>>>> To avoid rebooting the client (and before the aggressive FIN_WAIT_2 recycling was implemented on the Isilon side) we've added a check script on the client that detects LAST_ACK sockets on the client and through an iptables rule enforces a TCP RST, something like: -A OUTPUT -p tcp -d $nfs_server_addr --sport $local_port -j REJECT --reject-with tcp-reset (the script removes this iptables rule as soon as the LAST_ACK disappears). A sketch of such a script is below.
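>>>>>>
>>>>>> A minimal sketch of such a watchdog (the address, the state matching and the timings here are illustrative, not our exact script):
>>>>>>
>>>>>> #!/bin/sh
>>>>>> nfs_server_addr=NFS.Server.IP.X
>>>>>> while true; do
>>>>>>     # local port of a LAST_ACK socket towards the server, if any
>>>>>>     local_port=$(netstat -tn | awk -v s="$nfs_server_addr" \
>>>>>>         '$6 == "LAST_ACK" && index($5, s) { split($4, a, ":"); print a[2]; exit }')
>>>>>>     if [ -n "$local_port" ]; then
>>>>>>         # force a RST for this stuck socket
>>>>>>         iptables -A OUTPUT -p tcp -d "$nfs_server_addr" --sport "$local_port" -j REJECT --reject-with tcp-reset
>>>>>>         # give the RST a chance to tear the socket down, then drop the rule
>>>>>>         sleep 10
>>>>>>         iptables -D OUTPUT -p tcp -d "$nfs_server_addr" --sport "$local_port" -j REJECT --reject-with tcp-reset
>>>>>>     fi
>>>>>>     sleep 30
>>>>>> done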
>>>>>>
>>>>>> The bottom line would be to have a packet capture during the outage (client and/or server side); it will show you at least the shape of the TCP exchange when NFS is stuck.
>>>>> Interesting story and good work w.r.t. sleuthing, Youssef, thanks.
>>>>>
>>>>> I looked at Jason's log and it shows everything is ok w.r.t. the nfsd threads.
>>>>> (They're just waiting for RPC requests.)
>>>>> However, I do now think I know why the soclose() does not happen.
>>>>> When the TCP connection is assigned as a backchannel, that takes a reference
>>>>> cnt on the structure. This refcnt won't be released until the connection is
>>>>> replaced by a BindConnectionToSession operation from the client. But that won't
>>>>> happen until the client creates a new TCP connection.
>>>>> --> No refcnt release --> no refcnt of 0 --> no soclose().
>>>>>
>>>>> I've created the attached patch (completely different from the previous one)
>>>>> that adds soshutdown(SHUT_WR) calls in the three places where the TCP
>>>>> connection is going away. This seems to get it past CLOSE_WAIT without a
>>>>> soclose().
>>>>> --> I know you are not comfortable with patching your server, but I do think
>>>>> this change will get the socket shutdown to complete.
>>>>>
>>>>> There are a couple more things you can check on the server...
>>>>> # nfsstat -E -s
>>>>> --> Look for the count under "BindConnToSes".
>>>>> --> If non-zero, backchannels have been assigned.
>>>>> # sysctl -a | fgrep request_space_throttle_count
>>>>> --> If non-zero, the server has been overloaded at some point.
>>>>>
>>>>> I think the attached patch might work around the problem.
>>>>> The code that should open up the receive window needs to be checked.
>>>>> I am also looking at enabling the 6-minute timeout when a backchannel is
>>>>> assigned.
>>>>>
>>>>> rick
>>>>>
>>>>> Youssef
>>>>>
>>>>> _______________________________________________
>>>>> freebsd-net@freebsd.org mailing list
>>>>> https://lists.freebsd.org/mailman/listinfo/freebsd-net
>>>>> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
>>>>> <xprtdied.patch>
>>>>>
>>>>> <nfs-hang.log.gz>
>>>>>
>>>
>>
>



