Date:      Sat, 10 Apr 2021 15:04:16 +0000
From:      Rick Macklem <rmacklem@uoguelph.ca>
To:        "tuexen@freebsd.org" <tuexen@freebsd.org>
Cc:        "Scheffenegger, Richard" <Richard.Scheffenegger@netapp.com>, Youssef GHORBAL <youssef.ghorbal@pasteur.fr>, "freebsd-net@freebsd.org" <freebsd-net@freebsd.org>
Subject:   Re: NFS Mount Hangs
Message-ID:  <YQXPR0101MB09682EC465091A378DD679CEDD729@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>
In-Reply-To: <8B7C867D-54A5-4EFA-B5BC-CA63FFC1EA77@freebsd.org>
References:  <C643BB9C-6B61-4DAC-8CF9-CE04EA7292D0@tildenparkcapital.com> <3750001D-3F1C-4D9A-A9D9-98BCA6CA65A4@tildenparkcapital.com> <33693DE3-7FF8-4FAB-9A75-75576B88A566@tildenparkcapital.com> <D67AF317-D238-4EC0-8C7F-22D54AD5144C@pasteur.fr> <YQXPR0101MB09684AB7BEFA911213604467DD669@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM> <C87066D3-BBF1-44E1-8398-E4EB6903B0F2@tildenparkcapital.com> <8E745920-1092-4312-B251-B49D11FE8028@pasteur.fr> <YQXPR0101MB0968C44C7C82A3EB64F384D0DD7B9@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM> <DEF8564D-0FE9-4C2C-9F3B-9BCDD423377C@freebsd.org> <YQXPR0101MB0968E0A17D8BCACFAF132225DD7A9@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM> <SN4PR0601MB3728E392BCA494EAD49605FE86789@SN4PR0601MB3728.namprd06.prod.outlook.com> <YQXPR0101MB09686B4F921B96DCAFEBF874DD789@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM> <765CE1CD-6AAB-4BEF-97C6-C2A1F0FF4AC5@freebsd.org> <YQXPR0101MB096876B44F33BAD8991B62C8DD789@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM> <2B189169-C0C9-4DE6-A01A-BE916F10BABA@freebsd.org> <YQXPR0101MB09688645194907BBAA6E7C7ADD789@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM> <BF5D23D3-5DBD-4E29-9C6B-F4CCDC205353@freebsd.org> <YQXPR0101MB096826445C85921C8F6410A2DD779@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM> <E4A51EAD-8F9A-49BB-8852-F9D61BDD9EA4@freebsd.org> <YQXPR0101MB09682F230F25FBF3BC427135DD729@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>, <8B7C867D-54A5-4EFA-B5BC-CA63FFC1EA77@freebsd.org>

tuexen@freebsd.org wrote:
>> On 10. Apr 2021, at 02:44, Rick Macklem <rmacklem@uoguelph.ca> wrote:
>>
>> tuexen@freebsd.org wrote:
>>>> On 6. Apr 2021, at 01:24, Rick Macklem <rmacklem@uoguelph.ca> wrote:
>>>>
>>>> tuexen@freebsd.org wrote:
>>>> [stuff snipped]
>>>>> OK. What is the FreeBSD version you are using?
>>>> main Dec. 23, 2020.
>>>>
>>>>>
>>>>> It seems that the TCP connection on the FreeBSD side is still alive,
>>>>> Linux has decided to start a new TCP connection using the old
>>>>> port numbers. So it sends a SYN. The response is a challenge ACK
>>>>> and Linux responds with an RST. This looks good so far. However,
>>>>> FreeBSD should accept the RST and kill the TCP connection. The
>>>>> next SYN from the Linux side would establish a new TCP connection.
>>>>>
>>>>> So I'm wondering why the RST is not accepted. I made the timestamp
>>>>> checking stricter but introduced a bug where RST segments without
>>>>> timestamps were ignored. This was fixed.
>>>>>
>>>>> Introduced in main on 2020/11/09:
>>>>> https://svnweb.freebsd.org/changeset/base/367530
>>>>> Introduced in stable/12 on 2020/11/30:
>>>>> https://svnweb.freebsd.org/changeset/base/36818
>>>>> Fix in main on 2021/01/13:
>>>>> https://cgit.FreeBSD.org/src/commit/?id=cc3c34859eab1b317d0f38731355b53f7d978c97
>>>>> Fix in stable/12 on 2021/01/24:
>>>>> https://cgit.FreeBSD.org/src/commit/?id=d05d908d6d3c85479c84c707f931148439ae826b
>>>>>
>>>>> Are you using a version which is affected by this bug?
>>>> I was. Now I've applied the patch.
>>>> Bad news. It did not fix the problem.
>>>> It still gets into an endless "ignore RST" and stays established when
>>>> the Send-Q is empty.
>>> OK. Let us focus on this case.
>>>
>>> Could you:
>>> 1. sudo sysctl net.inet.tcp.log_debug=1
>>> 2. repeat the situation where RSTs are ignored.
>>> 3. check if there is some output on the console (/var/log/messages).
>>> 4. Either provide the output or let me know that there is none.
>> Well, I have some good news and some bad news (the bad is mostly for Richard).
>> The only message logged is:
>>  tcpflags 0x4<RST>; tcp_do_segment: Timestamp missing, segment processed normally
>>
>> But...the RST battle no longer occurs. Just one RST that works and then
>> the SYN gets SYN,ACK'd by the FreeBSD end and off it goes...
>The above is what I would expect if you integrated cc3c34859eab1b317d0f38731355b53f7d978c97
>or reverted r367530. Did you do that?
r367530 is in the kernel that does not cause the "RST battle".

>
>
> So, what is different?
>
> r367492 is reverted from the FreeBSD server.
Only that? So you still have the bug I introduced in tree, but the RST segment is accepted?
No. The kernel being tested includes the fix (you committed mid-January) for the bug
that went in in Nov.
However, adding the mid-January patch did not fix the problem.
Then reverting r367492 (and only r367492) made the problem go away.

I did not expect reverting r367492 to affect this.
I reverted r367492 because otis@ gets Linux client mounts "stuck" against a FreeBSD13
NFS server, where the Recv-Q size grows and the client gets no RPC replies. Other
clients are still working fine. I can only think of one explanation for this:
- An upcall gets missed or occurs at the wrong time.
--> Since what this patch does is move where the upcalls are done, it is the logical
      culprit.
      Hopefully otis@ will be able to determine if reverting r367492 fixes the problem.
      This will take weeks, since the problem recently took two weeks to recur.
      --> This would be the receive path, so reverting the send path would not be
            relevant.
*** I'd like to hear from otis@ before testing a "send path only" revert.
--> Also, it has been a long time since I worked on the socket upcall code, but I
      vaguely remember that the upcalls needed to be done before SOCKBUF_LOCK()
      is dropped, to ensure that the socket buffer is in the expected state.
      r367492 drops SOCKBUF_LOCK() and then picks it up again for the upcalls.

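The locking concern above can be illustrated with a toy userspace model. This is only a sketch using plain pthreads with invented names (sb_lock, sb_bytes, upcall), not the actual krpc or sockbuf code: it shows why running the upcall while still holding the lock guarantees the upcall observes the buffer exactly as the appender left it, while dropping and re-taking the lock opens a window in which another thread could change the buffer first.

```c
#include <pthread.h>

/* Toy model: "sb_lock" stands in for SOCKBUF_LOCK() and "upcall" for
 * the krpc socket upcall.  All names are invented for illustration. */
static pthread_mutex_t sb_lock = PTHREAD_MUTEX_INITIALIZER;
static int sb_bytes = 0;   /* bytes queued in the model socket buffer */
static int upcall_saw;     /* what the last upcall observed */

static void upcall(void)
{
    upcall_saw = sb_bytes;
}

/* Old ordering: the upcall runs before the lock is dropped, so it is
 * guaranteed to see the buffer exactly as the appender left it. */
static void append_then_upcall_locked(int n)
{
    pthread_mutex_lock(&sb_lock);
    sb_bytes += n;
    upcall();                     /* still holding the lock */
    pthread_mutex_unlock(&sb_lock);
}

/* r367492-style ordering: the lock is dropped and re-acquired for the
 * upcall; in between, another thread could consume or append data, so
 * the upcall may observe a different state than the appender left. */
static void append_drop_then_upcall(int n)
{
    pthread_mutex_lock(&sb_lock);
    sb_bytes += n;
    pthread_mutex_unlock(&sb_lock);
    /* <-- race window: sb_bytes can change here */
    pthread_mutex_lock(&sb_lock);
    upcall();
    pthread_mutex_unlock(&sb_lock);
}
```

Single-threaded, both orderings give the same answer; the point is only that the second version has a window in which the invariant can break.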
I'll send you guys the otis@ problem email. (I don't think that one is cc'd to a list.)

rick

Best regards
Michael
> I did the revert because I think it might be what otis@'s hang is being
> caused by. (In his case, the Recv-Q grows on the socket for the
> stuck Linux client, while others work.)
>
> Why does reverting fix this?
> My only guess is that the krpc gets the upcall right away and sees
> an EPIPE when it does soreceive(), resulting in soshutdown(SHUT_WR).
> I know from a printf that this happened, but whether it caused the
> RST battle to not happen, I don't know.
>
> I can put r367492 back in and do more testing if you'd like, but
> I think it probably needs to be reverted?
>
> This does not explain the original hung Linux client problem,
> but does shed light on the RST war I could create by doing a
> network partitioning.
>
> rick
>
> Best regards
> Michael
>>
>> If the Send-Q is non-empty when I partition, it recovers fine,
>> sometimes not even needing to see an RST.
>>
>> rick
>> ps: If you think there might be other recent changes that matter,
>>   just say the word and I'll upgrade to bits du jour.
>>
>> rick
>>
>> Best regards
>> Michael
>>>
>>> If I wait long enough before healing the partition, it will
>>> go to FIN_WAIT_1, and then if I plug it back in, it does not
>>> do battle (at least not for long).
>>>
>>> Btw, I have one running now that seems stuck really good.
>>> It has been 20 minutes since I plugged the net cable back in.
>>> (Unfortunately, I didn't have tcpdump running until after
>>> I saw it was not progressing after healing.)
>>> --> There is one difference. There was a 6-minute timeout
>>>     enabled on the server krpc for "no activity", which is
>>>     now disabled like it is for NFSv4.1 in freebsd-current.
>>>     I had forgotten to re-disable it.
>>> So, when it does battle, it might have been the 6-minute
>>> timeout, which would then do the soshutdown(..SHUT_WR)
>>> which kept it from getting "stuck" forever.
>>> --> This time I had to reboot the FreeBSD NFS server to
>>>   get the Linux client unstuck, so this one looked a lot
>>>   like what has been reported.
>>> The pcap for this one, started after the network was plugged
>>> back in and I noticed it was stuck for quite a while, is here:
>>> fetch https://people.freebsd.org/~rmacklem/stuck.pcap
>>>
>>> In it, there is just a bunch of RSTs followed by SYNs sent
>>> from client->FreeBSD, and FreeBSD just keeps sending
>>> acks for the old segment back.
>>> --> It looks like FreeBSD did the "RST, ACK" after the
>>>     krpc did a soshutdown(..SHUT_WR) on the socket,
>>>     for the one you've been looking at.
>>> I'll test some more...
>>>
>>>> I would like to understand why the reestablishment of the connection
>>>> did not work...
>>> It is looking like it takes either a non-empty send-q or a
>>> soshutdown(..SHUT_WR) to get the FreeBSD socket
>>> out of established, where it just ignores the RSTs and
>>> SYN packets.
>>>
>>> Thanks for looking at it, rick
>>>
>>> Best regards
>>> Michael
>>>>
>>>> Have fun with it, rick
>>>>
>>>>
>>>> ________________________________________
>>>> From: tuexen@freebsd.org <tuexen@freebsd.org>
>>>> Sent: Sunday, April 4, 2021 12:41 PM
>>>> To: Rick Macklem
>>>> Cc: Scheffenegger, Richard; Youssef GHORBAL; freebsd-net@freebsd.org
>>>> Subject: Re: NFS Mount Hangs
>>>>
>>>> CAUTION: This email originated from outside of the University of Guelph. Do not click links or open attachments unless you recognize the sender and know the content is safe. If in doubt, forward suspicious emails to IThelp@uoguelph.ca
>>>>
>>>>
>>>>> On 4. Apr 2021, at 17:27, Rick Macklem <rmacklem@uoguelph.ca> wrote:
>>>>>
>>>>> Well, I'm going to cheat and top post, since this is related info and
>>>>> not really part of the discussion...
>>>>>
>>>>> I've been testing network partitioning between a Linux client (5.2 kernel)
>>>>> and a FreeBSD-current NFS server. I have not gotten a solid hang, but
>>>>> I have had the Linux client doing "battle" with the FreeBSD server for
>>>>> several minutes after un-partitioning the connection.
>>>>>
>>>>> The battle basically consists of the Linux client sending an RST, followed
>>>>> by a SYN.
>>>>> The FreeBSD server ignores the RST and just replies with the same old ack.
>>>>> --> This varies from "just a SYN" that succeeds to 100+ cycles of the above
>>>>>   over several minutes.
>>>>>
>>>>> I had thought that an RST was a "pretty heavy hammer", but FreeBSD seems
>>>>> pretty good at ignoring it.
>>>>>
>>>>> A full packet capture of one of these is in /home/rmacklem/linuxtofreenfs.pcap
>>>>> in case anyone wants to look at it.
>>>> On freefall? I would like to take a look at it...
>>>>
>>>> Best regards
>>>> Michael
>>>>>
>>>>> Here's a tcpdump snippet of the interesting part (see the *** comments):
>>>>> 19:10:09.305775 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [P.], seq 202585:202749, ack 212293, win 29128, options [nop,nop,TS val 2073636037 ecr 2671204825], length 164: NFS reply xid 613153685 reply ok 160 getattr NON 4 ids 0/33554432 sz 0
>>>>> 19:10:09.305850 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [.], ack 202749, win 501, options [nop,nop,TS val 2671204825 ecr 2073636037], length 0
>>>>> *** Network is now partitioned...
>>>>>
>>>>> 19:10:09.407840 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 212293:212525, ack 202749, win 501, options [nop,nop,TS val 2671204927 ecr 2073636037], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
>>>>> 19:10:09.615779 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 212293:212525, ack 202749, win 501, options [nop,nop,TS val 2671205135 ecr 2073636037], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
>>>>> 19:10:09.823780 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 212293:212525, ack 202749, win 501, options [nop,nop,TS val 2671205343 ecr 2073636037], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
>>>>> *** Lots of lines snipped.
>>>>>
>>>>>
>>>>> 19:13:41.295783 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>>>>> 19:13:42.319767 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>>>>> 19:13:46.351966 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>>>>> 19:13:47.375790 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>>>>> 19:13:48.399786 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>>>>> *** Network is now unpartitioned...
>>>>>
>>>>> 19:13:48.399990 ARP, Reply nfsv4-new3.home.rick is-at d4:be:d9:07:81:72 (oui Unknown), length 46
>>>>> 19:13:48.400002 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 416692300, win 64240, options [mss 1460,sackOK,TS val 2671421871 ecr 0,nop,wscale 7], length 0
>>>>> 19:13:48.400185 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 212293, win 29127, options [nop,nop,TS val 2073855137 ecr 2671204825], length 0
>>>>> 19:13:48.400273 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [R], seq 964161458, win 0, length 0
>>>>> 19:13:49.423833 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 416692300, win 64240, options [mss 1460,sackOK,TS val 2671424943 ecr 0,nop,wscale 7], length 0
>>>>> 19:13:49.424056 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 212293, win 29127, options [nop,nop,TS val 2073856161 ecr 2671204825], length 0
>>>>> *** This "battle" goes on for 223sec...
>>>>> I snipped out 13 cycles of this "Linux sends an RST, followed by SYN",
>>>>> "FreeBSD replies with same old ACK". In another test run I saw this
>>>>> cycle continue non-stop for several minutes. This time, the Linux
>>>>> client paused for a while (see ARPs below).
>>>>>
>>>>> 19:13:49.424101 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [R], seq 964161458, win 0, length 0
>>>>> 19:13:53.455867 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 416692300, win 64240, options [mss 1460,sackOK,TS val 2671428975 ecr 0,nop,wscale 7], length 0
>>>>> 19:13:53.455991 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 212293, win 29127, options [nop,nop,TS val 2073860193 ecr 2671204825], length 0
>>>>> *** Snipped a bunch of stuff out, mostly ARPs, plus one more RST.
>>>>>
>>>>> 19:16:57.775780 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>>>>> 19:16:57.775937 ARP, Reply nfsv4-new3.home.rick is-at d4:be:d9:07:81:72 (oui Unknown), length 46
>>>>> 19:16:57.980240 ARP, Request who-has nfsv4-new3.home.rick tell 192.168.1.254, length 46
>>>>> 19:16:58.555663 ARP, Request who-has nfsv4-new3.home.rick tell 192.168.1.254, length 46
>>>>> 19:17:00.104701 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [F.], seq 202749, ack 212293, win 29128, options [nop,nop,TS val 2074046846 ecr 2671204825], length 0
>>>>> 19:17:15.664354 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [F.], seq 202749, ack 212293, win 29128, options [nop,nop,TS val 2074062406 ecr 2671204825], length 0
>>>>> 19:17:31.239246 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [R.], seq 202750, ack 212293, win 0, options [nop,nop,TS val 2074077981 ecr 2671204825], length 0
>>>>> *** FreeBSD finally acknowledges the RST 38sec after Linux sent the last
>>>>> of 13 (100+ for another test run).
>>>>>
>>>>> 19:17:51.535979 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 4247692373, win 64240, options [mss 1460,sackOK,TS val 2671667055 ecr 0,nop,wscale 7], length 0
>>>>> 19:17:51.536130 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [S.], seq 661237469, ack 4247692374, win 65535, options [mss 1460,nop,wscale 6,sackOK,TS val 2074098278 ecr 2671667055], length 0
>>>>> *** Now back in business...
>>>>>
>>>>> 19:17:51.536218 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [.], ack 1, win 502, options [nop,nop,TS val 2671667055 ecr 2074098278], length 0
>>>>> 19:17:51.536295 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 1:233, ack 1, win 502, options [nop,nop,TS val 2671667056 ecr 2074098278], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
>>>>> 19:17:51.536346 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 233:505, ack 1, win 502, options [nop,nop,TS val 2671667056 ecr 2074098278], length 272: NFS request xid 697039765 132 getattr fh 0,1/53
>>>>> 19:17:51.536515 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 505, win 29128, options [nop,nop,TS val 2074098279 ecr 2671667056], length 0
>>>>> 19:17:51.536553 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 505:641, ack 1, win 502, options [nop,nop,TS val 2671667056 ecr 2074098279], length 136: NFS request xid 730594197 132 getattr fh 0,1/53
>>>>> 19:17:51.536562 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [P.], seq 1:49, ack 505, win 29128, options [nop,nop,TS val 2074098279 ecr 2671667056], length 48: NFS reply xid 697039765 reply ok 44 getattr ERROR: unk 10063
>>>>>
>>>>> This error 10063 after the partition heals is also "bad news". It indicates the Session
>>>>> (which is supposed to maintain "exactly once" RPC semantics) is broken. I'll admit I
>>>>> suspect a Linux client bug, but will be investigating further.
>>>>>
>>>>> So, hopefully TCP conversant folk can confirm if the above is correct behaviour
>>>>> or if the RST should be ack'd sooner?
>>>>>
>>>>> I could also see this becoming a "forever" TCP battle for other versions of Linux client.
>>>>>
>>>>> rick
>>>>>
>>>>>
>>>>> ________________________________________
>>>>> From: Scheffenegger, Richard <Richard.Scheffenegger@netapp.com>
>>>>> Sent: Sunday, April 4, 2021 7:50 AM
>>>>> To: Rick Macklem; tuexen@freebsd.org
>>>>> Cc: Youssef GHORBAL; freebsd-net@freebsd.org
>>>>> Subject: Re: NFS Mount Hangs
>>>>>
>>>>>
>>>>>
>>>>> For what it's worth, SUSE found two bugs in the Linux nf_conntrack (stateful firewall) and the pfifo-fast scheduler, which could conspire to make TCP sessions hang forever.
>>>>>
>>>>> One is a missed update when the client is not using the noresvport mount option, which makes the firewall think RSTs are illegal (and drop them);
>>>>>
>>>>> The fast scheduler can run into an issue if only a single packet should be forwarded (note that this is not the default scheduler, but often recommended for perf, as it runs lockless and at lower CPU cost than pfq (the default)). If no other/additional packet pushes out that last packet of a flow, it can become stuck forever...
>>>>>
>>>>> I can try getting the relevant bug info next week...
>>>>>
>>>>> ________________________________
>>>>> From: owner-freebsd-net@freebsd.org <owner-freebsd-net@freebsd.org> on behalf of Rick Macklem <rmacklem@uoguelph.ca>
>>>>> Sent: Friday, April 2, 2021 11:31:01 PM
>>>>> To: tuexen@freebsd.org <tuexen@freebsd.org>
>>>>> Cc: Youssef GHORBAL <youssef.ghorbal@pasteur.fr>; freebsd-net@freebsd.org <freebsd-net@freebsd.org>
>>>>> Subject: Re: NFS Mount Hangs
>>>>>
>>>>> NetApp Security WARNING: This is an external email. Do not click links or open attachments unless you recognize the sender and know the content is safe.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> tuexen@freebsd.org wrote:
>>>>>>> On 2. Apr 2021, at 02:07, Rick Macklem <rmacklem@uoguelph.ca> wrote:
>>>>>>>
>>>>>>> I hope you don't mind a top post...
>>>>>>> I've been testing network partitioning between the only Linux client
>>>>>>> I have (5.2 kernel) and a FreeBSD server with the xprtdied.patch
>>>>>>> (does soshutdown(..SHUT_WR) when it knows the socket is broken)
>>>>>>> applied to it.
>>>>>>>
>>>>>>> I'm not enough of a TCP guy to know if this is useful, but here's what
>>>>>>> I see...
>>>>>>>
>>>>>>> While partitioned:
>>>>>>> On the FreeBSD server end, the socket either goes to CLOSED during
>>>>>>> the network partition or stays ESTABLISHED.
>>>>>> If it goes to CLOSED you called shutdown(, SHUT_WR) and the peer also
>>>>>> sent a FIN, but you never called close() on the socket.
>>>>>> If the socket stays in ESTABLISHED, there is no communication ongoing,
>>>>>> I guess, and therefore the server does not even detect that the peer
>>>>>> is not reachable.
>>>>>>> On the Linux end, the socket seems to remain ESTABLISHED for a
>>>>>>> little while, and then disappears.
>>>>>> So how does Linux detect the peer is not reachable?
>>>>> Well, here's what I see in a packet capture in the Linux client once
>>>>> I partition it (just unplug the net cable):
>>>>> - lots of retransmits of the same segment (with ACK) for 54sec
>>>>> - then only ARP queries
>>>>>
>>>>> Once I plug the net cable back in:
>>>>> - ARP works
>>>>> - one more retransmit of the same segment
>>>>> - receives RST from FreeBSD
>>>>> ** So, is this now a "new" TCP connection, despite
>>>>> using the same port#?
>>>>> --> It matters for NFS, since "new connection"
>>>>>       implies "must retry all outstanding RPCs".
>>>>> - sends SYN
>>>>> - receives SYN, ACK from FreeBSD
>>>>> --> connection starts working again
>>>>> Always uses same port#.
>>>>>
>>>>> On the FreeBSD server end:
>>>>> - receives the last retransmit of the segment (with ACK)
>>>>> - sends RST
>>>>> - receives SYN
>>>>> - sends SYN, ACK
>>>>>
>>>>> I thought that there was no RST in the capture I looked at
>>>>> yesterday, so I'm not sure if FreeBSD always sends an RST,
>>>>> but the Linux client behaviour was the same. (Sent a SYN, etc.)
>>>>> The socket disappears from the Linux "netstat -a" and I
>>>>> suspect that happens after about 54sec, but I am not sure
>>>>> about the timing.
>>>>>
>>>>>>>
>>>>>>> After unpartitioning:
>>>>>>> On the FreeBSD server end, you get another socket showing up at
>>>>>>> the same port#
>>>>>>> Active Internet connections (including servers)
>>>>>>> Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
>>>>>>> tcp4       0      0 nfsv4-new3.nfsd        nfsv4-linux.678        ESTABLISHED
>>>>>>> tcp4       0      0 nfsv4-new3.nfsd        nfsv4-linux.678        CLOSED
>>>>>>>
>>>>>>> The Linux client shows the same connection ESTABLISHED.
>>>>> But disappears from "netstat -a" for a while during the partitioning.
>>>>>
>>>>>>> (The mount sometimes reports an error. I haven't looked at packet
>>>>>>> traces to see if it retries RPCs or why the errors occur.)
>>>>> I have now done so, as above.
>>>>>
>>>>>>> --> However I never get hangs.
>>>>>>> Sometimes it goes to SYN_SENT for a while and the FreeBSD server
>>>>>>> shows FIN_WAIT_1, but then both ends go to ESTABLISHED and the
>>>>>>> mount starts working again.
>>>>>>>
>>>>>>> The most obvious thing is that the Linux client always keeps using
>>>>>>> the same port#. (The FreeBSD client will use a different port# when
>>>>>>> it does a TCP reconnect after no response from the NFS server for
>>>>>>> a little while.)
>>>>>>>
>>>>>>> What do those TCP conversant think?
>>>>>> I guess you are never calling close() on the socket for which
>>>>>> the connection state is CLOSED.
>>>>> Ok, that makes sense. For this case the Linux client has not done a
>>>>> BindConnectionToSession to re-assign the back channel.
>>>>> I'll have to bug them about this. However, I'll bet they'll answer
>>>>> that I have to tell them the back channel needs re-assignment
>>>>> or something like that.
>>>>>
>>>>> I am pretty certain they are broken, in that the client needs to
>>>>> retry all outstanding RPCs.
>>>>>
>>>>> For others, here's the long-winded version of this that I just
>>>>> put on the phabricator review:
>>>>> In the server side kernel RPC, the socket (struct socket *) is in a
>>>>> structure called SVCXPRT (normally pointed to by "xprt").
>>>>> These structures are ref counted and the soclose() is done
>>>>> when the ref. cnt goes to zero. My understanding is that
>>>>> "struct socket *" is free'd by soclose() so this cannot be done
>>>>> before the xprt ref. cnt goes to zero.
>>>>>
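The ref-count rule just described (soclose() only when the xprt ref. cnt reaches zero) can be sketched as a toy userspace model. The names here (xprt_create, xprt_ref, xprt_release, socket_open) are invented for illustration and are not the actual FreeBSD krpc API:

```c
#include <stdlib.h>

/* Toy model of the SVCXPRT life cycle described above: the socket may
 * only be closed (soclose()) when the last reference goes away. */
struct xprt {
    int refs;
    int socket_open;              /* stands in for the struct socket * */
};

static struct xprt *xprt_create(void)
{
    struct xprt *x = malloc(sizeof(*x));
    x->refs = 1;                  /* the krpc's own reference */
    x->socket_open = 1;
    return x;
}

static void xprt_ref(struct xprt *x)
{
    x->refs++;                    /* e.g. the back channel taking a ref */
}

static void xprt_release(struct xprt *x)
{
    if (--x->refs == 0)
        x->socket_open = 0;       /* models soclose(); free(x) would follow */
}
```

In this model, a back channel that takes a reference and never releases it keeps socket_open set indefinitely, which is the delayed-soclose() situation the thread is describing.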
>>>>> For NFSv4.1/4.2 there is something called a back channel
>>>>> which means that an "xprt" is used for server->client RPCs,
>>>>> although the TCP connection is established by the client
>>>>> to the server.
>>>>> --> This back channel holds a ref cnt on "xprt" until the
>>>>>     client re-assigns it to a different TCP connection
>>>>>     via an operation called BindConnectionToSession,
>>>>>     and the Linux client is not doing this soon enough,
>>>>>     it appears.
>>>>>
>>>>> So, the soclose() is delayed, which is why I think the
>>>>> TCP connection gets stuck in CLOSE_WAIT and that is
>>>>> why I've added the soshutdown(..SHUT_WR) calls,
>>>>> which can happen before the client gets around to
>>>>> re-assigning the back channel.
>>>>>
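The soshutdown(..SHUT_WR) half-close discussed above has a direct userspace counterpart in shutdown(2): the write side is closed (on TCP this sends a FIN) while the receive side stays open. A minimal sketch, using an AF_UNIX socketpair as a stand-in for the NFS TCP connection, shows the effect being relied on, namely that the peer then reads an orderly EOF instead of blocking forever:

```c
#include <sys/socket.h>
#include <unistd.h>

/* Half-close one end of a socketpair and observe EOF on the peer:
 * the userspace analogue of the kernel soshutdown(..SHUT_WR) call. */
static int demo_shut_wr(void)
{
    int sv[2];
    char buf[8];
    ssize_t n;

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return -1;
    shutdown(sv[0], SHUT_WR);          /* half-close: no more writes from sv[0] */
    n = read(sv[1], buf, sizeof(buf)); /* peer sees orderly EOF, not a hang */
    close(sv[0]);
    close(sv[1]);
    return (int)n;                     /* 0 means EOF was delivered */
}
```

The receive side of sv[0] remains usable after the shutdown, which is what lets the server keep draining data while signalling the client that no more replies are coming.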
>>>>> Thanks for your help with this Michael, rick
>>>>>
>>>>> Best regards
>>>>> Michael
>>>>>>
>>>>>> rick
>>>>>> ps: I can capture packets while doing this, if anyone has a use
>>>>>> for them.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> ________________________________________
>>>>>> From: owner-freebsd-net@freebsd.org <owner-freebsd-net@freebsd.org> on behalf of Youssef GHORBAL <youssef.ghorbal@pasteur.fr>
>>>>>> Sent: Saturday, March 27, 2021 6:57 PM
>>>>>> To: Jason Breitman
>>>>>> Cc: Rick Macklem; freebsd-net@freebsd.org
>>>>>> Subject: Re: NFS Mount Hangs
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 27 Mar 2021, at 13:20, Jason Breitman <jbreitman@tildenparkcapital.com<mailto:jbreitman@tildenparkcapital.com>> wrote:
>>>>>>
>>>>>> The issue happened again so we can say that disabling TSO and LRO on the NIC did not resolve this issue.
>>>>>> # ifconfig lagg0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso
>>>>>> # ifconfig lagg0
>>>>>> lagg0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
>>>>>>   options=8100b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWFILTER>
>>>>>>
>>>>>> We can also say that the sysctl settings did not resolve this issue.
>>>>>>
>>>>>> # sysctl net.inet.tcp.fast_finwait2_recycle=1
>>>>>> net.inet.tcp.fast_finwait2_recycle: 0 -> 1
>>>>>>
>>>>>> # sysctl net.inet.tcp.finwait2_timeout=1000
>>>>>> net.inet.tcp.finwait2_timeout: 60000 -> 1000
>>>>>>
>>>>>> I don't think those will do anything in your case since the FIN_WAIT2 are on the client side and those sysctls are for BSD.
>>>>>> By the way it seems that Linux automatically recycles TCP sessions in FIN_WAIT2 after 60 seconds (sysctl net.ipv4.tcp_fin_timeout)
>>>>>>
>>>>>> tcp_fin_timeout (integer; default: 60; since Linux 2.2)
>>>>>>         This specifies how many seconds to wait for a final FIN
>>>>>>         packet before the socket is forcibly closed.  This is
>>>>>>         strictly a violation of the TCP specification, but
>>>>>>         required to prevent denial-of-service attacks.  In Linux
>>>>>>         2.2, the default value was 180.
>>>>>>
>>>>>> So I don't get why it's stuck in the FIN_WAIT2 state anyway.
>>>>>>
>>>>>> You really need to have a packet capture during the outage (client and server side) so you'll get the over-the-wire chat and can start speculating from there.
>>>>>> No need to capture the beginning of the outage for now. All you have to do is run a tcpdump for 10 minutes or so when you notice a client stuck.
>>>>>>
>>>>>> * I have not rebooted the NFS Server nor have I restarted nfsd, but do not believe that is required as these settings are at the TCP level and I would expect new sessions to use the updated settings.
>>>>>>
>>>>>> The issue occurred after 5 days following a reboot of the client machines.
>>>>>> I ran the capture information again to make use of the situation.
>>>>>>
>>>>>> #!/bin/sh
>>>>>>
>>>>>> while true
>>>>>> do
>>>>>> /bin/date >> /tmp/nfs-hang.log
>>>>>> /bin/ps axHl | grep nfsd | grep -v grep >> /tmp/nfs-hang.log
>>>>>> /usr/bin/procstat -kk 2947 >> /tmp/nfs-hang.log
>>>>>> /usr/bin/procstat -kk 2944 >> /tmp/nfs-hang.log
>>>>>> /bin/sleep 60
>>>>>> done
>>>>>>
>>>>>>
>>>>>> On the NFS Server
>>>>>> Active Internet connections (including servers)
>>>>>> Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
>>>>>> tcp4       0      0 NFS.Server.IP.X.2049      NFS.Client.IP.X.48286     CLOSE_WAIT
>>>>>>
>>>>>> On the NFS Client
>>>>>> tcp        0      0 NFS.Client.IP.X:48286      NFS.Server.IP.X:2049       FIN_WAIT2
>>>>>>
>>>>>>
>>>>>>
>>>>>> You had also asked for the output below.
>>>>>>
>>>>>> # nfsstat -E -s
>>>>>> BackChannelCt  BindConnToSes
>>>>>>       0            0
>>>>>>
>>>>>> # sysctl vfs.nfsd.request_space_throttle_count
>>>>>> vfs.nfsd.request_space_throttle_count: 0
>>>>>>
>>>>>> I see that you are testing a patch and I look forward to seeing the results.
>>>>>>
>>>>>>
>>>>>> Jason Breitman
>>>>>>
>>>>>>
>>>>>> On Mar 21, 2021, at 6:21 PM, Rick Macklem <rmacklem@uoguelph.ca<mailto:rmacklem@uoguelph.ca>> wrote:
>>>>>>
>>>>>>=0A=
>>>>>> Youssef GHORBAL <youssef.ghorbal@pasteur.fr<mailto:youssef.ghorbal@p=
asteur.fr>> wrote:=0A=
>>>>>>> Hi Jason,
>>>>>>>
>>>>>>>> On 17 Mar 2021, at 18:17, Jason Breitman <jbreitman@tildenparkcapital.com> wrote:
>>>>>>>>
>>>>>>>> Please review the details below and let me know if there is a setting that I should apply to my FreeBSD NFS Server or if there is a bug fix that I can apply to resolve my issue.
>>>>>>>> I shared this information with the linux-nfs mailing list and they believe the issue is on the server side.
>>>>>>>>
>>>>>>>> Issue
>>>>>>>> NFSv4 mounts periodically hang on the NFS Client.
>>>>>>>>
>>>>>>>> During this time, it is possible to manually mount from another NFS Server on the NFS Client having issues.
>>>>>>>> Also, other NFS Clients are successfully mounting from the NFS Server in question.
>>>>>>>> Rebooting the NFS Client appears to be the only solution.
>>>>>>> I had experienced a similar weird situation with periodically stuck Linux NFS clients mounting Isilon NFS servers (Isilon is FreeBSD based, but they seem to have their own nfsd).
>>>>>> Yes, my understanding is that Isilon uses a proprietary user-space nfsd and
>>>>>> not the kernel-based RPC and nfsd in FreeBSD.
>>>>>>
>>>>>>> We’ve had better luck and we did manage to get packet captures on both sides during the issue. The gist of it goes as follows:
>>>>>>>
>>>>>>> - Data flows correctly between SERVER and CLIENT.
>>>>>>> - At some point SERVER starts decreasing its TCP receive window until it reaches 0.
>>>>>>> - The client (eager to send data) can only ack data sent by SERVER.
>>>>>>> - When SERVER was done sending data, the client starts sending TCP window probes, hoping that the TCP window opens again so it can flush its buffers.
>>>>>>> - SERVER responds with a TCP zero window to those probes.
>>>>>> Having the window size drop to zero is not necessarily incorrect.
>>>>>> If the server is overloaded (has a backlog of NFS requests), it can stop doing
>>>>>> soreceive() on the socket (so the socket rcv buffer can fill up and the TCP window
>>>>>> closes). This results in "backpressure" to stop the NFS client from flooding the
>>>>>> NFS server with requests.
>>>>>> --> However, once the backlog is handled, the nfsd should start to soreceive()
>>>>>> again and this should cause the window to open back up.
>>>>>> --> Maybe this is broken in the socket/TCP code. I quickly got lost in
>>>>>> tcp_output() when it decides what to do about the rcvwin.
>>>>>>
>>>>>>> - After 6 minutes (the NFS server default idle timeout) SERVER gracefully closes the TCP connection, sending a FIN packet (and still a TCP window of 0).
>>>>>> This probably does not happen in Jason's case, since the 6-minute timeout
>>>>>> is disabled when the TCP connection is assigned as a backchannel (most likely
>>>>>> the case for NFSv4.1).
>>>>>>
>>>>>>> - CLIENT ACKs that FIN.
>>>>>>> - SERVER goes into the FIN_WAIT_2 state.
>>>>>>> - CLIENT closes its half of the socket and goes into the LAST_ACK state.
>>>>>>> - A FIN is never sent by the client, since there is still data in its send queue and the receiver's TCP window is still 0. At this stage the client starts sending TCP window probes again and again, hoping that the server opens its TCP window so it can flush its buffers and terminate its side of the socket.
>>>>>>> - SERVER keeps responding with a TCP zero window to those probes.
>>>>>>> => The last two steps go on and on for hours/days, freezing the NFS mount bound to that TCP session.
>>>>>>>
>>>>>>> If we had a situation where CLIENT was responsible for closing the TCP window (and initiating the TCP FIN first) and the server wanted to send data, we’d end up in the same state as you, I think.
>>>>>>> We’ve never found the root cause of why the SERVER decided to close the TCP window and no longer accept data. The fix on the Isilon side was to recycle the FIN_WAIT_2 sockets more aggressively (net.inet.tcp.fast_finwait2_recycle=1 and net.inet.tcp.finwait2_timeout=5000). Once the socket is recycled, at the next occurrence of a CLIENT TCP window probe, SERVER sends an RST, triggering the teardown of the session on the client side, a new TCP handshake, etc., and traffic flows again (NFS starts responding).
>>>>>>>
>>>>>>> To avoid rebooting the client (and before the aggressive FIN_WAIT_2 recycling was implemented on the Isilon side) we added a check script on the client that detects LAST_ACK sockets and, through an iptables rule, forces a TCP RST. Something like: -A OUTPUT -p tcp -d $nfs_server_addr --sport $local_port -j REJECT --reject-with tcp-reset (the script removes this iptables rule as soon as the LAST_ACK socket disappears).
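A rough sketch of that client-side workaround follows, as a one-shot dry run rather than the original polling script. The server address and local port are placeholders, and the iptables commands are printed instead of executed so the sketch can be inspected without root:

```shell
#!/bin/sh
# Sketch of the LAST_ACK workaround described above: add an iptables
# rule that answers the stuck flow with a TCP RST, then remove it once
# the LAST_ACK socket disappears.  Values are placeholders; commands
# are echoed rather than run.
nfs_server_addr=192.0.2.10   # placeholder NFS server address
local_port=48286             # placeholder stuck local port
RULE="-p tcp -d $nfs_server_addr --sport $local_port -j REJECT --reject-with tcp-reset"
echo "iptables -A OUTPUT $RULE"
# ...poll until the LAST_ACK socket is gone, then remove the rule:
echo "iptables -D OUTPUT $RULE"
```

The RST it forces tears down the client side of the session, so a fresh TCP connection (and a working mount) can be established.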
>>>>>>>
>>>>>>> The bottom line would be to have a packet capture during the outage (client and/or server side); it will show you at least the shape of the TCP exchange when NFS is stuck.
>>>>>> Interesting story and good work w.r.t. the sleuthing, Youssef, thanks.
>>>>>>
>>>>>> I looked at Jason's log and it shows everything is OK w.r.t. the nfsd threads.
>>>>>> (They're just waiting for RPC requests.)
>>>>>> However, I do now think I know why the soclose() does not happen.
>>>>>> When the TCP connection is assigned as a backchannel, that takes a reference
>>>>>> count on the structure. This refcnt won't be released until the connection is
>>>>>> replaced by a BindConnectionToSession operation from the client. But that won't
>>>>>> happen until the client creates a new TCP connection.
>>>>>> --> No refcnt release --> no refcnt of 0 --> no soclose().
>>>>>>
>>>>>> I've created the attached patch (completely different from the previous one)
>>>>>> that adds soshutdown(SHUT_WR) calls in the three places where the TCP
>>>>>> connection is going away. This seems to get it past CLOSE_WAIT without a
>>>>>> soclose().
>>>>>> --> I know you are not comfortable with patching your server, but I do think
>>>>>> this change will get the socket shutdown to complete.
>>>>>>
>>>>>> There are a couple more things you can check on the server...
>>>>>> # nfsstat -E -s
>>>>>> --> Look for the count under "BindConnToSes".
>>>>>> --> If non-zero, backchannels have been assigned.
>>>>>> # sysctl -a | fgrep request_space_throttle_count
>>>>>> --> If non-zero, the server has been overloaded at some point.
>>>>>>
>>>>>> I think the attached patch might work around the problem.
>>>>>> The code that should open up the receive window needs to be checked.
>>>>>> I am also looking at enabling the 6-minute timeout when a backchannel is
>>>>>> assigned.
>>>>>>
>>>>>> rick
>>>>>>
>>>>>> Youssef
>>>>>>
>>>>>> _______________________________________________
>>>>>> freebsd-net@freebsd.org mailing list
>>>>>> https://lists.freebsd.org/mailman/listinfo/freebsd-net
>>>>>> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
>>>>>> <xprtdied.patch>
>>>>>>
>>>>>> <nfs-hang.log.gz>
>>>>>>
>>>>>
>>>>
>>>
>>
>



Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?YQXPR0101MB09682EC465091A378DD679CEDD729>