From: Rick Macklem <rmacklem@uoguelph.ca>
To: tuexen@freebsd.org
Cc: "Scheffenegger, Richard"; Youssef GHORBAL; freebsd-net@freebsd.org
Subject: Re: NFS Mount Hangs
Date: Mon, 5 Apr 2021 23:24:04 +0000
List-Id: Networking and TCP/IP with FreeBSD

tuexen@freebsd.org wrote:
[stuff snipped]
>OK. What is the FreeBSD version you are using?
main, Dec. 23, 2020.

>
>It seems that the TCP connection on the FreeBSD side is still alive, but
>Linux has decided to start a new TCP connection using the old
>port numbers. So it sends a SYN. The response is a challenge ACK
>and Linux responds with a RST. This looks good so far. However,
>FreeBSD should accept the RST and kill the TCP connection. The
>next SYN from the Linux side would establish a new TCP connection.
>
>So I'm wondering why the RST is not accepted. I made the timestamp
>checking stricter but introduced a bug where RST segments without
>timestamps were ignored. This was fixed.
>
>Introduced in main on 2020/11/09:
>  https://svnweb.freebsd.org/changeset/base/367530
>Introduced in stable/12 on 2020/11/30:
>  https://svnweb.freebsd.org/changeset/base/36818
>Fix in main on 2021/01/13:
>  https://cgit.FreeBSD.org/src/commit/?id=cc3c34859eab1b317d0f38731355b53f7d978c97
>Fix in stable/12 on 2021/01/24:
>  https://cgit.FreeBSD.org/src/commit/?id=d05d908d6d3c85479c84c707f931148439ae826b
>
>Are you using a version which is affected by this bug?
I was. Now I've applied the patch.
Bad news.
It did not fix the problem.
It still gets into an endless "ignore RST" state and stays ESTABLISHED when
the Send-Q is empty.

If the Send-Q is non-empty when I partition, it recovers fine,
sometimes not even needing to see an RST.

rick
ps: If you think there might be other recent changes that matter,
    just say the word and I'll upgrade to the bits du jour.

Best regards
Michael
>
> If I wait long enough before healing the partition, it will
> go to FIN_WAIT_1, and then if I plug it back in, it does not
> do battle (at least not for long).
>
> Btw, I have one running now that seems stuck really good.
> It has been 20 minutes since I plugged the net cable back in.
> (Unfortunately, I didn't have tcpdump running until after
> I saw it was not progressing after healing.)
> --> There is one difference. There was a 6-minute timeout
>     enabled on the server krpc for "no activity", which is
>     now disabled like it is for NFSv4.1 in freebsd-current.
>     I had forgotten to re-disable it.
>     So, when it does battle, it might have been the 6-minute
>     timeout, which would then do the soshutdown(..SHUT_WR)
>     which kept it from getting "stuck" forever.
> --> This time I had to reboot the FreeBSD NFS server to
>     get the Linux client unstuck, so this one looked a lot
>     like what has been reported.
> The pcap for this one, started after the network was plugged
> back in and I noticed it was stuck for quite a while, is here:
>   fetch https://people.freebsd.org/~rmacklem/stuck.pcap
>
> In it, there is just a bunch of RSTs followed by SYNs sent
> from client->FreeBSD, and FreeBSD just keeps sending
> acks for the old segment back.
> --> It looks like FreeBSD did the "RST, ACK" after the
>     krpc did a soshutdown(..SHUT_WR) on the socket,
>     for the one you've been looking at.
> I'll test some more...
>
>> I would like to understand why the reestablishment of the connection
>> did not work...
> It is looking like it takes either a non-empty send-q or a
> soshutdown(..SHUT_WR) to get the FreeBSD socket
> out of ESTABLISHED, where it just ignores the RSTs and
> SYN packets.
>
> Thanks for looking at it, rick
>
> Best regards
> Michael
>>
>> Have fun with it, rick
>>
>> ________________________________________
>> From: tuexen@freebsd.org
>> Sent: Sunday, April 4, 2021 12:41 PM
>> To: Rick Macklem
>> Cc: Scheffenegger, Richard; Youssef GHORBAL; freebsd-net@freebsd.org
>> Subject: Re: NFS Mount Hangs
>>
>> CAUTION: This email originated from outside of the University of Guelph. Do not click links or open attachments unless you recognize the sender and know the content is safe. If in doubt, forward suspicious emails to IThelp@uoguelph.ca
>>
>>
>>> On 4. Apr 2021, at 17:27, Rick Macklem wrote:
>>>
>>> Well, I'm going to cheat and top post, since this is related info and
>>> not really part of the discussion...
>>>
>>> I've been testing network partitioning between a Linux client (5.2 kernel)
>>> and a FreeBSD-current NFS server.
I have not gotten a solid hang, but
>>> I have had the Linux client doing "battle" with the FreeBSD server for
>>> several minutes after un-partitioning the connection.
>>>
>>> The battle basically consists of the Linux client sending an RST, followed
>>> by a SYN.
>>> The FreeBSD server ignores the RST and just replies with the same old ack.
>>> --> This varies from "just a SYN" that succeeds to 100+ cycles of the above
>>>     over several minutes.
>>>
>>> I had thought that an RST was a "pretty heavy hammer", but FreeBSD seems
>>> pretty good at ignoring it.
>>>
>>> A full packet capture of one of these is in /home/rmacklem/linuxtofreenfs.pcap
>>> in case anyone wants to look at it.
>> On freefall? I would like to take a look at it...
>>
>> Best regards
>> Michael
>>>
>>> Here's a tcpdump snippet of the interesting part (see the *** comments):
>>> 19:10:09.305775 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [P.], seq 202585:202749, ack 212293, win 29128, options [nop,nop,TS val 2073636037 ecr 2671204825], length 164: NFS reply xid 613153685 reply ok 160 getattr NON 4 ids 0/33554432 sz 0
>>> 19:10:09.305850 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [.], ack 202749, win 501, options [nop,nop,TS val 2671204825 ecr 2073636037], length 0
>>> *** Network is now partitioned...
>>>
>>> 19:10:09.407840 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 212293:212525, ack 202749, win 501, options [nop,nop,TS val 2671204927 ecr 2073636037], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
>>> 19:10:09.615779 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 212293:212525, ack 202749, win 501, options [nop,nop,TS val 2671205135 ecr 2073636037], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
>>> 19:10:09.823780 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 212293:212525, ack 202749, win 501, options [nop,nop,TS val 2671205343 ecr 2073636037], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
>>> *** Lots of lines snipped.
>>>
>>> 19:13:41.295783 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>>> 19:13:42.319767 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>>> 19:13:46.351966 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>>> 19:13:47.375790 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>>> 19:13:48.399786 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>>> *** Network is now unpartitioned...
>>>
>>> 19:13:48.399990 ARP, Reply nfsv4-new3.home.rick is-at d4:be:d9:07:81:72 (oui Unknown), length 46
>>> 19:13:48.400002 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 416692300, win 64240, options [mss 1460,sackOK,TS val 2671421871 ecr 0,nop,wscale 7], length 0
>>> 19:13:48.400185 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 212293, win 29127, options [nop,nop,TS val 2073855137 ecr 2671204825], length 0
>>> 19:13:48.400273 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [R], seq 964161458, win 0, length 0
>>> 19:13:49.423833 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 416692300, win 64240, options [mss 1460,sackOK,TS val 2671424943 ecr 0,nop,wscale 7], length 0
>>> 19:13:49.424056 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 212293, win 29127, options [nop,nop,TS val 2073856161 ecr 2671204825], length 0
>>> *** This "battle" goes on for 223sec...
>>> I snipped out 13 cycles of this "Linux sends
an RST, followed by SYN" /
>>> "FreeBSD replies with same old ACK". In another test run I saw this
>>> cycle continue non-stop for several minutes. This time, the Linux
>>> client paused for a while (see ARPs below).
>>>
>>> 19:13:49.424101 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [R], seq 964161458, win 0, length 0
>>> 19:13:53.455867 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 416692300, win 64240, options [mss 1460,sackOK,TS val 2671428975 ecr 0,nop,wscale 7], length 0
>>> 19:13:53.455991 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 212293, win 29127, options [nop,nop,TS val 2073860193 ecr 2671204825], length 0
>>> *** Snipped a bunch of stuff out, mostly ARPs, plus one more RST.
>>>
>>> 19:16:57.775780 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
>>> 19:16:57.775937 ARP, Reply nfsv4-new3.home.rick is-at d4:be:d9:07:81:72 (oui Unknown), length 46
>>> 19:16:57.980240 ARP, Request who-has nfsv4-new3.home.rick tell 192.168.1.254, length 46
>>> 19:16:58.555663 ARP, Request who-has nfsv4-new3.home.rick tell 192.168.1.254, length 46
>>> 19:17:00.104701 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [F.], seq 202749, ack 212293, win 29128, options [nop,nop,TS val 2074046846 ecr 2671204825], length 0
>>> 19:17:15.664354 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [F.], seq 202749, ack 212293, win 29128, options [nop,nop,TS val 2074062406 ecr 2671204825], length 0
>>> 19:17:31.239246 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [R.], seq 202750, ack 212293, win 0, options [nop,nop,TS val 2074077981 ecr 2671204825], length 0
>>> *** FreeBSD finally acknowledges the RST 38sec after Linux sent the last
>>>     of 13 (100+ for another test run).
>>>
>>> 19:17:51.535979 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 4247692373, win 64240, options [mss 1460,sackOK,TS val 2671667055 ecr 0,nop,wscale 7], length 0
>>> 19:17:51.536130 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [S.], seq 661237469, ack 4247692374, win 65535, options [mss 1460,nop,wscale 6,sackOK,TS val 2074098278 ecr 2671667055], length 0
>>> *** Now back in business...
>>>
>>> 19:17:51.536218 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [.], ack 1, win 502, options [nop,nop,TS val 2671667055 ecr 2074098278], length 0
>>> 19:17:51.536295 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 1:233, ack 1, win 502, options [nop,nop,TS val 2671667056 ecr 2074098278], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
>>> 19:17:51.536346 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 233:505, ack 1, win 502, options [nop,nop,TS val 2671667056 ecr 2074098278], length 272: NFS request xid 697039765 132 getattr fh 0,1/53
>>> 19:17:51.536515 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 505, win 29128, options [nop,nop,TS val 2074098279 ecr 2671667056], length 0
>>> 19:17:51.536553 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 505:641, ack 1, win 502, options [nop,nop,TS val 2671667056 ecr 2074098279], length 136: NFS request xid 730594197 132 getattr fh 0,1/53
>>> 19:17:51.536562 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [P.], seq 1:49, ack 505, win 29128, options [nop,nop,TS val 2074098279 ecr 2671667056], length 48: NFS reply xid 697039765 reply ok 44 getattr ERROR: unk 10063
>>>
>>> This error 10063 after the partition heals is also "bad news".
It indicates the Session
>>> (which is supposed to maintain "exactly once" RPC semantics) is broken. I'll admit I
>>> suspect a Linux client bug, but will be investigating further.
>>>
>>> So, hopefully TCP conversant folk can confirm if the above is correct behaviour
>>> or if the RST should be ack'd sooner?
>>>
>>> I could also see this becoming a "forever" TCP battle for other versions of Linux client.
>>>
>>> rick
>>>
>>> ________________________________________
>>> From: Scheffenegger, Richard
>>> Sent: Sunday, April 4, 2021 7:50 AM
>>> To: Rick Macklem; tuexen@freebsd.org
>>> Cc: Youssef GHORBAL; freebsd-net@freebsd.org
>>> Subject: Re: NFS Mount Hangs
>>>
>>> For what it's worth, SUSE found two bugs in the Linux nf_conntrack (stateful firewall) and the pfifo_fast scheduler, which could conspire to make TCP sessions hang forever.
>>>
>>> One is a missed update when the client is not using the noresvport mount option, which makes the firewall think RSTs are illegal (and drop them);
>>>
>>> The pfifo_fast scheduler can run into an issue if only a single packet should be forwarded (note that this is not the default scheduler, but often recommended for performance, as it runs lockless and at lower CPU cost than pfq (the default)). If no other/additional packet pushes out that last packet of a flow, it can become stuck forever...
>>>
>>> I can try getting the relevant bug info next week...
>>>
>>> ________________________________
>>> From: owner-freebsd-net@freebsd.org on behalf of Rick Macklem
>>> Sent: Friday, April 2, 2021 11:31:01 PM
>>> To: tuexen@freebsd.org
>>> Cc: Youssef GHORBAL; freebsd-net@freebsd.org
>>> Subject: Re: NFS Mount Hangs
>>>
>>> tuexen@freebsd.org wrote:
>>>>> On 2. Apr 2021, at 02:07, Rick Macklem wrote:
>>>>>
>>>>> I hope you don't mind a top post...
>>>>> I've been testing network partitioning between the only Linux client
>>>>> I have (5.2 kernel) and a FreeBSD server with the xprtdied.patch
>>>>> (does soshutdown(..SHUT_WR) when it knows the socket is broken)
>>>>> applied to it.
>>>>>
>>>>> I'm not enough of a TCP guy to know if this is useful, but here's what
>>>>> I see...
>>>>>
>>>>> While partitioned:
>>>>> On the FreeBSD server end, the socket either goes to CLOSED during
>>>>> the network partition or stays ESTABLISHED.
>>>> If it goes to CLOSED you called shutdown(, SHUT_WR) and the peer also
>>>> sent a FIN, but you never called close() on the socket.
>>>> If the socket stays in ESTABLISHED, there is no communication ongoing,
>>>> I guess, and therefore the server does not even detect that the peer
>>>> is not reachable.
>>>>> On the Linux end, the socket seems to remain ESTABLISHED for a
>>>>> little while, and then disappears.
>>>> So how does Linux detect that the peer is not reachable?
>>> Well, here's what I see in a packet capture on the Linux client once
>>> I partition it (just unplug the net cable):
>>> - lots of retransmits of the same segment (with ACK) for 54sec
>>> - then only ARP queries
>>>
>>> Once I plug the net cable back in:
>>> - ARP works
>>> - one more retransmit of the same segment
>>> - receives RST from FreeBSD
>>> ** So, is this now a "new" TCP connection, despite
>>>    using the same port#?
>>> --> It matters for NFS,
since "new connection"
>>> implies "must retry all outstanding RPCs".
>>> - sends SYN
>>> - receives SYN, ACK from FreeBSD
>>> --> connection starts working again
>>>     Always uses same port#.
>>>
>>> On the FreeBSD server end:
>>> - receives the last retransmit of the segment (with ACK)
>>> - sends RST
>>> - receives SYN
>>> - sends SYN, ACK
>>>
>>> I thought that there was no RST in the capture I looked at
>>> yesterday, so I'm not sure if FreeBSD always sends an RST,
>>> but the Linux client behaviour was the same. (Sent a SYN, etc.)
>>> The socket disappears from the Linux "netstat -a" and I
>>> suspect that happens after about 54sec, but I am not sure
>>> about the timing.
>>>
>>>>>
>>>>> After unpartitioning:
>>>>> On the FreeBSD server end, you get another socket showing up at
>>>>> the same port#:
>>>>> Active Internet connections (including servers)
>>>>> Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
>>>>> tcp4       0      0 nfsv4-new3.nfsd        nfsv4-linux.678        ESTABLISHED
>>>>> tcp4       0      0 nfsv4-new3.nfsd        nfsv4-linux.678        CLOSED
>>>>>
>>>>> The Linux client shows the same connection ESTABLISHED.
>>> But it disappears from "netstat -a" for a while during the partitioning.
>>>
>>>>> (The mount sometimes reports an error. I haven't looked at packet
>>>>> traces to see if it retries RPCs or why the errors occur.)
>>> I have now done so, as above.
>>>
>>>>> --> However I never get hangs.
>>>>> Sometimes it goes to SYN_SENT for a while and the FreeBSD server
>>>>> shows FIN_WAIT_1, but then both ends go to ESTABLISHED and the
>>>>> mount starts working again.
>>>>>
>>>>> The most obvious thing is that the Linux client always keeps using
>>>>> the same port#. (The FreeBSD client will use a different port# when
>>>>> it does a TCP reconnect after no response from the NFS server for
>>>>> a little while.)
>>>>>
>>>>> What do those TCP conversant think?
>>>> I guess you are never calling close() on the socket for which
>>>> the connection state is CLOSED.
>>> Ok, that makes sense. For this case the Linux client has not done a
>>> BindConnectionToSession to re-assign the back channel.
>>> I'll have to bug them about this. However, I'll bet they'll answer
>>> that I have to tell them the back channel needs re-assignment
>>> or something like that.
>>>
>>> I am pretty certain they are broken, in that the client needs to
>>> retry all outstanding RPCs.
>>>
>>> For others, here's the long-winded version of this that I just
>>> put on the phabricator review:
>>> In the server-side kernel RPC, the socket (struct socket *) is in a
>>> structure called SVCXPRT (normally pointed to by "xprt").
>>> These structures are ref-counted and the soclose() is done
>>> when the ref. cnt goes to zero. My understanding is that
>>> "struct socket *" is free'd by soclose(), so this cannot be done
>>> before the xprt ref.
cnt goes to zero.=0A= >>>=0A= >>> For NFSv4.1/4.2 there is something called a back channel=0A= >>> which means that a "xprt" is used for server->client RPCs,=0A= >>> although the TCP connection is established by the client=0A= >>> to the server.=0A= >>> --> This back channel holds a ref cnt on "xprt" until the=0A= >>>=0A= >>> client re-assigns it to a different TCP connection=0A= >>> via an operation called BindConnectionToSession=0A= >>> and the Linux client is not doing this soon enough,=0A= >>> it appears.=0A= >>>=0A= >>> So, the soclose() is delayed, which is why I think the=0A= >>> TCP connection gets stuck in CLOSE_WAIT and that is=0A= >>> why I've added the soshutdown(..SHUT_WR) calls,=0A= >>> which can happen before the client gets around to=0A= >>> re-assigning the back channel.=0A= >>>=0A= >>> Thanks for your help with this Michael, rick=0A= >>>=0A= >>> Best regards=0A= >>> Michael=0A= >>>>=0A= >>>> rick=0A= >>>> ps: I can capture packets while doing this, if anyone has a use=0A= >>>> for them.=0A= >>>>=0A= >>>>=0A= >>>>=0A= >>>>=0A= >>>>=0A= >>>>=0A= >>>> ________________________________________=0A= >>>> From: owner-freebsd-net@freebsd.org on= behalf of Youssef GHORBAL =0A= >>>> Sent: Saturday, March 27, 2021 6:57 PM=0A= >>>> To: Jason Breitman=0A= >>>> Cc: Rick Macklem; freebsd-net@freebsd.org=0A= >>>> Subject: Re: NFS Mount Hangs=0A= >>>>=0A= >>>> CAUTION: This email originated from outside of the University of Guelp= h. Do not click links or open attachments unless you recognize the sender a= nd know the content is safe. 
If in doubt, forward suspicious emails to IThe= lp@uoguelph.ca=0A= >>>>=0A= >>>>=0A= >>>>=0A= >>>>=0A= >>>> On 27 Mar 2021, at 13:20, Jason Breitman > wrote:=0A= >>>>=0A= >>>> The issue happened again so we can say that disabling TSO and LRO on t= he NIC did not resolve this issue.=0A= >>>> # ifconfig lagg0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwts= o=0A= >>>> # ifconfig lagg0=0A= >>>> lagg0: flags=3D8943 me= tric 0 mtu 1500=0A= >>>> options=3D8100b8=0A= >>>>=0A= >>>> We can also say that the sysctl settings did not resolve this issue.= =0A= >>>>=0A= >>>> # sysctl net.inet.tcp.fast_finwait2_recycle=3D1=0A= >>>> net.inet.tcp.fast_finwait2_recycle: 0 -> 1=0A= >>>>=0A= >>>> # sysctl net.inet.tcp.finwait2_timeout=3D1000=0A= >>>> net.inet.tcp.finwait2_timeout: 60000 -> 1000=0A= >>>>=0A= >>>> I don=92t think those will do anything in your case since the FIN_WAIT= 2 are on the client side and those sysctls are for BSD.=0A= >>>> By the way it seems that Linux recycles automatically TCP sessions in = FIN_WAIT2 after 60 seconds (sysctl net.ipv4.tcp_fin_timeout)=0A= >>>>=0A= >>>> tcp_fin_timeout (integer; default: 60; since Linux 2.2)=0A= >>>> This specifies how many seconds to wait for a final FIN=0A= >>>> packet before the socket is forcibly closed. This is=0A= >>>> strictly a violation of the TCP specification, but=0A= >>>> required to prevent denial-of-service attacks. In Linux=0A= >>>> 2.2, the default value was 180.=0A= >>>>=0A= >>>> So I don=92t get why it stucks in the FIN_WAIT2 state anyway.=0A= >>>>=0A= >>>> You really need to have a packet capture during the outage (client and= server side) so you=92ll get over the wire chat and start speculating from= there.=0A= >>>> No need to capture the beginning of the outage for now. 
All you have to do is run a tcpdump for 10 minutes or so when you notice a client stuck.
>>>>
>>>> * I have not rebooted the NFS Server nor have I restarted nfsd, but I do not believe that is required as these settings are at the TCP level and I would expect new sessions to use the updated settings.
>>>>
>>>> The issue occurred after 5 days following a reboot of the client machines.
>>>> I ran the capture information again to make use of the situation.
>>>>
>>>> #!/bin/sh
>>>>
>>>> while true
>>>> do
>>>>     /bin/date >> /tmp/nfs-hang.log
>>>>     /bin/ps axHl | grep nfsd | grep -v grep >> /tmp/nfs-hang.log
>>>>     /usr/bin/procstat -kk 2947 >> /tmp/nfs-hang.log
>>>>     /usr/bin/procstat -kk 2944 >> /tmp/nfs-hang.log
>>>>     /bin/sleep 60
>>>> done
>>>>
>>>> On the NFS Server
>>>> Active Internet connections (including servers)
>>>> Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
>>>> tcp4       0      0 NFS.Server.IP.X.2049   NFS.Client.IP.X.48286  CLOSE_WAIT
>>>>
>>>> On the NFS Client
>>>> tcp        0      0 NFS.Client.IP.X:48286  NFS.Server.IP.X:2049   FIN_WAIT2
>>>>
>>>>
>>>> You had also asked for the output below.
>>>>
>>>> # nfsstat -E -s
>>>> BackChannelCt  BindConnToSes
>>>>             0              0
>>>>
>>>> # sysctl vfs.nfsd.request_space_throttle_count
>>>> vfs.nfsd.request_space_throttle_count: 0
>>>>
>>>> I see that you are testing a patch and I look forward to seeing the results.
>>>>
>>>> Jason Breitman
>>>>
>>>>
>>>> On Mar 21, 2021, at 6:21 PM, Rick Macklem wrote:
>>>>
>>>> Youssef GHORBAL wrote:
>>>>> Hi Jason,
>>>>>
>>>>>> On 17 Mar 2021, at 18:17, Jason Breitman wrote:
>>>>>>
>>>>>> Please review the details below and let me know if there is a setting that I should apply to my FreeBSD NFS Server or if there is a bug fix that I can apply to resolve my
issue.
>>>>>> I shared this information with the linux-nfs mailing list and they believe the issue is on the server side.
>>>>>>
>>>>>> Issue
>>>>>> NFSv4 mounts periodically hang on the NFS Client.
>>>>>>
>>>>>> During this time, it is possible to manually mount from another NFS Server on the NFS Client having issues.
>>>>>> Also, other NFS Clients are successfully mounting from the NFS Server in question.
>>>>>> Rebooting the NFS Client appears to be the only solution.
>>>>>
>>>>> I had experienced a similar weird situation with periodically stuck Linux NFS clients mounting Isilon NFS servers (Isilon is FreeBSD based but they seem to have their own nfsd).
>>>> Yes, my understanding is that Isilon uses a proprietary user-space nfsd and
>>>> not the kernel-based RPC and nfsd in FreeBSD.
>>>>
>>>>> We've had better luck and we did manage to get packet captures on both sides during the issue. The gist of it goes as follows:
>>>>>
>>>>> - Data flows correctly between SERVER and the CLIENT.
>>>>> - At some point SERVER starts decreasing its TCP Receive Window until it reaches 0.
>>>>> - The client (eager to send data) can only ack data sent by SERVER.
>>>>> - When SERVER was done sending data, the client starts sending TCP Window Probes hoping that the TCP Window opens again so it can flush its buffers.
>>>>> - SERVER responds with a TCP Zero Window to those probes.
>>>> Having the window size drop to zero is not necessarily incorrect.
>>>> If the server is overloaded (has a backlog of NFS requests), it can stop doing
>>>> soreceive() on the socket (so the socket rcv buffer can fill up and the TCP window
>>>> closes).
This results in "backpressure" to stop the NFS client from flooding the
>>>> NFS server with requests.
>>>> --> However, once the backlog is handled, the nfsd should start to soreceive()
>>>>     again and this should cause the window to open back up.
>>>> --> Maybe this is broken in the socket/TCP code. I quickly got lost in
>>>>     tcp_output() when it decides what to do about the rcvwin.
>>>>
>>>>> - After 6 minutes (the NFS server default Idle timeout) SERVER gracefully closes the TCP connection, sending a FIN Packet (and still a TCP Window of 0).
>>>> This probably does not happen in Jason's case, since the 6-minute timeout
>>>> is disabled when the TCP connection is assigned as a backchannel (most likely
>>>> the case for NFSv4.1).
>>>>
>>>>> - CLIENT ACKs that FIN.
>>>>> - SERVER goes into the FIN_WAIT_2 state.
>>>>> - CLIENT closes its half of the socket and goes into the LAST_ACK state.
>>>>> - FIN is never sent by the client since there is still data in its SendQ and the receiver's TCP Window is still 0. At this stage the client starts sending TCP Window Probes again and again, hoping that the server opens its TCP Window so it can flush its buffers and terminate its side of the socket.
>>>>> - SERVER keeps responding with a TCP Zero Window to those probes.
>>>>> => The last two steps go on and on for hours/days, freezing the NFS mount bound to that TCP session.
>>>>>
>>>>> If we had a situation where CLIENT was responsible for closing the TCP Window (and initiating the TCP FIN first) and the server wanted to send data, we'd end up in the same state as you, I think.
>>>>>
>>>>> We never found the root cause of why the SERVER decided to close the TCP Window and no longer accept data; the fix on the Isilon side was to recycle the FIN_WAIT_2 sockets more aggressively (net.inet.tcp.fast_finwait2_recycle=1 & net.inet.tcp.finwait2_timeout=5000).
Once the socket is recycled, at the next occurrence of a CLIENT TCP Window probe SERVER sends a RST, triggering the teardown of the session on the client side, a new TCP handshake, etc., and traffic flows again (NFS starts responding).
>>>>>
>>>>> To avoid rebooting the client (and before the aggressive FIN_WAIT_2 recycling was implemented on the Isilon side) we added a check script on the client that detects LAST_ACK sockets and, through an iptables rule, enforces a TCP RST. Something like: -A OUTPUT -p tcp -d $nfs_server_addr --sport $local_port -j REJECT --reject-with tcp-reset (the script removes this iptables rule as soon as the LAST_ACK socket disappears).
>>>>>
>>>>> The bottom line would be to have a packet capture during the outage (client and/or server side); it will show you at least the shape of the TCP exchange when NFS is stuck.
>>>> Interesting story and good work w.r.t. sleuthing, Youssef, thanks.
>>>>
>>>> I looked at Jason's log and it shows everything is ok w.r.t. the nfsd threads.
>>>> (They're just waiting for RPC requests.)
>>>> However, I do now think I know why the soclose() does not happen.
>>>> When the TCP connection is assigned as a backchannel, that takes a reference
>>>> cnt on the structure. This refcnt won't be released until the connection is
>>>> replaced by a BindConnectionToSession operation from the client. But that won't
>>>> happen until the client creates a new TCP connection.
>>>> --> No refcnt release --> no refcnt of 0 --> no soclose().
>>>>
>>>> I've created the attached patch (completely different from the previous one)
>>>> that adds soshutdown(SHUT_WR) calls in the three places where the TCP
>>>> connection is going away.
This seems to get it past CLOSE_WAIT without a
>>>> soclose().
>>>> --> I know you are not comfortable with patching your server, but I do think
>>>>     this change will get the socket shutdown to complete.
>>>>
>>>> There are a couple more things you can check on the server...
>>>> # nfsstat -E -s
>>>> --> Look for the count under "BindConnToSes".
>>>> --> If non-zero, backchannels have been assigned.
>>>> # sysctl -a | fgrep request_space_throttle_count
>>>> --> If non-zero, the server has been overloaded at some point.
>>>>
>>>> I think the attached patch might work around the problem.
>>>> The code that should open up the receive window needs to be checked.
>>>> I am also looking at enabling the 6-minute timeout when a backchannel is
>>>> assigned.
>>>>
>>>> rick
>>>>
>>>> Youssef
>>>>
>>>> _______________________________________________
>>>> freebsd-net@freebsd.org mailing list
>>>> https://lists.freebsd.org/mailman/listinfo/freebsd-net
>>>> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
>>
>

_______________________________________________
freebsd-net@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
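The client-side LAST_ACK workaround Youssef describes earlier in the thread ("detect LAST_ACK sockets and force a TCP RST via iptables") could be sketched roughly as below. All names here are hypothetical placeholders, the socket line is a canned sample rather than live `ss` output, and the iptables command is printed instead of executed, so this is only an illustration of the parsing and rule shape, not a drop-in script.

```shell
#!/bin/sh
# Hypothetical server address -- substitute the real NFS server IP.
nfs_server_addr=NFS.Server.IP.X

# Print (do not run) the iptables rule that would RST further sends
# from the stuck local port, as in Youssef's description.
emit_reset_rule() {
    # $1 = local (source) port of the stuck socket
    echo iptables -A OUTPUT -p tcp -d "$nfs_server_addr" \
        --sport "$1" -j REJECT --reject-with tcp-reset
}

# Sample line in the shape "ss -tan" prints for a stuck socket;
# a real watchdog would grep live output for LAST-ACK instead.
sample='LAST-ACK 0 1 192.0.2.10:48286 198.51.100.7:2049'
set -- $sample
local_port=${4##*:}     # field 4 is local addr:port; keep the port
emit_reset_rule "$local_port"
```

A real watchdog would loop like Jason's logging script, add the rule only while a LAST-ACK socket exists, and delete it once the socket disappears.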