From: Rick Macklem <rmacklem@uoguelph.ca>
To: tuexen@freebsd.org
Cc: Youssef GHORBAL, Jason Breitman, freebsd-net@freebsd.org
Subject: Re: NFS Mount Hangs
Date: Fri, 2 Apr 2021 21:31:01 +0000
tuexen@freebsd.org wrote:
>> On 2. Apr 2021, at 02:07, Rick Macklem wrote:
>>
>> I hope you don't mind a top post...
>> I've been testing network partitioning between the only Linux client
>> I have (5.2 kernel) and a FreeBSD server with the xprtdied.patch
>> (does soshutdown(..SHUT_WR) when it knows the socket is broken)
>> applied to it.
>>
>> I'm not enough of a TCP guy to know if this is useful, but here's what
>> I see...
>>
>> While partitioned:
>> On the FreeBSD server end, the socket either goes to CLOSED during
>> the network partition or stays ESTABLISHED.
>If it goes to CLOSED you called shutdown(, SHUT_WR) and the peer also
>sent a FIN, but you never called close() on the socket.
>If the socket stays in ESTABLISHED, there is no communication ongoing,
>I guess, and therefore the server does not even detect that the peer
>is not reachable.
>> On the Linux end, the socket seems to remain ESTABLISHED for a
>> little while, and then disappears.
>So how does Linux detect the peer is not reachable?
Well, here's what I see in a packet capture on the Linux client once
I partition it (just unplug the net cable):
- lots of retransmits of the same segment (with ACK) for 54sec
- then only ARP queries

Once I plug the net cable back in:
- ARP works
- one more retransmit of the same segment
- receives RST from FreeBSD
  ** So, is this now a "new" TCP connection, despite
     using the same port#?
     --> It matters for NFS, since "new connection"
         implies "must retry all outstanding RPCs".
- sends SYN
- receives SYN, ACK from FreeBSD
  --> connection starts working again
Always uses the same port#.

On the FreeBSD server end:
- receives the last retransmit of the segment (with ACK)
- sends RST
- receives SYN
- sends SYN, ACK

I thought that there was no RST in the capture I looked at
yesterday, so I'm not sure if FreeBSD always sends an RST,
but the Linux client behaviour was the same (sent a SYN, etc.).
The socket disappears from the Linux "netstat -a" and I
suspect that happens after about 54sec, but I am not sure
about the timing.

>>
>> After unpartitioning:
>> On the FreeBSD server end, you get another socket showing up at
>> the same port#
>> Active Internet connections (including servers)
>> Proto Recv-Q Send-Q Local Address    Foreign Address  (state)
>> tcp4       0      0 nfsv4-new3.nfsd  nfsv4-linux.678  ESTABLISHED
>> tcp4       0      0 nfsv4-new3.nfsd  nfsv4-linux.678  CLOSED
>>
>> The Linux client shows the same connection ESTABLISHED.
But it disappears from "netstat -a" for a while during the partitioning.

>> (The mount sometimes reports an error. I haven't looked at packet
>> traces to see if it retries RPCs or why the errors occur.)
I have now done so, as above.

>> --> However I never get hangs.
>> Sometimes it goes to SYN_SENT for a while and the FreeBSD server
>> shows FIN_WAIT_1, but then both ends go to ESTABLISHED and the
>> mount starts working again.
>>
>> The most obvious thing is that the Linux client always keeps using
>> the same port#. (The FreeBSD client will use a different port# when
>> it does a TCP reconnect after no response from the NFS server for
>> a little while.)
>>
>> What do those TCP conversants think?
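To make the port# behaviour concrete, here is a hypothetical userland
sketch (the function name and parameters are made up, not code from this
thread) of how a client can keep the same local port# across reconnects.
Re-binding the same port before connect() reuses the old 4-tuple, which
is why the server's stale connection state has to be discarded first
(the RST seen in the capture does exactly that):

/*
 * Illustrative only: reconnect from a fixed local port#, as the Linux
 * client behaviour above suggests.
 */
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

int
reconnect_same_port(const struct sockaddr_in *server, uint16_t local_port)
{
	struct sockaddr_in local;
	int s, on = 1;

	if ((s = socket(AF_INET, SOCK_STREAM, 0)) < 0)
		return (-1);
	/* Allow re-binding while the old connection may still linger. */
	setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

	memset(&local, 0, sizeof(local));
	local.sin_family = AF_INET;
	local.sin_addr.s_addr = htonl(INADDR_ANY);
	local.sin_port = htons(local_port);	/* fixed port, not ephemeral */
	if (bind(s, (const struct sockaddr *)&local, sizeof(local)) < 0 ||
	    connect(s, (const struct sockaddr *)server, sizeof(*server)) < 0) {
		close(s);
		return (-1);
	}
	return (s);
}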
>I guess you are never calling close() on the socket for which
>the connection state is CLOSED.
Ok, that makes sense. For this case the Linux client has not done a
BindConnectionToSession to re-assign the back channel.
I'll have to bug them about this. However, I'll bet they'll answer
that I have to tell them the back channel needs re-assignment
or something like that.

I am pretty certain they are broken, in that the client needs to
retry all outstanding RPCs.

For others, here's the long-winded version of this that I just
put on the phabricator review:
In the server-side kernel RPC, the socket (struct socket *) is in a
structure called SVCXPRT (normally pointed to by "xprt").
These structures are ref counted and the soclose() is done
when the ref cnt goes to zero. My understanding is that
"struct socket *" is free'd by soclose(), so this cannot be done
before the xprt ref cnt goes to zero.

For NFSv4.1/4.2 there is something called a back channel,
which means that an "xprt" is used for server->client RPCs,
although the TCP connection is established by the client
to the server.
--> This back channel holds a ref cnt on "xprt" until the
    client re-assigns it to a different TCP connection
    via an operation called BindConnectionToSession,
    and the Linux client is not doing this soon enough,
    it appears.

So, the soclose() is delayed, which is why I think the
TCP connection gets stuck in CLOSE_WAIT and that is
why I've added the soshutdown(..SHUT_WR) calls,
which can happen before the client gets around to
re-assigning the back channel.

Thanks for your help with this Michael, rick

>Best regards
>Michael
>
> rick
> ps: I can capture packets while doing this, if anyone has a use
> for them.
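For readers unfamiliar with the server-side kernel RPC, here is an
illustrative kernel-context sketch of the pattern described above.
"my_xprt" and both functions are made-up names, not the actual sys/rpc
code; the point is the ordering: soclose() frees the struct socket, so
it may only run when the last reference (e.g. the back channel's) is
released, while soshutdown(..SHUT_WR) is safe to issue as soon as the
connection is known broken:

#include <sys/param.h>
#include <sys/socket.h>
#include <sys/socketvar.h>
#include <machine/atomic.h>

struct my_xprt {
	struct socket	*xp_socket;
	volatile u_int	 xp_refs;	/* held by nfsd, back channel, ... */
};

/* Called when the connection is known dead; does not need refs == 0. */
static void
my_xprt_halfclose(struct my_xprt *xprt)
{
	soshutdown(xprt->xp_socket, SHUT_WR);	/* send the FIN now */
}

/* Drop one reference; only the last dropper may soclose(). */
static void
my_xprt_release(struct my_xprt *xprt)
{
	if (atomic_fetchadd_int(&xprt->xp_refs, -1) == 1)
		soclose(xprt->xp_socket);	/* frees the struct socket */
}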
> ________________________________________
> From: owner-freebsd-net@freebsd.org on behalf of Youssef GHORBAL
> Sent: Saturday, March 27, 2021 6:57 PM
> To: Jason Breitman
> Cc: Rick Macklem; freebsd-net@freebsd.org
> Subject: Re: NFS Mount Hangs
>
> On 27 Mar 2021, at 13:20, Jason Breitman wrote:
>
> The issue happened again so we can say that disabling TSO and LRO on the NIC did not resolve this issue.
> # ifconfig lagg0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso
> # ifconfig lagg0
> lagg0: flags=8943 metric 0 mtu 1500
> options=8100b8
>
> We can also say that the sysctl settings did not resolve this issue.
>
> # sysctl net.inet.tcp.fast_finwait2_recycle=1
> net.inet.tcp.fast_finwait2_recycle: 0 -> 1
>
> # sysctl net.inet.tcp.finwait2_timeout=1000
> net.inet.tcp.finwait2_timeout: 60000 -> 1000
>
> I don't think those will do anything in your case since the FIN_WAIT2 sockets are on the client side and those sysctls are for BSD.
> By the way, it seems that Linux automatically recycles TCP sessions in FIN_WAIT2 after 60 seconds (sysctl net.ipv4.tcp_fin_timeout):
>
> tcp_fin_timeout (integer; default: 60; since Linux 2.2)
>     This specifies how many seconds to wait for a final FIN
>     packet before the socket is forcibly closed. This is
>     strictly a violation of the TCP specification, but
>     required to prevent denial-of-service attacks. In Linux
>     2.2, the default value was 180.
>
> So I don't get why it gets stuck in the FIN_WAIT2 state anyway.
>
> You really need to have a packet capture during the outage (client and server side) so you'll get the over-the-wire chat and can start speculating from there.
> No need to capture the beginning of the outage for now. All you have to do is run a tcpdump for 10 minutes or so when you notice a client stuck.
>
> * I have not rebooted the NFS Server nor have I restarted nfsd, but do not believe that is required as these settings are at the TCP level and I would expect new sessions to use the updated settings.
>
> The issue occurred after 5 days following a reboot of the client machines.
> I ran the capture information again to make use of the situation.
>
> #!/bin/sh
>
> while true
> do
> /bin/date >> /tmp/nfs-hang.log
> /bin/ps axHl | grep nfsd | grep -v grep >> /tmp/nfs-hang.log
> /usr/bin/procstat -kk 2947 >> /tmp/nfs-hang.log
> /usr/bin/procstat -kk 2944 >> /tmp/nfs-hang.log
> /bin/sleep 60
> done
>
> On the NFS Server
> Active Internet connections (including servers)
> Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
> tcp4       0      0 NFS.Server.IP.X.2049   NFS.Client.IP.X.48286  CLOSE_WAIT
>
> On the NFS Client
> tcp        0      0 NFS.Client.IP.X:48286  NFS.Server.IP.X:2049   FIN_WAIT2
>
> You had also asked for the output below.
>
> # nfsstat -E -s
> BackChannelCt  BindConnToSes
>             0              0
>
> # sysctl vfs.nfsd.request_space_throttle_count
> vfs.nfsd.request_space_throttle_count: 0
>
> I see that you are testing a patch and I look forward to seeing the results.
>
> Jason Breitman
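Since FIN_WAIT2 keeps coming up: a small userland sketch (assuming any
connected TCP socket fd; not code from this thread) of the half-close
semantics involved. After shutdown(fd, SHUT_WR) our FIN goes out and,
once the peer ACKs it, the socket sits in FIN_WAIT_2 until the peer's
own FIN arrives; in the hang described above, it never does, which is
what the finwait2 recycle sysctls are meant to reap:

#include <sys/socket.h>
#include <stdio.h>
#include <unistd.h>

void
half_close_and_drain(int fd)
{
	char buf[4096];
	ssize_t n;

	/* Send our FIN but keep the read side open. */
	if (shutdown(fd, SHUT_WR) < 0) {
		perror("shutdown");
		return;
	}
	/* Drain whatever the peer still sends; EOF means its FIN arrived. */
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		;
	close(fd);	/* only now release the descriptor */
}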
> On Mar 21, 2021, at 6:21 PM, Rick Macklem wrote:
>
> Youssef GHORBAL wrote:
>> Hi Jason,
>>
>>> On 17 Mar 2021, at 18:17, Jason Breitman wrote:
>>>
>>> Please review the details below and let me know if there is a setting that I should apply to my FreeBSD NFS Server or if there is a bug fix that I can apply to resolve my issue.
>>> I shared this information with the linux-nfs mailing list and they believe the issue is on the server side.
>>>
>>> Issue
>>> NFSv4 mounts periodically hang on the NFS Client.
>>>
>>> During this time, it is possible to manually mount from another NFS Server on the NFS Client having issues.
>>> Also, other NFS Clients are successfully mounting from the NFS Server in question.
>>> Rebooting the NFS Client appears to be the only solution.
>>
>> I had experienced a similar weird situation with periodically stuck Linux NFS clients
>> mounting Isilon NFS servers (Isilon is FreeBSD based but they seem to have their
>> own nfsd).
> Yes, my understanding is that Isilon uses a proprietary user-space nfsd and
> not the kernel-based RPC and nfsd in FreeBSD.
>
>> We've had better luck and we did manage to have packet captures on both sides
>> during the issue. The gist of it goes as follows:
>>
>> - Data flows correctly between SERVER and the CLIENT
>> - At some point SERVER starts decreasing its TCP Receive Window until it reaches 0
>> - The client (eager to send data) can only ack data sent by SERVER.
>> - When SERVER was done sending data, the client starts sending TCP Window
>>   Probes hoping that the TCP Window opens again so it can flush its buffers.
>> - SERVER responds with a TCP Zero Window to those probes.
> Having the window size drop to zero is not necessarily incorrect.
> If the server is overloaded (has a backlog of NFS requests), it can stop doing
> soreceive() on the socket (so the socket rcv buffer can fill up and the TCP window
> closes). This results in "backpressure" to stop the NFS client from flooding the
> NFS server with requests.
> --> However, once the backlog is handled, the nfsd should start to soreceive()
>     again and this should cause the window to open back up.
> --> Maybe this is broken in the socket/TCP code. I quickly got lost in
>     tcp_output() when it decides what to do about the rcvwin.
>
>> - After 6 minutes (the NFS server default Idle timeout) SERVER gracefully closes the
>>   TCP connection, sending a FIN Packet (and still a TCP Window of 0)
> This probably does not happen for Jason's case, since the 6-minute timeout
> is disabled when the TCP connection is assigned as a backchannel (most likely
> the case for NFSv4.1).
>
>> - CLIENT ACKs that FIN.
>> - SERVER goes into FIN_WAIT_2 state
>> - CLIENT closes its half of the socket and goes into LAST_ACK state.
>> - FIN is never sent by the client since there is still data in its SendQ and the receiver's
>>   TCP Window is still 0. At this stage the client starts sending TCP Window Probes again
>>   and again, hoping that the server opens its TCP Window so it can flush its buffers
>>   and terminate its side of the socket.
>> - SERVER keeps responding with a TCP Zero Window to those probes.
>> => The last two steps go on and on for hours/days, freezing the NFS mount bound
>>    to that TCP session.
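Rick's backpressure explanation can be reproduced in a few lines of
userland C. The self-contained demo below (illustrative only, not from
this thread) forks a "server" that accepts and never reads, so its
receive buffer fills, the advertised window drops to zero, and the
sender's write() eventually blocks; a packet capture run alongside it
shows zero-window ACKs and window probes, just like the Isilon trace:

#include <sys/socket.h>
#include <netinet/in.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	struct sockaddr_in sin;
	socklen_t len = sizeof(sin);
	char chunk[65536];
	int lfd, cfd;

	signal(SIGPIPE, SIG_IGN);	/* let write() fail with an errno */

	lfd = socket(AF_INET, SOCK_STREAM, 0);
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	sin.sin_port = 0;			/* any free port */
	bind(lfd, (struct sockaddr *)&sin, sizeof(sin));
	listen(lfd, 1);
	getsockname(lfd, (struct sockaddr *)&sin, &len);

	if (fork() == 0) {			/* the "overloaded server" */
		(void)accept(lfd, NULL, NULL);	/* accept, then never read */
		sleep(60);			/* rcv buffer fills, win -> 0 */
		_exit(0);
	}

	cfd = socket(AF_INET, SOCK_STREAM, 0);
	connect(cfd, (struct sockaddr *)&sin, sizeof(sin));
	memset(chunk, 0, sizeof(chunk));
	for (;;) {
		/* Once both socket buffers fill, this write() blocks; a
		 * capture now shows zero-window ACKs and window probes. */
		if (write(cfd, chunk, sizeof(chunk)) < 0) {
			perror("write");
			break;
		}
		printf("wrote 64k\n");
	}
	return (0);
}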
>>
>> If we had a situation where CLIENT was responsible for closing the TCP Window (and
>> initiating the TCP FIN first) and the server wanting to send data, we'll end up in the
>> same state as you, I think.
>>
>> We've never had the root cause of why the SERVER decided to close the TCP
>> Window and no longer accept data; the fix on the Isilon part was to recycle the
>> FIN_WAIT_2 sockets more aggressively (net.inet.tcp.fast_finwait2_recycle=1 &
>> net.inet.tcp.finwait2_timeout=5000). Once the socket is recycled, at the next
>> occurrence of a CLIENT TCP Window probe, SERVER sends a RST, triggering the
>> teardown of the session on the client side, a new TCP handshake, etc., and traffic
>> flows again (NFS starts responding).
>>
>> To avoid rebooting the client (and before the aggressive FIN_WAIT_2 recycling was
>> implemented on the Isilon side) we added a check script on the client that detects
>> LAST_ACK sockets on the client and through an iptables rule enforces a TCP RST,
>> something like: -A OUTPUT -p tcp -d $nfs_server_addr --sport $local_port -j REJECT
>> --reject-with tcp-reset (the script removes this iptables rule as soon as the
>> LAST_ACK disappears).
>>
>> The bottom line would be to have a packet capture during the outage (client and/or
>> server side); it will show you at least the shape of the TCP exchange when NFS is
>> stuck.
> Interesting story and good work w.r.t. sleuthing, Youssef, thanks.
>
> I looked at Jason's log and it shows everything is OK w.r.t. the nfsd threads.
> (They're just waiting for RPC requests.)
> However, I do now think I know why the soclose() does not happen.
> When the TCP connection is assigned as a backchannel, that takes a reference
> cnt on the structure. This refcnt won't be released until the connection is
> replaced by a BindConnectionToSession operation from the client. But that won't
> happen until the client creates a new TCP connection.
> --> No refcnt release --> no refcnt of 0 --> no soclose().
>
> I've created the attached patch (completely different from the previous one)
> that adds soshutdown(SHUT_WR) calls in the three places where the TCP
> connection is going away. This seems to get it past CLOSE_WAIT without a
> soclose().
> --> I know you are not comfortable with patching your server, but I do think
>     this change will get the socket shutdown to complete.
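As an aside, Youssef's iptables tcp-reset workaround has a userland
analogue: an "abortive" close. This hypothetical sketch (the NFS client
itself would have to do it, which is the hard part) sets SO_LINGER to
zero seconds so that close() discards unsent data and emits a RST
instead of a FIN, and the socket never lingers in LAST_ACK or
FIN_WAIT_2:

#include <sys/socket.h>
#include <unistd.h>

int
abortive_close(int fd)
{
	struct linger lg = { .l_onoff = 1, .l_linger = 0 };

	if (setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg)) < 0)
		return (-1);
	return (close(fd));	/* sends RST, skips the FIN handshake */
}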
> There are a couple more things you can check on the server...
> # nfsstat -E -s
> --> Look for the count under "BindConnToSes".
> --> If non-zero, backchannels have been assigned.
> # sysctl -a | fgrep request_space_throttle_count
> --> If non-zero, the server has been overloaded at some point.
>
> I think the attached patch might work around the problem.
> The code that should open up the receive window needs to be checked.
> I am also looking at enabling the 6-minute timeout when a backchannel is
> assigned.
>
> rick
>
> Youssef
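Finally, for anyone scripting the checks above: instead of polling
netstat in a loop like the capture script, a process that owns the
socket can query its TCP state directly. A small sketch, assuming
FreeBSD's TCP_INFO socket option and a connected TCP socket fd:

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <netinet/tcp_fsm.h>
#include <stdio.h>

void
print_tcp_state(int fd)
{
	struct tcp_info ti;
	socklen_t len = sizeof(ti);

	/* Reports the same state netstat shows, e.g. CLOSE_WAIT. */
	if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &ti, &len) == 0)
		printf("tcp state %u%s\n", (unsigned)ti.tcpi_state,
		    ti.tcpi_state == TCPS_CLOSE_WAIT ? " (CLOSE_WAIT)" : "");
}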