From: Rick Macklem <rmacklem@uoguelph.ca>
To: Youssef GHORBAL, Jason Breitman
Cc: freebsd-net@freebsd.org, tuexen@FreeBSD.org
Subject: Re: NFS Mount Hangs
Date: Fri, 2 Apr 2021 00:07:48 +0000
List-Id: Networking and TCP/IP with FreeBSD <freebsd-net@freebsd.org>
I hope you don't mind a top post...
I've been testing network partitioning between the only Linux client
I have (5.2 kernel) and a FreeBSD server with the xprtdied.patch
(does soshutdown(..SHUT_WR) when it knows the socket is broken)
applied to it.
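(The message doesn't say how the partition was created. For anyone wanting to reproduce the test, a temporary firewall rule is one common way on a FreeBSD box; a hedged sketch follows, where the client address and rule number are placeholders, not values from this thread. The commands are printed rather than executed so they can be reviewed before being run as root, and a symmetric rule for inbound traffic may also be wanted.)

```shell
#!/bin/sh
# Hypothetical sketch: simulate a network partition toward one NFS
# client with a temporary ipfw rule, then heal it by deleting the rule.
client_addr="192.0.2.45"    # assumed Linux client address (placeholder)
rule_num=100                # arbitrary rule number (placeholder)

partition_cmd="ipfw add $rule_num deny ip from any to $client_addr"
heal_cmd="ipfw delete $rule_num"

echo "$partition_cmd"
echo "$heal_cmd"
```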
I'm not enough of a TCP guy to know if this is useful, but here's what
I see...

While partitioned:
On the FreeBSD server end, the socket either goes to CLOSED during
the network partition or stays ESTABLISHED.
On the Linux end, the socket seems to remain ESTABLISHED for a
little while, and then disappears.

After unpartitioning:
On the FreeBSD server end, you get another socket showing up at
the same port#
Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address      Foreign Address    (state)
tcp4       0      0  nfsv4-new3.nfsd    nfsv4-linux.678    ESTABLISHED
tcp4       0      0  nfsv4-new3.nfsd    nfsv4-linux.678    CLOSED

The Linux client shows the same connection ESTABLISHED.
(The mount sometimes reports an error. I haven't looked at packet
traces to see if it retries RPCs or why the errors occur.)
--> However I never get hangs.
Sometimes it goes to SYN_SENT for a while and the FreeBSD server
shows FIN_WAIT_1, but then both ends go to ESTABLISHED and the
mount starts working again.

The most obvious thing is that the Linux client always keeps using
the same port#. (The FreeBSD client will use a different port# when
it does a TCP reconnect after no response from the NFS server for
a little while.)

What do the TCP-conversant among you think?

rick
ps: I can capture packets while doing this, if anyone has a use
for them.

________________________________________
From: owner-freebsd-net@freebsd.org on behalf of Youssef GHORBAL
Sent: Saturday, March 27, 2021 6:57 PM
To: Jason Breitman
Cc: Rick Macklem; freebsd-net@freebsd.org
Subject: Re: NFS Mount Hangs

On 27 Mar 2021, at 13:20, Jason Breitman wrote:

The issue happened again, so we can say that disabling TSO and LRO on
the NIC did not resolve this issue.
# ifconfig lagg0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso
# ifconfig lagg0
lagg0: flags=8943 metric 0 mtu 1500
        options=8100b8

We can also say that the sysctl settings did not resolve this issue.

# sysctl net.inet.tcp.fast_finwait2_recycle=1
net.inet.tcp.fast_finwait2_recycle: 0 -> 1

# sysctl net.inet.tcp.finwait2_timeout=1000
net.inet.tcp.finwait2_timeout: 60000 -> 1000

I don't think those will do anything in your case since the FIN_WAIT2
sockets are on the client side and those sysctls are for FreeBSD.
By the way, it seems that Linux automatically recycles TCP sessions in
FIN_WAIT2 after 60 seconds (sysctl net.ipv4.tcp_fin_timeout):

tcp_fin_timeout (integer; default: 60; since Linux 2.2)
    This specifies how many seconds to wait for a final FIN
    packet before the socket is forcibly closed. This is
    strictly a violation of the TCP specification, but
    required to prevent denial-of-service attacks. In Linux
    2.2, the default value was 180.

So I don't get why it gets stuck in the FIN_WAIT2 state anyway.

You really need to have a packet capture during the outage (client and
server side) so you'll get the over-the-wire conversation and can start
speculating from there.
No need to capture the beginning of the outage for now.
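(Such a capture could look something like the sketch below; the interface name and server address are placeholders, not values from this thread. The command is printed rather than executed so it can be checked before being run as root.)

```shell
#!/bin/sh
# Hypothetical sketch of the suggested capture: record ~10 minutes of
# NFS traffic (TCP port 2049) between client and server while a client
# is stuck, with full packets (-s 0) written to a pcap file.
iface="em0"                 # assumed capture interface (placeholder)
server="192.0.2.10"         # assumed NFS server address (placeholder)

cmd="timeout 600 tcpdump -i $iface -s 0 -w /tmp/nfs-hang.pcap host $server and tcp port 2049"
echo "$cmd"
```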
All you have to do is run a tcpdump for 10 minutes or so when you
notice a client stuck.

* I have not rebooted the NFS Server nor have I restarted nfsd, but I do
not believe that is required, as these settings are at the TCP level and
I would expect new sessions to use the updated settings.

The issue occurred 5 days after a reboot of the client machines.
I ran the capture information again to make use of the situation.

#!/bin/sh

while true
do
    /bin/date >> /tmp/nfs-hang.log
    /bin/ps axHl | grep nfsd | grep -v grep >> /tmp/nfs-hang.log
    /usr/bin/procstat -kk 2947 >> /tmp/nfs-hang.log
    /usr/bin/procstat -kk 2944 >> /tmp/nfs-hang.log
    /bin/sleep 60
done

On the NFS Server:
Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address         Foreign Address        (state)
tcp4       0      0  NFS.Server.IP.X.2049  NFS.Client.IP.X.48286  CLOSE_WAIT

On the NFS Client:
tcp        0      0  NFS.Client.IP.X:48286 NFS.Server.IP.X:2049   FIN_WAIT2

You had also asked for the output below.

# nfsstat -E -s
BackChannelCt  BindConnToSes
            0              0

# sysctl vfs.nfsd.request_space_throttle_count
vfs.nfsd.request_space_throttle_count: 0

I see that you are testing a patch and I look forward to seeing the
results.

Jason Breitman

On Mar 21, 2021, at 6:21 PM, Rick Macklem wrote:

Youssef GHORBAL wrote:
>Hi Jason,
>
>> On 17 Mar 2021, at 18:17, Jason Breitman wrote:
>>
>> Please review the details below and let me know if there is a setting
>> that I should apply to my FreeBSD NFS Server or if there is a bug fix
>> that I can apply to resolve my issue.
>> I shared this information with the linux-nfs mailing list and they
>> believe the issue is on the server side.
>>
>> Issue
>> NFSv4 mounts periodically hang on the NFS Client.
>>
>> During this time, it is
possible to manually mount from another NFS Server
>> on the NFS Client having issues.
>> Also, other NFS Clients are successfully mounting from the NFS Server
>> in question.
>> Rebooting the NFS Client appears to be the only solution.
>
>I had experienced a similar weird situation with periodically stuck
>Linux NFS clients mounting Isilon NFS servers (Isilon is FreeBSD based
>but they seem to have their own nfsd).
Yes, my understanding is that Isilon uses a proprietary user space nfsd
and not the kernel based RPC and nfsd in FreeBSD.

>We've had better luck and we did manage to have packet captures on both
>sides during the issue. The gist of it goes as follows:
>
>- Data flows correctly between SERVER and the CLIENT.
>- At some point SERVER starts decreasing its TCP Receive Window until
>it reaches 0.
>- The client (eager to send data) can only ack data sent by SERVER.
>- When SERVER was done sending data, the client starts sending TCP
>Window Probes hoping that the TCP Window opens again so it can flush
>its buffers.
>- SERVER responds with a TCP Zero Window to those probes.
Having the window size drop to zero is not necessarily incorrect.
If the server is overloaded (has a backlog of NFS requests), it can stop
doing soreceive() on the socket (so the socket rcv buffer can fill up
and the TCP window closes). This results in "backpressure" to stop the
NFS client from flooding the NFS server with requests.
--> However, once the backlog is handled, the nfsd should start to
soreceive() again and this should cause the window to open back up.
--> Maybe this is broken in the socket/TCP code. I quickly got lost in
I quickly got lost in=0A= tcp_output() when it decides what to do about the rcvwin.=0A= =0A= >- After 6 minutes (the NFS server default Idle timeout) SERVER racefully c= loses the >TCP connection sending a FIN Packet (and still a TCP Window 0)= =0A= This probably does not happen for Jason's case, since the 6minute timeout= =0A= is disabled when the TCP connection is assigned as a backchannel (most like= ly=0A= the case for NFSv4.1).=0A= =0A= >- CLIENT ACK that FIN.=0A= >- SERVER goes in FIN_WAIT_2 state=0A= >- CLIENT closes its half part part of the socket and goes in LAST_ACK stat= e.=0A= >- FIN is never sent by the client since there still data in its SendQ and = receiver TCP >Window is still 0. At this stage the client starts sending TC= P Window Probes again >and again hoping that the server opens its TCP Windo= w so it can flush it's buffers >and terminate its side of the socket.=0A= >- SERVER keeps responding with a TCP Zero Window to those probes.=0A= >=3D> The last two steps goes on and on for hours/days freezing the NFS mou= nt bound >to that TCP session.=0A= >=0A= >If we had a situation where CLIENT was responsible for closing the TCP Win= dow (and >initiating the TCP FIN first) and server wanting to send data we= =92ll end up in the same >state as you I think.=0A= >=0A= >We=92ve never had the root cause of why the SERVER decided to close the TC= P >Window and no more acccept data, the fix on the Isilon part was to recyc= le more >aggressively the FIN_WAIT_2 sockets (net.inet.tcp.fast_finwait2_re= cycle=3D1 & >net.inet.tcp.finwait2_timeout=3D5000). 
>Once the socket is recycled, at the next occurrence of a CLIENT TCP
>Window probe, SERVER sends a RST, triggering the teardown of the
>session on the client side, a new TCP handshake, etc., and traffic
>flows again (NFS starts responding).
>
>To avoid rebooting the client (and before the aggressive FIN_WAIT_2
>recycling was implemented on the Isilon side) we've added a check
>script on the client that detects LAST_ACK sockets on the client and,
>through an iptables rule, enforces a TCP RST. Something like:
>-A OUTPUT -p tcp -d $nfs_server_addr --sport $local_port -j REJECT
>--reject-with tcp-reset
>(the script removes this iptables rule as soon as the LAST_ACK
>disappears).
>
>The bottom line would be to have a packet capture during the outage
>(client and/or server side); it will show you at least the shape of
>the TCP exchange when NFS is stuck.
Interesting story and good work w.r.t. sleuthing, Youssef, thanks.

I looked at Jason's log and it shows everything is ok w.r.t. the nfsd
threads. (They're just waiting for RPC requests.)
However, I do now think I know why the soclose() does not happen.
When the TCP connection is assigned as a backchannel, that takes a
reference count on the structure. This refcnt won't be released until
the connection is replaced by a BindConnectionToSession operation from
the client. But that won't happen until the client creates a new TCP
connection.
--> No refcnt release --> no refcnt of 0 --> no soclose().

I've created the attached patch (completely different from the previous
one) that adds soshutdown(SHUT_WR) calls in the three places where the
TCP connection is going away.
This seems to get it past CLOSE_WAIT without a soclose().
--> I know you are not comfortable with patching your server, but I do
think this change will get the socket shutdown to complete.

There are a couple more things you can check on the server...
# nfsstat -E -s
--> Look for the count under "BindConnToSes".
--> If non-zero, backchannels have been assigned.
# sysctl -a | fgrep request_space_throttle_count
--> If non-zero, the server has been overloaded at some point.

I think the attached patch might work around the problem.
The code that should open up the receive window needs to be checked.
I am also looking at enabling the 6 minute timeout when a backchannel
is assigned.

rick

Youssef

_______________________________________________
freebsd-net@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"