From owner-freebsd-net@freebsd.org Sun Apr  4 15:27:24 2021
From: Rick Macklem <rmacklem@uoguelph.ca>
To: "Scheffenegger, Richard", "tuexen@freebsd.org"
CC: Youssef GHORBAL, "freebsd-net@freebsd.org"
Subject: Re: NFS Mount Hangs
Date: Sun, 4 Apr 2021 15:27:15 +0000
List-Id: Networking and TCP/IP with FreeBSD <freebsd-net@freebsd.org>
Well, I'm going to cheat and top post, since this is related info. and
not really part of the discussion...

I've been testing network partitioning between a Linux client (5.2 kernel)
and a FreeBSD-current NFS server. I have not gotten a solid hang, but
I have had the Linux client doing "battle" with the FreeBSD server for
several minutes after un-partitioning the connection.

The battle basically consists of the Linux client sending an RST, followed
by a SYN. The FreeBSD server ignores the RST and just replies with the same old ack.
--> This varies from "just a SYN" that succeeds to 100+ cycles of the above
    over several minutes.

I had thought that an RST was a "pretty heavy hammer", but FreeBSD seems
pretty good at ignoring it.

A full packet capture of one of these is in /home/rmacklem/linuxtofreenfs.pcap
in case anyone wants to look at it.

Here's a tcpdump snippet of the interesting part (see the *** comments):
19:10:09.305775 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [P.], seq 202585:202749, ack 212293, win 29128, options [nop,nop,TS val 2073636037 ecr 2671204825], length 164: NFS reply xid 613153685 reply ok 160 getattr NON 4 ids 0/33554432 sz 0
19:10:09.305850 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [.], ack 202749, win 501, options [nop,nop,TS val 2671204825 ecr 2073636037], length 0
*** Network is now partitioned...
19:10:09.407840 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 212293:212525, ack 202749, win 501, options [nop,nop,TS val 2671204927 ecr 2073636037], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
19:10:09.615779 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 212293:212525, ack 202749, win 501, options [nop,nop,TS val 2671205135 ecr 2073636037], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
19:10:09.823780 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 212293:212525, ack 202749, win 501, options [nop,nop,TS val 2671205343 ecr 2073636037], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
*** Lots of lines snipped.
19:13:41.295783 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
19:13:42.319767 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
19:13:46.351966 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
19:13:47.375790 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
19:13:48.399786 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
*** Network is now unpartitioned...
19:13:48.399990 ARP, Reply nfsv4-new3.home.rick is-at d4:be:d9:07:81:72 (oui Unknown), length 46
19:13:48.400002 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 416692300, win 64240, options [mss 1460,sackOK,TS val 2671421871 ecr 0,nop,wscale 7], length 0
19:13:48.400185 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 212293, win 29127, options [nop,nop,TS val 2073855137 ecr 2671204825], length 0
19:13:48.400273 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [R], seq 964161458, win 0, length 0
19:13:49.423833 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 416692300, win 64240, options [mss 1460,sackOK,TS val 2671424943 ecr 0,nop,wscale 7], length 0
19:13:49.424056 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 212293, win 29127, options [nop,nop,TS val 2073856161 ecr 2671204825], length 0
*** This "battle" goes on for 223sec... I snipped out 13 cycles of this
    "Linux sends an RST, followed by SYN" / "FreeBSD replies with same old ACK"
    exchange. In another test run I saw this cycle continue non-stop for
    several minutes. This time, the Linux client paused for a while (see ARPs below).
19:13:49.424101 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [R], seq 964161458, win 0, length 0
19:13:53.455867 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 416692300, win 64240, options [mss 1460,sackOK,TS val 2671428975 ecr 0,nop,wscale 7], length 0
19:13:53.455991 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 212293, win 29127, options [nop,nop,TS val 2073860193 ecr 2671204825], length 0
*** Snipped a bunch of stuff out, mostly ARPs, plus one more RST.
19:16:57.775780 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
19:16:57.775937 ARP, Reply nfsv4-new3.home.rick is-at d4:be:d9:07:81:72 (oui Unknown), length 46
19:16:57.980240 ARP, Request who-has nfsv4-new3.home.rick tell 192.168.1.254, length 46
19:16:58.555663 ARP, Request who-has nfsv4-new3.home.rick tell 192.168.1.254, length 46
19:17:00.104701 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [F.], seq 202749, ack 212293, win 29128, options [nop,nop,TS val 2074046846 ecr 2671204825], length 0
19:17:15.664354 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [F.], seq 202749, ack 212293, win 29128, options [nop,nop,TS val 2074062406 ecr 2671204825], length 0
19:17:31.239246 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [R.], seq 202750, ack 212293, win 0, options [nop,nop,TS val 2074077981 ecr 2671204825], length 0
*** FreeBSD finally acknowledges the RST 38sec after Linux sent the last
    of 13 (100+ for another test run).
19:17:51.535979 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 4247692373, win 64240, options [mss 1460,sackOK,TS val 2671667055 ecr 0,nop,wscale 7], length 0
19:17:51.536130 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [S.], seq 661237469, ack 4247692374, win 65535, options [mss 1460,nop,wscale 6,sackOK,TS val 2074098278 ecr 2671667055], length 0
*** Now back in business...
19:17:51.536218 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [.], ack 1, win 502, options [nop,nop,TS val 2671667055 ecr 2074098278], length 0
19:17:51.536295 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 1:233, ack 1, win 502, options [nop,nop,TS val 2671667056 ecr 2074098278], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
19:17:51.536346 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 233:505, ack 1, win 502, options [nop,nop,TS val 2671667056 ecr 2074098278], length 272: NFS request xid 697039765 132 getattr fh 0,1/53
19:17:51.536515 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 505, win 29128, options [nop,nop,TS val 2074098279 ecr 2671667056], length 0
19:17:51.536553 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 505:641, ack 1, win 502, options [nop,nop,TS val 2671667056 ecr 2074098279], length 136: NFS request xid 730594197 132 getattr fh 0,1/53
19:17:51.536562 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [P.], seq 1:49, ack 505, win 29128, options [nop,nop,TS val 2074098279 ecr 2671667056], length 48: NFS reply xid 697039765 reply ok 44 getattr ERROR: unk 10063

This error 10063 after the partition heals is also "bad news". It indicates
the Session (which is supposed to maintain "exactly once" RPC semantics) is
broken. I'll admit I suspect a Linux client bug, but will be investigating further.

So, hopefully TCP conversant folk can confirm if the above is correct behaviour
or if the RST should be ack'd sooner?
I could also see this becoming a "forever" TCP battle for other versions of
Linux client.
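If anyone wants to generate an equivalent capture on their own setup, something
like the following should do it (just a sketch: "em0" is a placeholder for
whatever interface faces the client, and the host name in the filter is
whatever your client is called):

# Capture everything to/from the client plus ARP, so the RST/SYN cycles
# and the ARP probes both show up in the trace:
tcpdump -i em0 -s 0 -w /home/rmacklem/linuxtofreenfs.pcap host nfsv4-linux.home.rick or arp
# Read it back later; tcpdump decodes the NFS RPCs on port 2049 as in the snippet above:
tcpdump -r /home/rmacklem/linuxtofreenfs.pcap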
rick

________________________________________
From: Scheffenegger, Richard
Sent: Sunday, April 4, 2021 7:50 AM
To: Rick Macklem; tuexen@freebsd.org
Cc: Youssef GHORBAL; freebsd-net@freebsd.org
Subject: Re: NFS Mount Hangs

For what it's worth, SUSE found two bugs in the Linux nf_conntrack (stateful
firewall) and the pfifo-fast scheduler, which could conspire to make TCP
sessions hang forever.

One is a missed update when the client is not using the noresvport mount
option, which makes the firewall think RSTs are illegal (and drop them);

The fast scheduler can run into an issue if only a single packet should be
forwarded (note that this is not the default scheduler, but often recommended
for perf, as it runs lockless and at lower cpu cost than pfq (default)). If
no other/additional packet pushes out that last packet of a flow, it can
become stuck forever...

I can try getting the relevant bug info next week...

________________________________
From: owner-freebsd-net@freebsd.org on behalf of Rick Macklem
Sent: Friday, April 2, 2021 11:31:01 PM
To: tuexen@freebsd.org
Cc: Youssef GHORBAL; freebsd-net@freebsd.org
Subject: Re: NFS Mount Hangs

tuexen@freebsd.org wrote:
>> On 2. Apr 2021, at 02:07, Rick Macklem wrote:
>>
>> I hope you don't mind a top post...
>> I've been testing network partitioning between the only Linux client
>> I have (5.2 kernel) and a FreeBSD server with the xprtdied.patch
>> (does soshutdown(..SHUT_WR) when it knows the socket is broken)
>> applied to it.
>>
>> I'm not enough of a TCP guy to know if this is useful, but here's what
>> I see...
>>
>> While partitioned:
>> On the FreeBSD server end, the socket either goes to CLOSED during
>> the network partition or stays ESTABLISHED.
> If it goes to CLOSED you called shutdown(, SHUT_WR) and the peer also
> sent a FIN, but you never called close() on the socket.
> If the socket stays in ESTABLISHED, there is no communication ongoing,
> I guess, and therefore the server does not even detect that the peer
> is not reachable.
>> On the Linux end, the socket seems to remain ESTABLISHED for a
>> little while, and then disappears.
> So how does Linux detect the peer is not reachable?
Well, here's what I see in a packet capture in the Linux client once
I partition it (just unplug the net cable):
- lots of retransmits of the same segment (with ACK) for 54sec
- then only ARP queries

Once I plug the net cable back in:
- ARP works
- one more retransmit of the same segment
- receives RST from FreeBSD
** So, is this now a "new" TCP connection, despite using the same port#?
--> It matters for NFS, since "new connection" implies "must retry all
    outstanding RPCs".
- sends SYN
- receives SYN, ACK from FreeBSD
--> connection starts working again
Always uses the same port#.

On the FreeBSD server end:
- receives the last retransmit of the segment (with ACK)
- sends RST
- receives SYN
- sends SYN, ACK

I thought that there was no RST in the capture I looked at yesterday, so
I'm not sure if FreeBSD always sends an RST, but the Linux client behaviour
was the same. (Sent a SYN, etc.)
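I have not pinned down exactly when the Linux end drops the socket. A simple
polling loop like the one below would nail down that timing (just a sketch:
the log file name and 5 second interval are arbitrary, and it is meant for
the Linux side, where netstat prints the port as ":2049"):

#!/bin/sh
# Log a timestamp plus any TCP connection involving port 2049 (NFS),
# so the moment the socket disappears or changes state gets recorded.
while true
do
/bin/date >> /tmp/nfs-conn-state.log
netstat -an | grep ':2049' >> /tmp/nfs-conn-state.log
/bin/sleep 5
done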
The socket disappears from the Linux "netstat -a" and I suspect that happens
after about 54sec, but I am not sure about the timing.
>>
>> After unpartitioning:
>> On the FreeBSD server end, you get another socket showing up at
>> the same port#
>> Active Internet connections (including servers)
>> Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
>> tcp4       0      0 nfsv4-new3.nfsd        nfsv4-linux.678        ESTABLISHED
>> tcp4       0      0 nfsv4-new3.nfsd        nfsv4-linux.678        CLOSED
>>
>> The Linux client shows the same connection ESTABLISHED.
But it disappears from "netstat -a" for a while during the partitioning.
>> (The mount sometimes reports an error. I haven't looked at packet
>> traces to see if it retries RPCs or why the errors occur.)
I have now done so, as above.
>> --> However I never get hangs.
>> Sometimes it goes to SYN_SENT for a while and the FreeBSD server
>> shows FIN_WAIT_1, but then both ends go to ESTABLISHED and the
>> mount starts working again.
>>
>> The most obvious thing is that the Linux client always keeps using
>> the same port#. (The FreeBSD client will use a different port# when
>> it does a TCP reconnect after no response from the NFS server for
>> a little while.)
>>
>> What do those TCP conversant think?
> I guess you are never calling close() on the socket for which
> the connection state is CLOSED.
Ok, that makes sense. For this case the Linux client has not done a
BindConnectionToSession to re-assign the back channel.
I'll have to bug them about this. However, I'll bet they'll answer
that I have to tell them the back channel needs re-assignment
or something like that. I am pretty certain they are broken, in
that the client needs to retry all outstanding RPCs.

For others, here's the long winded version of this that I just put
on the phabricator review:
  In the server side kernel RPC, the socket (struct socket *) is in a
  structure called SVCXPRT (normally pointed to by "xprt").
  These structures are ref counted and the soclose() is done
  when the ref. cnt goes to zero. My understanding is that
  "struct socket *" is free'd by soclose() so this cannot be done
  before the xprt ref. cnt goes to zero.

  For NFSv4.1/4.2 there is something called a back channel
  which means that a "xprt" is used for server->client RPCs,
  although the TCP connection is established by the client
  to the server.
  --> This back channel holds a ref cnt on "xprt" until the
      client re-assigns it to a different TCP connection
      via an operation called BindConnectionToSession, and
      the Linux client is not doing this soon enough, it appears.

  So, the soclose() is delayed, which is why I think the
  TCP connection gets stuck in CLOSE_WAIT and that is why I've
  added the soshutdown(..SHUT_WR) calls, which can happen
  before the client gets around to re-assigning the back channel.

Thanks for your help with this Michael, rick

Best regards
Michael

>
> rick
> ps: I can capture packets while doing this, if anyone has a use
>     for them.
>
> ________________________________________
> From: owner-freebsd-net@freebsd.org on behalf of Youssef GHORBAL
> Sent: Saturday, March 27, 2021 6:57 PM
> To: Jason Breitman
> Cc: Rick Macklem; freebsd-net@freebsd.org
> Subject: Re: NFS Mount Hangs
>
>
> On 27 Mar 2021, at 13:20, Jason Breitman wrote:
>
> The issue happened again so we can say that disabling TSO and LRO on the
> NIC did not resolve this issue.
> # ifconfig lagg0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso
> # ifconfig lagg0
> lagg0: flags=8943 metric 0 mtu 1500
> options=8100b8
>
> We can also say that the sysctl settings did not resolve this issue.
>
> # sysctl net.inet.tcp.fast_finwait2_recycle=1
> net.inet.tcp.fast_finwait2_recycle: 0 -> 1
>
> # sysctl net.inet.tcp.finwait2_timeout=1000
> net.inet.tcp.finwait2_timeout: 60000 -> 1000
>
> I don't think those will do anything in your case since the FIN_WAIT2 are
> on the client side and those sysctls are for BSD.
> By the way it seems that Linux automatically recycles TCP sessions in
> FIN_WAIT2 after 60 seconds (sysctl net.ipv4.tcp_fin_timeout)
>
> tcp_fin_timeout (integer; default: 60; since Linux 2.2)
>        This specifies how many seconds to wait for a final FIN
>        packet before the socket is forcibly closed. This is
>        strictly a violation of the TCP specification, but
>        required to prevent denial-of-service attacks. In Linux
>        2.2, the default value was 180.
>
> So I don't get why it gets stuck in the FIN_WAIT2 state anyway.
>
> You really need to have a packet capture during the outage (client and
> server side) so you'll get the over-the-wire chat and start speculating
> from there.
> No need to capture the beginning of the outage for now. All you have to do
> is run a tcpdump for 10 minutes or so when you notice a client stuck.
>
> * I have not rebooted the NFS Server nor have I restarted nfsd, but do not
> believe that is required as these settings are at the TCP level and I would
> expect new sessions to use the updated settings.
>
> The issue occurred after 5 days following a reboot of the client machines.
> I ran the capture information again to make use of the situation.
>
> #!/bin/sh
>
> while true
> do
> /bin/date >> /tmp/nfs-hang.log
> /bin/ps axHl | grep nfsd | grep -v grep >> /tmp/nfs-hang.log
> /usr/bin/procstat -kk 2947 >> /tmp/nfs-hang.log
> /usr/bin/procstat -kk 2944 >> /tmp/nfs-hang.log
> /bin/sleep 60
> done
>
> On the NFS Server
> Active Internet connections (including servers)
> Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
> tcp4       0      0 NFS.Server.IP.X.2049   NFS.Client.IP.X.48286  CLOSE_WAIT
>
> On the NFS Client
> tcp        0      0 NFS.Client.IP.X:48286  NFS.Server.IP.X:2049   FIN_WAIT2
>
> You had also asked for the output below.
>
> # nfsstat -E -s
> BackChannelCt BindConnToSes
>             0             0
>
> # sysctl vfs.nfsd.request_space_throttle_count
> vfs.nfsd.request_space_throttle_count: 0
>
> I see that you are testing a patch and I look forward to seeing the results.
>
> Jason Breitman
>
> On Mar 21, 2021, at 6:21 PM, Rick Macklem wrote:
>
> Youssef GHORBAL wrote:
>> Hi Jason,
>>
>>> On 17 Mar 2021, at 18:17, Jason Breitman wrote:
>>>
>>> Please review the details below and let me know if there is a setting
>>> that I should apply to my FreeBSD NFS Server or if there is a bug fix
>>> that I can apply to resolve my issue.
>>> I shared this information with the linux-nfs mailing list and they
>>> believe the issue is on the server side.
>>>
>>> Issue
>>> NFSv4 mounts periodically hang on the NFS Client.
>>>
>>> During this time, it is possible to manually mount from another NFS
>>> Server on the NFS Client having issues.
>>> Also, other NFS Clients are successfully mounting from the NFS Server
>>> in question.
>>> Rebooting the NFS Client appears to be the only solution.
>>
>> I had experienced a similar weird situation with periodically stuck Linux
>> NFS clients mounting Isilon NFS servers (Isilon is FreeBSD based but they
>> seem to have their own nfsd)
> Yes, my understanding is that Isilon uses a proprietary user space nfsd and
> not the kernel based RPC and nfsd in FreeBSD.
>
>> We've had better luck and we did manage to have packet captures on both
>> sides during the issue. The gist of it goes as follows:
>>
>> - Data flows correctly between SERVER and the CLIENT
>> - At some point SERVER starts decreasing its TCP Receive Window until it
>>   reaches 0
>> - The client (eager to send data) can only ack data sent by SERVER.
>> - When SERVER was done sending data, the client starts sending TCP Window
>>   Probes hoping that the TCP Window opens again so it can flush its buffers.
>> - SERVER responds with a TCP Zero Window to those probes.
> Having the window size drop to zero is not necessarily incorrect.
> If the server is overloaded (has a backlog of NFS requests), it can stop doing
> soreceive() on the socket (so the socket rcv buffer can fill up and the TCP window
> closes). This results in "backpressure" to stop the NFS client from flooding the
> NFS server with requests.
> --> However, once the backlog is handled, the nfsd should start to soreceive()
>     again and this should cause the window to open back up.
> --> Maybe this is broken in the socket/TCP code. I quickly got lost in
>     tcp_output() when it decides what to do about the rcvwin.
>
>> - After 6 minutes (the NFS server default Idle timeout) SERVER gracefully
>>   closes the TCP connection sending a FIN Packet (and still a TCP Window of 0)
> This probably does not happen for Jason's case, since the 6minute timeout
> is disabled when the TCP connection is assigned as a backchannel (most likely
> the case for NFSv4.1).
>
>> - CLIENT ACKs that FIN.
>> - SERVER goes into FIN_WAIT_2 state
>> - CLIENT closes its half of the socket and goes into LAST_ACK state.
>> - FIN is never sent by the client since there is still data in its SendQ
>>   and the receiver's TCP Window is still 0. At this stage the client starts
>>   sending TCP Window Probes again and again hoping that the server opens its
>>   TCP Window so it can flush its buffers and terminate its side of the socket.
>> - SERVER keeps responding with a TCP Zero Window to those probes.
>> => The last two steps go on and on for hours/days freezing the NFS mount
>>    bound to that TCP session.
>>
>> If we had a situation where CLIENT was responsible for closing the TCP
>> Window (and initiating the TCP FIN first) and the server wanting to send
>> data, we'd end up in the same state as you, I think.
>>
>> We've never had the root cause of why the SERVER decided to close the TCP
>> Window and no longer accept data; the fix on the Isilon part was to recycle
>> the FIN_WAIT_2 sockets more aggressively (net.inet.tcp.fast_finwait2_recycle=1 &
>> net.inet.tcp.finwait2_timeout=5000).
>> Once the socket is recycled, at the next occurrence of a CLIENT TCP Window
>> probe SERVER sends a RST, triggering the teardown of the session on the
>> client side, a new TCP handshake, etc., and traffic flows again (NFS starts
>> responding)
>>
>> To avoid rebooting the client (and before the aggressive FIN_WAIT_2 was
>> implemented on the Isilon side) we've added a check script on the client
>> that detects LAST_ACK sockets on the client and through an iptables rule
>> enforces a TCP RST. Something like:
>> -A OUTPUT -p tcp -d $nfs_server_addr --sport $local_port -j REJECT --reject-with tcp-reset
>> (the script removes this iptables rule as soon as the LAST_ACK disappears)
>>
>> The bottom line would be to have a packet capture during the outage (client
>> and/or server side); it will show you at least the shape of the TCP exchange
>> when NFS is stuck.
> Interesting story and good work w.r.t. sleuthing, Youssef, thanks.
>
> I looked at Jason's log and it shows everything is ok w.r.t the nfsd threads.
> (They're just waiting for RPC requests.)
> However, I do now think I know why the soclose() does not happen.
> When the TCP connection is assigned as a backchannel, that takes a reference
> cnt on the structure. This refcnt won't be released until the connection is
> replaced by a BindConnectionToSession operation from the client. But that won't
> happen until the client creates a new TCP connection.
> --> No refcnt release --> no refcnt of 0 --> no soclose().
>
> I've created the attached patch (completely different from the previous one)
> that adds soshutdown(SHUT_WR) calls in the three places where the TCP
> connection is going away. This seems to get it past CLOSE_WAIT without a
> soclose().
> --> I know you are not comfortable with patching your server, but I do think
>     this change will get the socket shutdown to complete.
>
> There are a couple more things you can check on the server...
> # nfsstat -E -s
> --> Look for the count under "BindConnToSes".
> --> If non-zero, backchannels have been assigned.
> # sysctl -a | fgrep request_space_throttle_count
> --> If non-zero, the server has been overloaded at some point.
>
> I think the attached patch might work around the problem.
> The code that should open up the receive window needs to be checked.
> I am also looking at enabling the 6minute timeout when a backchannel is
> assigned.
>
> rick
>
> Youssef

_______________________________________________
freebsd-net@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"