From owner-freebsd-net@freebsd.org  Sun Apr  4 20:29:04 2021
From: Rick Macklem <rmacklem@uoguelph.ca>
To: "tuexen@freebsd.org" <tuexen@freebsd.org>
Cc: "Scheffenegger, Richard", Youssef GHORBAL, "freebsd-net@freebsd.org"
Subject: Re: NFS Mount Hangs
Date: Sun, 4 Apr 2021 20:28:59 +0000
Oops, yes the packet capture is on freefall (forgot to mention that;-).

You should be able to:
% fetch https://people.freebsd.org/~rmacklem/linuxtofreenfs.pcap

Some useful packet #s are:
1949 - partitioning starts
2005 - partition healed
2060 - last RST
2067 - SYN -> gets going again

This was taken at the Linux end. I have the FreeBSD end too, although I
don't think it tells you anything more.

Have fun with it, rick
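To jump straight to those packets once the capture is fetched, a few lines
of libpcap will do (a C sketch; it assumes linuxtofreenfs.pcap sits in the
current directory and only prints timestamps and lengths):

    #include <pcap/pcap.h>
    #include <stdio.h>

    int
    main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *p = pcap_open_offline("linuxtofreenfs.pcap", errbuf);
        struct pcap_pkthdr *hdr;
        const u_char *data;
        int n = 0;

        if (p == NULL) {
            fprintf(stderr, "pcap_open_offline: %s\n", errbuf);
            return 1;
        }
        /* Walk the file, printing packets 1949..2067 (the range above). */
        while (pcap_next_ex(p, &hdr, &data) == 1 && n < 2067) {
            n++;
            if (n >= 1949)
                printf("#%d %ld.%06ld len %u\n", n,
                    (long)hdr->ts.tv_sec, (long)hdr->ts.tv_usec,
                    hdr->caplen);
        }
        pcap_close(p);
        return 0;
    }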
________________________________________
From: tuexen@freebsd.org
Sent: Sunday, April 4, 2021 12:41 PM
To: Rick Macklem
Cc: Scheffenegger, Richard; Youssef GHORBAL; freebsd-net@freebsd.org
Subject: Re: NFS Mount Hangs

> On 4. Apr 2021, at 17:27, Rick Macklem wrote:
>
> Well, I'm going to cheat and top post, since this is related info. and
> not really part of the discussion...
>
> I've been testing network partitioning between a Linux client (5.2 kernel)
> and a FreeBSD-current NFS server. I have not gotten a solid hang, but
> I have had the Linux client doing "battle" with the FreeBSD server for
> several minutes after un-partitioning the connection.
>
> The battle basically consists of the Linux client sending an RST, followed
> by a SYN.
> The FreeBSD server ignores the RST and just replies with the same old ack.
> --> This varies from "just a SYN" that succeeds to 100+ cycles of the above
>     over several minutes.
>
> I had thought that an RST was a "pretty heavy hammer", but FreeBSD seems
> pretty good at ignoring it.
>
> A full packet capture of one of these is in /home/rmacklem/linuxtofreenfs.pcap
> in case anyone wants to look at it.
On freefall? I would like to take a look at it...

Best regards
Michael
>
> Here's a tcpdump snippet of the interesting part (see the *** comments):
> 19:10:09.305775 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [P.], seq 202585:202749, ack 212293, win 29128, options [nop,nop,TS val 2073636037 ecr 2671204825], length 164: NFS reply xid 613153685 reply ok 160 getattr NON 4 ids 0/33554432 sz 0
> 19:10:09.305850 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [.], ack 202749, win 501, options [nop,nop,TS val 2671204825 ecr 2073636037], length 0
> *** Network is now partitioned...
>
> 19:10:09.407840 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 212293:212525, ack 202749, win 501, options [nop,nop,TS val 2671204927 ecr 2073636037], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
> 19:10:09.615779 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 212293:212525, ack 202749, win 501, options [nop,nop,TS val 2671205135 ecr 2073636037], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
> 19:10:09.823780 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 212293:212525, ack 202749, win 501, options [nop,nop,TS val 2671205343 ecr 2073636037], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
> *** Lots of lines snipped.
>
> 19:13:41.295783 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
> 19:13:42.319767 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
> 19:13:46.351966 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
> 19:13:47.375790 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
> 19:13:48.399786 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
> *** Network is now unpartitioned...
>
> 19:13:48.399990 ARP, Reply nfsv4-new3.home.rick is-at d4:be:d9:07:81:72 (oui Unknown), length 46
> 19:13:48.400002 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 416692300, win 64240, options [mss 1460,sackOK,TS val 2671421871 ecr 0,nop,wscale 7], length 0
> 19:13:48.400185 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 212293, win 29127, options [nop,nop,TS val 2073855137 ecr 2671204825], length 0
> 19:13:48.400273 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [R], seq 964161458, win 0, length 0
> 19:13:49.423833 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 416692300, win 64240, options [mss 1460,sackOK,TS val 2671424943 ecr 0,nop,wscale 7], length 0
> 19:13:49.424056 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 212293, win 29127, options [nop,nop,TS val 2073856161 ecr 2671204825], length 0
> *** This "battle" goes on for 223sec...
>     I snipped out 13 cycles of this "Linux sends an RST, followed by SYN"
>     "FreeBSD replies with same old ACK". In another test run I saw this
>     cycle continue non-stop for several minutes. This time, the Linux
>     client paused for a while (see ARPs below).
>
> 19:13:49.424101 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [R], seq 964161458, win 0, length 0
> 19:13:53.455867 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 416692300, win 64240, options [mss 1460,sackOK,TS val 2671428975 ecr 0,nop,wscale 7], length 0
> 19:13:53.455991 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 212293, win 29127, options [nop,nop,TS val 2073860193 ecr 2671204825], length 0
> *** Snipped a bunch of stuff out, mostly ARPs, plus one more RST.
>
> 19:16:57.775780 ARP, Request who-has nfsv4-new3.home.rick tell nfsv4-linux.home.rick, length 28
> 19:16:57.775937 ARP, Reply nfsv4-new3.home.rick is-at d4:be:d9:07:81:72 (oui Unknown), length 46
> 19:16:57.980240 ARP, Request who-has nfsv4-new3.home.rick tell 192.168.1.254, length 46
> 19:16:58.555663 ARP, Request who-has nfsv4-new3.home.rick tell 192.168.1.254, length 46
> 19:17:00.104701 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [F.], seq 202749, ack 212293, win 29128, options [nop,nop,TS val 2074046846 ecr 2671204825], length 0
> 19:17:15.664354 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [F.], seq 202749, ack 212293, win 29128, options [nop,nop,TS val 2074062406 ecr 2671204825], length 0
> 19:17:31.239246 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [R.], seq 202750, ack 212293, win 0, options [nop,nop,TS val 2074077981 ecr 2671204825], length 0
> *** FreeBSD finally acknowledges the RST 38sec after Linux sent the last
>     of 13 (100+ for another test run).
>
> 19:17:51.535979 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [S], seq 4247692373, win 64240, options [mss 1460,sackOK,TS val 2671667055 ecr 0,nop,wscale 7], length 0
> 19:17:51.536130 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [S.], seq 661237469, ack 4247692374, win 65535, options [mss 1460,nop,wscale 6,sackOK,TS val 2074098278 ecr 2671667055], length 0
> *** Now back in business...
>
> 19:17:51.536218 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [.], ack 1, win 502, options [nop,nop,TS val 2671667055 ecr 2074098278], length 0
> 19:17:51.536295 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 1:233, ack 1, win 502, options [nop,nop,TS val 2671667056 ecr 2074098278], length 232: NFS request xid 629930901 228 getattr fh 0,1/53
> 19:17:51.536346 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 233:505, ack 1, win 502, options [nop,nop,TS val 2671667056 ecr 2074098278], length 272: NFS request xid 697039765 132 getattr fh 0,1/53
> 19:17:51.536515 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [.], ack 505, win 29128, options [nop,nop,TS val 2074098279 ecr 2671667056], length 0
> 19:17:51.536553 IP nfsv4-linux.home.rick.apex-mesh > nfsv4-new3.home.rick.nfsd: Flags [P.], seq 505:641, ack 1, win 502, options [nop,nop,TS val 2671667056 ecr 2074098279], length 136: NFS request xid 730594197 132 getattr fh 0,1/53
> 19:17:51.536562 IP nfsv4-new3.home.rick.nfsd > nfsv4-linux.home.rick.apex-mesh: Flags [P.], seq 1:49, ack 505, win 29128, options [nop,nop,TS val 2074098279 ecr 2671667056], length 48: NFS reply xid 697039765 reply ok 44 getattr ERROR: unk 10063
>
> This error 10063 after the partition heals is also "bad news". It indicates the Session
> (which is supposed to maintain "exactly once" RPC semantics) is broken. I'll admit I
> suspect a Linux client bug, but will be investigating further.
>
> So, hopefully TCP conversant folk can confirm if the above is correct behaviour
> or if the RST should be ack'd sooner?
>
> I could also see this becoming a "forever" TCP battle for other versions of Linux client.
>
> rick
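The server behaviour above is consistent with RFC 5961-style RST
validation: a reset is only honoured if its sequence number is exactly the
one the receiver expects next; an in-window but inexact RST only provokes
a "challenge ACK", and an out-of-window RST is dropped silently. A minimal
sketch of that check in C (illustrative only, with invented names; this is
not the actual FreeBSD tcp_input() code):

    #include <stdint.h>

    struct tcb {
        uint32_t rcv_nxt;       /* next sequence number we expect */
        uint32_t rcv_wnd;       /* current receive window */
    };

    enum rst_action { RST_ACCEPT, RST_CHALLENGE_ACK, RST_DROP };

    /* Classify an arriving RST segment (cf. RFC 5961, section 3.2). */
    static enum rst_action
    check_rst(const struct tcb *tp, uint32_t seg_seq)
    {
        if (seg_seq == tp->rcv_nxt)
            return RST_ACCEPT;        /* exact match: tear the connection down */
        if (seg_seq - tp->rcv_nxt < tp->rcv_wnd)
            return RST_CHALLENGE_ACK; /* in-window: re-send the current ACK */
        return RST_DROP;              /* out of window: ignore silently */
    }

In the trace, the Linux RST carries seq 964161458 while the server still
expects 212293, so it is far out of window and gets dropped; each
retransmitted SYN for the old connection then just provokes another ACK
with the old numbers, which is the "battle" shown above.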
>
> ________________________________________
> From: Scheffenegger, Richard
> Sent: Sunday, April 4, 2021 7:50 AM
> To: Rick Macklem; tuexen@freebsd.org
> Cc: Youssef GHORBAL; freebsd-net@freebsd.org
> Subject: Re: NFS Mount Hangs
>
> For what it's worth, SUSE found two bugs in the Linux nf_conntrack
> (stateful firewall) and the pfifo-fast scheduler, which could conspire to
> make TCP sessions hang forever.
>
> One is a missed update when the client is not using the noresvport mount
> option, which makes the firewall think RSTs are illegal (and drop them);
>
> The fast scheduler can run into an issue if only a single packet should
> be forwarded (note that this is not the default scheduler, but often
> recommended for perf, as it runs lockless and at lower cpu cost than pfq
> (default)). If no other/additional packet pushes out that last packet of
> a flow, it can become stuck forever...
>
> I can try getting the relevant bug info next week...
>
> ________________________________
> From: owner-freebsd-net@freebsd.org on behalf of Rick Macklem
> Sent: Friday, April 2, 2021 11:31:01 PM
> To: tuexen@freebsd.org
> Cc: Youssef GHORBAL; freebsd-net@freebsd.org
> Subject: Re: NFS Mount Hangs
>
> tuexen@freebsd.org wrote:
>>> On 2. Apr 2021, at 02:07, Rick Macklem wrote:
>>>
>>> I hope you don't mind a top post...
>>> I've been testing network partitioning between the only Linux client
>>> I have (5.2 kernel) and a FreeBSD server with the xprtdied.patch
>>> (does soshutdown(..SHUT_WR) when it knows the socket is broken)
>>> applied to it.
>>>
>>> I'm not enough of a TCP guy to know if this is useful, but here's what
>>> I see...
>>>
>>> While partitioned:
>>> On the FreeBSD server end, the socket either goes to CLOSED during
>>> the network partition or stays ESTABLISHED.
>> If it goes to CLOSED you called shutdown(, SHUT_WR) and the peer also
>> sent a FIN, but you never called close() on the socket.
>> If the socket stays in ESTABLISHED, there is no communication ongoing,
>> I guess, and therefore the server does not even detect that the peer
>> is not reachable.
>>> On the Linux end, the socket seems to remain ESTABLISHED for a
>>> little while, and then disappears.
>> So how does Linux detect the peer is not reachable?
> Well, here's what I see in a packet capture in the Linux client once
> I partition it (just unplug the net cable):
> - lots of retransmits of the same segment (with ACK) for 54sec
> - then only ARP queries
>
> Once I plug the net cable back in:
> - ARP works
> - one more retransmit of the same segment
> - receives RST from FreeBSD
>   ** So, is this now a "new" TCP connection, despite
>      using the same port#.
>      --> It matters for NFS, since "new connection"
>          implies "must retry all outstanding RPCs".
> - sends SYN
> - receives SYN, ACK from FreeBSD
>   --> connection starts working again
>   Always uses same port#.
>
> On the FreeBSD server end:
> - receives the last retransmit of the segment (with ACK)
> - sends RST
> - receives SYN
> - sends SYN, ACK
>
> I thought that there was no RST in the capture I looked at
> yesterday, so I'm not sure if FreeBSD always sends an RST,
> but the Linux client behaviour was the same. (Sent a SYN, etc).
> The socket disappears from the Linux "netstat -a" and I
> suspect that happens after about 54sec, but I am not sure
> about the timing.
>
>>> After unpartitioning:
>>> On the FreeBSD server end, you get another socket showing up at
>>> the same port#
>>> Active Internet connections (including servers)
>>> Proto Recv-Q Send-Q  Local Address    Foreign Address  (state)
>>> tcp4       0      0  nfsv4-new3.nfsd  nfsv4-linux.678  ESTABLISHED
>>> tcp4       0      0  nfsv4-new3.nfsd  nfsv4-linux.678  CLOSED
>>>
>>> The Linux client shows the same connection ESTABLISHED.
> But disappears from "netstat -a" for a while during the partitioning.
>
>>> (The mount sometimes reports an error. I haven't looked at packet
>>> traces to see if it retries RPCs or why the errors occur.)
> I have now done so, as above.
>
>>> --> However I never get hangs.
>>> Sometimes it goes to SYN_SENT for a while and the FreeBSD server
>>> shows FIN_WAIT_1, but then both ends go to ESTABLISHED and the
>>> mount starts working again.
>>>
>>> The most obvious thing is that the Linux client always keeps using
>>> the same port#. (The FreeBSD client will use a different port# when
>>> it does a TCP reconnect after no response from the NFS server for
>>> a little while.)
>>>
>>> What do those TCP conversant think?
>> I guess you are never calling close() on the socket, for which
>> the connection state is CLOSED.
> Ok, that makes sense. For this case the Linux client has not done a
> BindConnectionToSession to re-assign the back channel.
> I'll have to bug them about this. However, I'll bet they'll answer
> that I have to tell them the back channel needs re-assignment
> or something like that.
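Michael's point about the CLOSED state is worth spelling out: a connection
can reach CLOSED on the wire while the kernel still holds the socket,
because only close() releases the descriptor. A tiny userspace sketch of
the difference (hypothetical, just to illustrate the two calls):

    #include <sys/socket.h>
    #include <unistd.h>

    /* 's' is assumed to be a connected TCP socket. */
    static void
    half_close_then_close(int s)
    {
        /*
         * Send our FIN but keep the receive side open: the connection
         * moves to FIN_WAIT_1/FIN_WAIT_2 locally. If the peer then sends
         * its FIN as well, netstat can show the connection as CLOSED,
         * yet the descriptor and the kernel socket still exist.
         */
        shutdown(s, SHUT_WR);

        /* ... drain remaining data with recv() until it returns 0 ... */

        /* Only this releases the descriptor and lets the socket go away. */
        close(s);
    }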
>
> I am pretty certain they are broken, in that the client needs to
> retry all outstanding RPCs.
>
> For others, here's the long winded version of this that I just
> put on the phabricator review:
> In the server side kernel RPC, the socket (struct socket *) is in a
> structure called SVCXPRT (normally pointed to by "xprt").
> These structures are ref counted and the soclose() is done
> when the ref. cnt goes to zero. My understanding is that
> "struct socket *" is free'd by soclose() so this cannot be done
> before the xprt ref. cnt goes to zero.
>
> For NFSv4.1/4.2 there is something called a back channel
> which means that a "xprt" is used for server->client RPCs,
> although the TCP connection is established by the client
> to the server.
> --> This back channel holds a ref cnt on "xprt" until the
>     client re-assigns it to a different TCP connection
>     via an operation called BindConnectionToSession
>     and the Linux client is not doing this soon enough,
>     it appears.
>
> So, the soclose() is delayed, which is why I think the
> TCP connection gets stuck in CLOSE_WAIT and that is
> why I've added the soshutdown(..SHUT_WR) calls,
> which can happen before the client gets around to
> re-assigning the back channel.
>
> Thanks for your help with this Michael, rick
>
> Best regards
> Michael
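To make that lifetime rule concrete, here is a sketch of the refcount
logic Rick describes (the names are invented; this is not the actual
sys/rpc SVCXPRT code):

    #include <stdatomic.h>

    struct socket;
    void soclose(struct socket *so);        /* frees the struct socket */

    struct svcxprt_sketch {
        atomic_int      xp_refs;            /* held by in-flight RPCs and,
                                               for NFSv4.1, by the back
                                               channel */
        struct socket  *xp_socket;
    };

    static void
    xprt_release(struct svcxprt_sketch *xprt)
    {
        /*
         * The back channel's reference is only dropped once the client
         * sends BindConnectionToSession for a new connection, so until
         * then the count never reaches zero, soclose() is delayed, and
         * the TCP connection sits in CLOSE_WAIT.
         */
        if (atomic_fetch_sub_explicit(&xprt->xp_refs, 1,
            memory_order_acq_rel) == 1)
            soclose(xprt->xp_socket);
    }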
>>
>> rick
>> ps: I can capture packets while doing this, if anyone has a use
>> for them.
>>
>> ________________________________________
>> From: owner-freebsd-net@freebsd.org on behalf of Youssef GHORBAL
>> Sent: Saturday, March 27, 2021 6:57 PM
>> To: Jason Breitman
>> Cc: Rick Macklem; freebsd-net@freebsd.org
>> Subject: Re: NFS Mount Hangs
>>
>> On 27 Mar 2021, at 13:20, Jason Breitman wrote:
>>
>> The issue happened again so we can say that disabling TSO and LRO on the NIC did not resolve this issue.
>> # ifconfig lagg0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso
>> # ifconfig lagg0
>> lagg0: flags=8943 metric 0 mtu 1500
>> options=8100b8
>>
>> We can also say that the sysctl settings did not resolve this issue.
>>
>> # sysctl net.inet.tcp.fast_finwait2_recycle=1
>> net.inet.tcp.fast_finwait2_recycle: 0 -> 1
>>
>> # sysctl net.inet.tcp.finwait2_timeout=1000
>> net.inet.tcp.finwait2_timeout: 60000 -> 1000
>>
>> I don't think those will do anything in your case since the FIN_WAIT2 are on the client side and those sysctls are for BSD.
>> By the way it seems that Linux automatically recycles TCP sessions in FIN_WAIT2 after 60 seconds (sysctl net.ipv4.tcp_fin_timeout)
>>
>> tcp_fin_timeout (integer; default: 60; since Linux 2.2)
>>        This specifies how many seconds to wait for a final FIN
>>        packet before the socket is forcibly closed. This is
>>        strictly a violation of the TCP specification, but
>>        required to prevent denial-of-service attacks. In Linux
>>        2.2, the default value was 180.
>>
>> So I don't get why it gets stuck in the FIN_WAIT2 state anyway.
>>
>> You really need to have a packet capture during the outage (client and server side) so you'll get the over-the-wire chat and can start speculating from there.
>> No need to capture the beginning of the outage for now. All you have to do is run a tcpdump for 10 minutes or so when you notice a client stuck.
>>
>> * I have not rebooted the NFS Server nor have I restarted nfsd, but do not believe that is required as these settings are at the TCP level and I would expect new sessions to use the updated settings.
>>
>> The issue occurred after 5 days following a reboot of the client machines.
>> I ran the capture information again to make use of the situation.
>>
>> #!/bin/sh
>>
>> while true
>> do
>> /bin/date >> /tmp/nfs-hang.log
>> /bin/ps axHl | grep nfsd | grep -v grep >> /tmp/nfs-hang.log
>> /usr/bin/procstat -kk 2947 >> /tmp/nfs-hang.log
>> /usr/bin/procstat -kk 2944 >> /tmp/nfs-hang.log
>> /bin/sleep 60
>> done
>>
>> On the NFS Server
>> Active Internet connections (including servers)
>> Proto Recv-Q Send-Q  Local Address          Foreign Address        (state)
>> tcp4       0      0  NFS.Server.IP.X.2049   NFS.Client.IP.X.48286  CLOSE_WAIT
>>
>> On the NFS Client
>> tcp        0      0  NFS.Client.IP.X:48286  NFS.Server.IP.X:2049   FIN_WAIT2
>>
>> You had also asked for the output below.
>>
>> # nfsstat -E -s
>> BackChannelCt  BindConnToSes
>>             0              0
>>
>> # sysctl vfs.nfsd.request_space_throttle_count
>> vfs.nfsd.request_space_throttle_count: 0
>>
>> I see that you are testing a patch and I look forward to seeing the results.
>>
>> Jason Breitman
>>
>> On Mar 21, 2021, at 6:21 PM, Rick Macklem wrote:
>>
>> Youssef GHORBAL wrote:
>>> Hi Jason,
>>>
>>>> On 17 Mar 2021, at 18:17, Jason Breitman wrote:
>>>>
>>>> Please review the details below and let me know if there is a setting that I should apply to my FreeBSD NFS Server or if there is a bug fix that I can apply to resolve my issue.
>>>> I shared this information with the linux-nfs mailing list and they believe the issue is on the server side.
>>>>
>>>> Issue
>>>> NFSv4 mounts periodically hang on the NFS Client.
>>>>
>>>> During this time, it is possible to manually mount from another NFS Server on the NFS Client having issues.
>>>> Also, other NFS Clients are successfully mounting from the NFS Server in question.
>>>> Rebooting the NFS Client appears to be the only solution.
>>>
>>> I had experienced a similar weird situation with periodically stuck Linux NFS clients mounting Isilon NFS servers (Isilon is FreeBSD based but they seem to have their own nfsd)
>> Yes, my understanding is that Isilon uses a proprietary user space nfsd and
>> not the kernel based RPC and nfsd in FreeBSD.
>>
>>> We've had better luck and we did manage to have packet captures on both sides during the issue. The gist of it goes as follows:
>>>
>>> - Data flows correctly between SERVER and the CLIENT
>>> - At some point SERVER starts decreasing its TCP Receive Window until it reaches 0
>>> - The client (eager to send data) can only ack data sent by SERVER.
>>> - When SERVER was done sending data, the client starts sending TCP Window Probes hoping that the TCP Window opens again so it can flush its buffers.
>>> - SERVER responds with a TCP Zero Window to those probes.
>> Having the window size drop to zero is not necessarily incorrect.
>> If the server is overloaded (has a backlog of NFS requests), it can stop doing
>> soreceive() on the socket (so the socket rcv buffer can fill up and the TCP window
>> closes). This results in "backpressure" to stop the NFS client from flooding the
>> NFS server with requests.
>> --> However, once the backlog is handled, the nfsd should start to soreceive()
>>     again and this should cause the window to open back up.
>> --> Maybe this is broken in the socket/TCP code. I quickly got lost in
>>     tcp_output() when it decides what to do about the rcvwin.
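The backpressure mechanism is easy to reproduce in userspace: if a
receiver simply stops reading, its socket buffer fills and TCP starts
advertising a zero window to the peer. A hedged illustration (error
handling elided; lfd is assumed to be a bound, listening TCP socket):

    #include <sys/socket.h>
    #include <unistd.h>

    static void
    serve_one_slowly(int lfd)
    {
        char buf[4096];
        int cfd = accept(lfd, NULL, NULL);

        if (cfd < 0)
            return;
        /*
         * Simulate an overloaded server by not reading for a while.
         * The peer keeps sending until our receive buffer fills, then
         * sees "win 0" and falls back to window probes.
         */
        sleep(30);
        /*
         * Drain the buffer: each recv() frees receive-buffer space, so
         * the kernel sends window updates and the peer resumes. If this
         * never happens (the suspected breakage), the probes go on
         * forever.
         */
        while (recv(cfd, buf, sizeof(buf), 0) > 0)
            ;
        close(cfd);
    }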
>>
>>> - After 6 minutes (the NFS server default Idle timeout) SERVER gracefully closes the TCP connection sending a FIN Packet (and still a TCP Window 0)
>> This probably does not happen for Jason's case, since the 6 minute timeout
>> is disabled when the TCP connection is assigned as a backchannel (most likely
>> the case for NFSv4.1).
>>
>>> - CLIENT ACKs that FIN.
>>> - SERVER goes in FIN_WAIT_2 state
>>> - CLIENT closes its half part of the socket and goes in LAST_ACK state.
>>> - FIN is never sent by the client since there is still data in its SendQ and the receiver's TCP Window is still 0. At this stage the client starts sending TCP Window Probes again and again hoping that the server opens its TCP Window so it can flush its buffers and terminate its side of the socket.
>>> - SERVER keeps responding with a TCP Zero Window to those probes.
>>> => The last two steps go on and on for hours/days, freezing the NFS mount bound to that TCP session.
>>>
>>> If we had a situation where CLIENT was responsible for closing the TCP Window (and initiating the TCP FIN first) and the server wanting to send data, we'd end up in the same state as you, I think.
>>>
>>> We've never had the root cause of why the SERVER decided to close the TCP Window and no longer accept data. The fix on the Isilon part was to recycle the FIN_WAIT_2 sockets more aggressively (net.inet.tcp.fast_finwait2_recycle=1 & net.inet.tcp.finwait2_timeout=5000). Once the socket is recycled, at the next occurrence of a CLIENT TCP Window probe SERVER sends a RST, triggering the teardown of the session on the client side, a new TCP handshake, etc. and traffic flows again (NFS starts responding)
>>>
>>> To avoid rebooting the client (and before the aggressive FIN_WAIT_2 recycling was implemented on the Isilon side) we've added a check script on the client that detects LAST_ACK sockets on the client and through an iptables rule enforces a TCP RST, something like:
>>> -A OUTPUT -p tcp -d $nfs_server_addr --sport $local_port -j REJECT --reject-with tcp-reset
>>> (the script removes this iptables rule as soon as the LAST_ACK disappears)
>>>
>>> The bottom line would be to have a packet capture during the outage (client and/or server side), it will show you at least the shape of the TCP exchange when NFS is stuck.
>> Interesting story and good work w.r.t. sleuthing, Youssef, thanks.
>>
>> I looked at Jason's log and it shows everything is ok w.r.t. the nfsd threads.
>> (They're just waiting for RPC requests.)
>> However, I do now think I know why the soclose() does not happen.
>> When the TCP connection is assigned as a backchannel, that takes a reference
>> cnt on the structure. This refcnt won't be released until the connection is
>> replaced by a BindConnectionToSession operation from the client. But that won't
>> happen until the client creates a new TCP connection.
>> --> No refcnt release --> no refcnt of 0 --> no soclose().
>>
>> I've created the attached patch (completely different from the previous one)
>> that adds soshutdown(SHUT_WR) calls in the three places where the TCP
>> connection is going away. This seems to get it past CLOSE_WAIT without a
>> soclose().
>> --> I know you are not comfortable with patching your server, but I do think
>>     this change will get the socket shutdown to complete.
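For reference, a sketch of the soshutdown() idea in kernel terms (this is
illustrative, not the actual patch under review; the function and field
names are invented):

    #include <sys/socket.h>         /* SHUT_WR */

    struct socket;
    int soshutdown(struct socket *so, int how);

    struct xprt_sketch {
        struct socket *xp_socket;
    };

    /*
     * Called in the places where the server learns the connection is
     * going away. The write-side shutdown sends our FIN right away, so
     * the connection can progress past CLOSE_WAIT even though soclose()
     * still has to wait for the back channel's reference to drop.
     */
    static void
    xprt_mark_dead(struct xprt_sketch *xprt)
    {
        (void)soshutdown(xprt->xp_socket, SHUT_WR);
    }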
>>
>> There are a couple more things you can check on the server...
>> # nfsstat -E -s
>> --> Look for the count under "BindConnToSes".
>> --> If non-zero, backchannels have been assigned
>> # sysctl -a | fgrep request_space_throttle_count
>> --> If non-zero, the server has been overloaded at some point.
>>
>> I think the attached patch might work around the problem.
>> The code that should open up the receive window needs to be checked.
>> I am also looking at enabling the 6 minute timeout when a backchannel is
>> assigned.
>>
>> rick
>>
>> Youssef
>>
>> _______________________________________________
>> freebsd-net@freebsd.org mailing list
>> https://lists.freebsd.org/mailman/listinfo/freebsd-net
>> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"