From: Rick Macklem <rmacklem@uoguelph.ca>
To: Alan Somers
CC: FreeBSD Stable ML <freebsd-stable@freebsd.org>
Subject: Re: nfs client's OpenOwner count increases without bounds
Date: Thu, 5 May 2022 14:49:32 +0000
List-Archive: https://lists.freebsd.org/archives/freebsd-stable
Alan Somers wrote:
> On Wed, May 4, 2022 at 6:56 PM Rick Macklem wrote:
> >
> > Alan Somers wrote:
> > > On Wed, May 4, 2022 at 5:23 PM Rick Macklem wrote:
> > > >
> > > > Alan Somers wrote:
> > > > > I have a FreeBSD 13 (tested on both 13.0-RELEASE and 13.1-RC5) desktop
> > > > > mounting /usr/home over NFS 4.2 from a 13.0-RELEASE server. It
> > > > > worked fine until a few weeks ago. Now, the desktop's performance
> > > > > slowly degrades. It becomes less and less responsive until I restart
> > > > > X after 2-3 days. /var/log/Xorg.0.log shows plenty of entries like
> > > > > "AT keyboard: client bug: event processing lagging behind by 112ms,
> > > > > your system is too slow". "top -S" shows that the busiest process is
> > > > > nfscl. A dtrace profile shows that nfscl is spending most of its time
> > > > > in nfscl_cleanup_common, in the loop over all nfsclowner objects.
> > > > > Running "nfsdumpstate" on the server shows thousands of OpenOwners for
> > > > > that client, and < 10 for any other NFS client. The OpenOwner count
> > > > > increases by about 3000 per day. And yet, "fstat" shows only a couple
> > > > > hundred open files on the NFS file system. Why are OpenOwners so
> > > > > high? Killing most of my desktop processes doesn't seem to make a
> > > > > difference. Restarting X does improve the perceived responsiveness,
> > > > > though it does not change the number of OpenOwners.
> > > > >
> > > > > How can I figure out which process(es) are responsible for the
> > > > > excessive OpenOwners?
> > > > An OpenOwner represents a process on the client.
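[As an illustration of the diagnostics mentioned above: the dtrace predicate and the two-column nfsdumpstate layout below are assumptions for the sketch, not the exact formats.]

```shell
# Sampling kernel stacks for the nfscl thread, roughly as in the profile
# Alan describes, needs root and dtrace, e.g.:
#   dtrace -n 'profile-997 /execname == "nfscl"/ { @[stack()] = count(); }'

# Tallying per-client state entries from "nfsdumpstate"-style output can be
# done with awk; the sample input below stands in for the real listing.
printf '%s\n' \
  '10.0.0.5 OpenOwner' \
  '10.0.0.5 OpenOwner' \
  '10.0.0.5 Open' \
  '10.0.0.7 OpenOwner' |
awk '$2 == "OpenOwner" { n[$1]++ } END { for (c in n) print c, n[c] }' | sort
```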
The OpenOwner
> > > > name is an encoding of pid + process startup time.
> > > > However, I can't think of an easy way to get at the OpenOwner name.
> > > >
> > > > Now, why aren't they going away, hmm..
> > > >
> > > > I'm assuming the # of Opens is not large?
> > > > (Openowners cannot go away until all associated opens
> > > > are closed.)
> > >
> > > Oh, I didn't mention that yes, the number of Opens is large. Right
> > > now, for example, I have 7950 OpenOwners and 8277 Opens.
> > Well, the openowners cannot go away until the opens go away,
> > so the problem is that the opens are not getting closed.
> >
> > Close happens when the v_usecount on the vnode goes to zero.
> > Something is retaining the v_usecount. One possibility is that most
> > of the opens are for the same file, but with different openowners.
> > If that is the case, the "oneopenown" mount option will deal with it.
> >
> > Another possibility is that something is retaining a v_usecount
> > reference on a lot of the vnodes. (This used to happen when a nullfs
> > mount with caching enabled was on top of the nfs mount.)
> > I don't know what other things might do that?
>
> Yeah, I remember the nullfs problem. But I'm not using nullfs on this
> computer anymore. Is there any debugging facility that can list
> vnodes? All I know of is "fstat", and that doesn't show anywhere near
> the number of NFS Opens.
Don't ask me. My debugging technology consists of printf()s.

An NFSv4 Open is for an <openowner, file> pair. Many different
processes are probably opening the same file. The "oneopenown" option
makes the client use the same openowner for all opens, so that there is
one open per file.

> >
> > > >
> > > > Commit 1cedb4ea1a79 in main changed the semantics of this
> > > > a little, to avoid a use-after-free bug. However, it is dated
> > > > Feb.
25, 2022 and is not in 13.0, so I don't think it could
> > > > be the culprit.
> > > >
> > > > Essentially, the function called nfscl_cleanupkext() should call
> > > > nfscl_procdoesntexist(), which returns true after the process has
> > > > exited; when that is the case, it calls nfscl_cleanup_common().
> > > > --> nfscl_cleanup_common() will either get rid of the openowner or,
> > > > if there are still children with open file descriptors, mark it "defunct"
> > > > so it can be free'd once the children close the file.
> > > >
> > > > It could be that X is now somehow creating a long chain of processes
> > > > where the children inherit a file descriptor and that delays the cleanup
> > > > indefinitely?
> > > > Even then, everything should get cleaned up once you kill off X?
> > > > (It might take a couple of seconds after killing all the processes off.)
> > > >
> > > > Another possibility is that the "nfscl" thread is wedged somehow.
> > > > It is the one that will call nfscl_cleanupkext() once/sec. If it never
> > > > gets called, the openowners will never go away.
> > > >
> > > > Being old fashioned, I'd probably try to figure this out by adding
> > > > some printf()s to nfscl_cleanupkext() and nfscl_cleanup_common().
> > >
> > > dtrace shows that nfscl_cleanupkext() is getting called at about 0.6 Hz.
> > That sounds ok. Since there are a lot of opens/openowners, it probably
> > is getting behind.
> >
> > > >
> > > > To avoid the problem, you can probably just use the "oneopenown"
> > > > mount option. With that option, only one openowner is used for
> > > > all opens.
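[Concretely, the option goes on the client-side mount; a sketch with a placeholder server name and paths, see mount_nfs(8):]

```shell
# On the client, as root: remount the filesystem with a single openowner
# shared by all opens.  "server" and the paths are illustrative.
umount /usr/home
mount -t nfs -o nfsv4,minorversion=2,oneopenown server:/usr/home /usr/home

# Or persistently, via /etc/fstab:
# server:/usr/home  /usr/home  nfs  rw,nfsv4,minorversion=2,oneopenown  0  0
```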
(Having separate openowners for each process was needed
> > > > for NFSv4.0, but not NFSv4.1/4.2.)
> > > >
> > > > > Or is it just a red herring and I shouldn't
> > > > > worry?
> > > > Well, you can probably avoid the problem by using the "oneopenown"
> > > > mount option.
> > >
> > > Ok, I'm trying that now. After unmounting and remounting NFS,
> > > "nfsstat -cE" reports 1 OpenOwner and 11 Opens. But on the server,
> > > "nfsdumpstate" still reports thousands. Will those go away
> > > eventually?
> > If the opens are gone then, yes, they will go away. They are retained for
> > a little while so that another Open against the openowner does not need
> > to recreate the openowner (which also implied an extra RPC to confirm
> > the openowner in NFSv4.0).
> >
> > I think they go away after a few minutes, if I recall correctly.
> > If the server thinks there are still Opens, then they will not go away.
>
> Uh, they aren't going away. It's been a few hours now, and the NFS
> server still reports the same number of opens and openowners.
Yes, the openowners won't go away until the opens go away, and the
opens don't go away until the client closes them. (Once the opens are
closed, the openowners go away after something like 5 minutes.)

For NFSv4.0, the unmount does a SetClientID/SetClientIDConfirm, which
gets rid of all opens at the server. However, NFSv4.1/4.2 does not have
this. It has a DestroyClient, but it is required to return NFSERR_CLIENTBUSY
if there are outstanding opens. (Servers are not supposed to "forget" opens,
except when they crash. Even then, if they have something like non-volatile
RAM, they can remember opens through a reboot.)
(FreeBSD does forget them
upon reboot.)
Maybe for 4.1/4.2 the client should try and close any outstanding opens.
(Normally, they should all be closed once all files are POSIX closed. I
suspect that it didn't happen because the "nfscl" thread was killed off
during unmount before it got around to doing all of them.)
I'll look at this.

How to get rid of them now...
- I think a nfsrevoke(8) on the clientid will do so. However, if the same
  clientid is in use for your current mount, you'll need to unmount before
  doing so.

Otherwise, I think they'll be there until a server reboot (or kldunload/kldload
of the nfsd, if it is not built into the kernel). Even a restart of the nfsd
daemon does not get rid of them, since the "server should never forget opens"
rule is applied.

rick

>
> rick
>
> >
> > Thanks for reporting this, rick
> > ps: And, yes, large numbers of openowners will slow things down,
> > since the code ends up doing linear scans of them all in a linked
> > list in various places.
> >
> > -Alan
> >
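[For reference, the nfsrevoke(8) route described in the message would look roughly like this on the server; the clientid shown is a placeholder, and the real value comes from the nfsdumpstate listing.]

```shell
# On the NFS server, as root.  Unmount on the client first if the same
# clientid is still in use there.
nfsdumpstate                     # note the ClientId for the stale client
nfsrevoke 0x1234567800000000     # placeholder -- substitute the real clientid
```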