From: Rick Macklem <rmacklem@uoguelph.ca>
To: Alan Somers
Cc: FreeBSD Stable ML
Subject: Re: nfs client's OpenOwner count increases without bounds
Date: Thu, 5 May 2022 22:22:18 +0000
Alan Somers wrote:
> On Thu, May 5, 2022 at 8:49 AM Rick Macklem wrote:
> >
> > Alan Somers wrote:
> > > On Wed, May 4, 2022 at 6:56 PM Rick Macklem wrote:
> > > >
> > > > Alan Somers wrote:
> > > > > On Wed, May 4, 2022 at 5:23 PM Rick Macklem wrote:
> > > > > >
> > > > > > Alan Somers wrote:
> > > > > > > I have a FreeBSD 13 (tested on both 13.0-RELEASE and 13.1-RC5) desktop
> > > > > > > mounting /usr/home over NFS 4.2 from a 13.0-RELEASE server.  It
> > > > > > > worked fine until a few weeks ago.  Now, the desktop's performance
> > > > > > > slowly degrades.  It becomes less and less responsive until I restart
> > > > > > > X after 2-3 days.  /var/log/Xorg.0.log shows plenty of entries like
> > > > > > > "AT keyboard: client bug: event processing lagging behind by 112ms,
> > > > > > > your system is too slow".  "top -S" shows that the busiest process is
> > > > > > > nfscl.  A dtrace profile shows that nfscl is spending most of its time
> > > > > > > in nfscl_cleanup_common, in the loop over all nfsclowner objects.
> > > > > > > Running "nfsdumpstate" on the server shows thousands of OpenOwners for
> > > > > > > that client, and < 10 for any other NFS client.  The OpenOwner count
> > > > > > > increases by about 3000 per day.  And yet, "fstat" shows only a couple
> > > > > > > hundred open files on the NFS file system.  Why are OpenOwners so
> > > > > > > high?  Killing most of my desktop processes doesn't seem to make a
> > > > > > > difference.  Restarting X does improve the perceived responsiveness,
> > > > > > > though it does not change the number of OpenOwners.
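(An aside on reproducing that kind of profile: a sketch of one way to do
it with dtrace(1). The probe rate and the predicate on the kernel process
name are assumptions, not something taken from this thread:

    dtrace -n 'profile-997 /execname == "nfscl"/ { @[stack()] = count(); }'

Let it run for a few seconds; on ^C it prints the most common kernel
stacks sampled while the nfscl thread was on-CPU.)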
> > > > > > >
> > > > > > > How can I figure out which process(es) are responsible for the
> > > > > > > excessive OpenOwners?
> > > > > > An OpenOwner represents a process on the client. The OpenOwner
> > > > > > name is an encoding of pid + process startup time.
> > > > > > However, I can't think of an easy way to get at the OpenOwner name.
> > > > > >
> > > > > > Now, why aren't they going away, hmm..
> > > > > >
> > > > > > I'm assuming the # of Opens is not large?
> > > > > > (Openowners cannot go away until all associated opens
> > > > > > are closed.)
> > > > >
> > > > > Oh, I didn't mention that yes the number of Opens is large.  Right
> > > > > now, for example, I have 7950 OpenOwner and 8277 Open.
> > > > Well, the openowners cannot go away until the opens go away,
> > > > so the problem is that the opens are not getting closed.
> > > >
> > > > Close happens when the v_usecount on the vnode goes to zero.
> > > > Something is retaining the v_usecount. One possibility is that most
> > > > of the opens are for the same file, but with different openowners.
> > > > If that is the case, the "oneopenown" mount option will deal with it.
> > > >
> > > > Another possibility is that something is retaining a v_usecount
> > > > reference on a lot of the vnodes. (This used to happen when a nullfs
> > > > mount with caching enabled was on top of the nfs mount.)
> > > > I don't know what other things might do that?
> > >
> > > Yeah, I remember the nullfs problem.  But I'm not using nullfs on this
> > > computer anymore.  Is there any debugging facility that can list
> > > vnodes?  All I know of is "fstat", and that doesn't show anywhere near
> > > the number of NFS Opens.
> > Don't ask me. My debugging technology consists of printf()s.
> >
> > An NFSv4 Open is for a <openowner (a process on the client), file>
> > pair. It is probably opening the same file by many different
> > processes. The "oneopenown" option makes the client use the same
> > openowner for all opens, so that there is one open per file.
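For reference, a mount using the option would look something like this
(the server name and paths are made up; "oneopenown" needs NFSv4.1 or
4.2, selected here with the minorversion option):

    # NFSv4.2 mount with a single openowner for all opens
    mount -t nfs -o nfsv4,minorversion=2,oneopenown nfs-server:/usr/home /usr/home

or, as an /etc/fstab line:

    nfs-server:/usr/home /usr/home nfs rw,nfsv4,minorversion=2,oneopenown 0 0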
> >
> > > >
> > > > > >
> > > > > > Commit 1cedb4ea1a79 in main changed the semantics of this
> > > > > > a little, to avoid a use-after-free bug. However, it is dated
> > > > > > Feb. 25, 2022 and is not in 13.0, so I don't think it could
> > > > > > be the culprit.
> > > > > >
> > > > > > Essentially, the function called nfscl_cleanupkext() should call
> > > > > > nfscl_procdoesntexist(), which returns true after the process has
> > > > > > exited and, when that is the case, calls nfscl_cleanup_common().
> > > > > > --> nfscl_cleanup_common() will either get rid of the openowner or,
> > > > > >     if there are still children with open file descriptors, mark it
> > > > > >     "defunct" so it can be free'd once the children close the file.
> > > > > >
> > > > > > It could be that X is now somehow creating a long chain of processes
> > > > > > where the children inherit a file descriptor and that delays the cleanup
> > > > > > indefinitely?
> > > > > > Even then, everything should get cleaned up once you kill off X?
> > > > > > (It might take a couple of seconds after killing all the processes off.)
> > > > > >
> > > > > > Another possibility is that the "nfscl" thread is wedged somehow.
> > > > > > It is the one that will call nfscl_cleanupkext() once/sec. If it never
> > > > > > gets called, the openowners will never go away.
> > > > > >
> > > > > > Being old fashioned, I'd probably try to figure this out by adding
> > > > > > some printf()s to nfscl_cleanupkext() and nfscl_cleanup_common().
> > > > >
> > > > > dtrace shows that nfscl_cleanupkext() is getting called at about 0.6 Hz.
> > > > That sounds ok. Since there are a lot of opens/openowners, it probably
> > > > is getting behind.
> > > >
> > > > > >
> > > > > > To avoid the problem, you can probably just use the "oneopenown"
> > > > > > mount option. With that option, only one openowner is used for
> > > > > > all opens. (Having separate openowners for each process was needed
> > > > > > for NFSv4.0, but not NFSv4.1/4.2.)
> > > > > >
> > > > > > > Or is it just a red herring and I shouldn't
> > > > > > > worry?
> > > > > > Well, you can probably avoid the problem by using the "oneopenown"
> > > > > > mount option.
> > > > >
> > > > > Ok, I'm trying that now.  After unmounting and remounting NFS,
> > > > > "nfsstat -cE" reports 1 OpenOwner and 11 Opens.  But on the server,
> > > > > "nfsdumpstate" still reports thousands.  Will those go away
> > > > > eventually?
> > > > If the opens are gone then, yes, they will go away. They are retained for
> > > > a little while so that another Open against the openowner does not need
> > > > to recreate the openowner (which also implied an extra RPC to confirm
> > > > the openowner in NFSv4.0).
> > > >
> > > > I think they go away after a few minutes, if I recall correctly.
> > > > If the server thinks there are still Opens, then they will not go away.
> > >
> > > Uh, they aren't going away.  It's been a few hours now, and the NFS
> > > server still reports the same number of opens and openowners.
> > Yes, the openowners won't go away until the opens go away, and the
> > opens don't go away until the client closes them. (Once the opens are
> > closed, the openowners go away after something like 5 minutes.)
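While waiting, both ends are easy to watch with the commands already
used earlier in this thread:

    # on the client: NFSv4 state counts, including Opens and OpenOwners
    nfsstat -cE
    # on the server: the opens/openowners it still holds for each client
    nfsdumpstate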
> >
> > For NFSv4.0, the unmount does a SetclientID/SetclientIDconfirm, which
> > gets rid of all opens at the server. However, NFSv4.1/4.2 does not have
> > this. It has a DestroyClient, but it is required to return NFSERR_CLIENTBUSY
> > if there are outstanding opens. (Servers are not supposed to "forget" opens,
> > except when they crash. Even then, if they have something like non-volatile
> > RAM, they can remember opens through a reboot. FreeBSD does forget them
> > upon reboot.)
> > Maybe for 4.1/4.2 the client should try and close any outstanding opens.
> > (Normally, they should all be closed once all files are POSIX closed. I
> > suspect that it didn't happen because the "nfscl" thread was killed off
> > during unmount before it got around to doing all of them.)
> > I'll look at this.
> >
> > How to get rid of them now...
> > - I think an nfsrevoke(8) on the clientid will do so. However, if the same
> >   clientid is in use for your current mount, you'll need to unmount before
> >   doing so.
> >
> > Otherwise, I think they'll be there until a server reboot (or a kldunload/
> > kldload of the nfsd, if it is not built into the kernel). Even a restart of
> > the nfsd daemon does not get rid of them, since the "server should never
> > forget opens" rule is applied.
>
> As it turns out, the excessive opens disappeared from the server
> sometime overnight.  They disappeared eventually, but it took hours
> rather than minutes.
Heck, I just wrote the code. I have no idea what it really does. ;-)
(Although meant to be "tongue in cheek", it is true. Blame old age or the
simple fact that this code was written in dribs and drabs over 20+ years.)
The lease would have expired, but since the FreeBSD server is what they
call a "courtesy server", it does not throw away state until the lease has
expired and either a conflicting lock request is made (not happening for
opens from FreeBSD or Linux clients) or the server's resource limits are
exceeded.
I think the resource limit would be something like 90% of 500000, which is
a lot more opens/openowners than you reported, unless other clients pushed
the number to that level overnight?

There is something called NFSRV_MOULDYLEASE, which gets rid of the state,
but that is set to 1 week at the moment.

So, why did they go away in hours? Unless you had close to 500000 opens +
openowners, I haven't a clue. But it worked, so I guess we are happy?

> And using "oneopenown" on the client, there are now only a modest
> number of opens (133), and exactly one openowner.  So I think it will
> certainly work for my use case.
The entry for "oneopenown" in "man mount_nfs" tries to explain this.
Feel free to come up with better words. I've never been good at doc.

rick

> -Alan