Date: Tue, 3 Mar 2026 12:34:14 -0800
From: Doug Ambrisko
To: Alexander Leidinger
Cc: Peter Eriksson, Rick Macklem, FreeBSD CURRENT, Garrett Wollman, Alexander Motin
Subject: Re: RFC: How ZFS handles arc memory use

On Sun, Nov 02, 2025 at 11:48:06AM +0100, Alexander Leidinger wrote:
| On 2025-10-29 22:06, Doug Ambrisko wrote:
| > It seems that around the switch to OpenZFS I would have the arc clean
| > task running at 100% on a core. I use nullfs on my laptop to map my
| > shared ZFS /data partition into a few vnet instances. Overnight or so
| > I would get into this issue. I found that I had a bunch of vnodes
| > being held by other layers.
| > My solution was to reduce kern.maxvnodes and vfs.zfs.arc.max so
| > that the ARC cache stayed reasonable without killing other
| > applications.
| >
| > That is why a while back I added the vnode count to mount -v, so
| > that I could see the usage of vnodes for each mount point. I made
| > a script to report on things:
|
| Do you see this also with the nullfs mount option "nocache"?

I seem to have run into this issue with nocache:

/data/jail/current/usr/local/etc/cups    /data/jail/current-other/usr/local/etc/cups    nullfs  rw,nocache  0  0
/data/jail/current/usr/local/etc/sane.d  /data/jail/current-other/usr/local/etc/sane.d  nullfs  rw,nocache  0  0
/data/jail/current/usr/local/www         /data/jail/current-other/usr/local/www         nullfs  rw,nocache  0  0
/data/jail/current/usr/local/etc/nginx   /data/jail/current-other/usr/local/etc/nginx   nullfs  rw,nocache  0  0
/data/jail/current/tftpboot              /data/jail/current-other/tftpboot              nullfs  rw,nocache  0  0
/data/jail/current/usr/local/lib/grub    /data/jail/current-other/usr/local/lib/grub    nullfs  rw,nocache  0  0
/data/jail                               /data/jail/current-other/data/jail             nullfs  rw,nocache  0  0
/data/jail                               /data/jail/current/data/jail                   nullfs  rw,nocache  0  0

After a while (a couple of months or more) my laptop was running slow
with a high load. The periodic find was running slow and arc_prune was
spinning. When I reduced the number of vnodes, things got better. My
vfs.zfs.arc_max is 1073741824 so that I have memory for other things.
nocache does help in that it takes longer to get into this situation.

Thanks,

Doug A.
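For readers following along, the two knobs discussed above can be set
at runtime with sysctl(8) or made persistent in /etc/sysctl.conf. A
minimal sketch; the 1073741824 (1 GiB) ARC cap is the value quoted in
this thread, while the kern.maxvnodes value is a hypothetical placeholder
to tune for your own machine:

```
# Runtime (one-off, lost on reboot):
#   sysctl vfs.zfs.arc.max=1073741824
#   sysctl kern.maxvnodes=200000      # hypothetical value, size to taste

# Persistent, in /etc/sysctl.conf:
vfs.zfs.arc.max=1073741824
kern.maxvnodes=200000
```

Note that vfs.zfs.arc.max is the OpenZFS spelling; the legacy
vfs.zfs.arc_max name mentioned in the message maps to the same limit.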
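The per-mount-point reporting Doug mentions could be sketched as a small
shell filter. This is a hypothetical reconstruction, not Doug's actual
script, and it assumes a mount -v output format in which each line
carries a "vnodes: count N" field:

```shell
# Hypothetical per-mount vnode report: extract the "vnodes: count N"
# field from each mount -v line and sort mounts by vnode usage,
# heaviest first. Assumes the field format shown above.
vnode_report() {
  awk 'match($0, /vnodes: count [0-9]+/) {
         # "vnodes: count " is 14 characters; the rest of the match
         # is the decimal vnode count for this mount point.
         n = substr($0, RSTART + 14, RLENGTH - 14)
         print n, $1
       }' | sort -rn
}

# Typical use on a FreeBSD box with the vnode count in mount -v:
#   mount -v | vnode_report
```

Sorting the output makes it easy to spot a nullfs or other layer that is
pinning a disproportionate share of kern.maxvnodes.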