Date:      Tue, 3 Mar 2026 14:25:11 -0800
From:      Rick Macklem <rick.macklem@gmail.com>
To:        Doug Ambrisko <ambrisko@ambrisko.com>
Cc:        Alexander Leidinger <Alexander@leidinger.net>, Peter Eriksson <pen@lysator.liu.se>,  FreeBSD CURRENT <freebsd-current@freebsd.org>, Garrett Wollman <wollman@bimajority.org>,  Alexander Motin <mav@freebsd.org>
Subject:   Re: RFC: How ZFS handles arc memory use
Message-ID:  <CAM5tNy4ji=vRhZBBo2JoargVB8vbky_TeamTTC8_i=LHR59Qkw@mail.gmail.com>
In-Reply-To: <aadFxht81oYqaz8h@ambrisko.com>
References:  <CAM5tNy5b3=04zC84Q_c60A9qssZTEY2n73okXoFPeT+YSK25JQ@mail.gmail.com> <F848B1F3-DE79-49D3-8D1C-1CB1BB2055E3@lysator.liu.se> <aQKB6P3HNKVNQGip@ambrisko.com> <22b478c6bad8212c61ca19a983a8e2e4@Leidinger.net> <aadFxht81oYqaz8h@ambrisko.com>


On Tue, Mar 3, 2026 at 12:33 PM Doug Ambrisko <ambrisko@ambrisko.com> wrote:
>
> On Sun, Nov 02, 2025 at 11:48:06AM +0100, Alexander Leidinger wrote:
> | Am 2025-10-29 22:06, schrieb Doug Ambrisko:
> | > It seems around the switch to OpenZFS I would have arc clean task
> | > running
> | > 100% on a core.  I use nullfs on my laptop to map my shared ZFS /data
> | > partition into a few vnet instances.  Overnight or so I would get into
> | > this issue.  I found that I had a bunch of vnodes being held by other
> | > layers.  My solution was to reduce kern.maxvnodes and vfs.zfs.arc.max so
> | > the ARC cache stayed reasonable without killing other applications.
> | >
> | > That is why a while back I added the vnode count to mount -v so that
> | > I could see the usage of vnodes for each mount point.  I made a script
> | > to report on things:
> |
> | Do you see this also with the nullfs mount option "nocache"?
>
> I seem to have run into this issue with nocache:
>   /data/jail/current/usr/local/etc/cups   /data/jail/current-other/usr/local/etc/cups     nullfs rw,nocache 0 0
>   /data/jail/current/usr/local/etc/sane.d /data/jail/current-other/usr/local/etc/sane.d   nullfs rw,nocache 0 0
>   /data/jail/current/usr/local/www        /data/jail/current-other/usr/local/www          nullfs rw,nocache 0 0
>   /data/jail/current/usr/local/etc/nginx  /data/jail/current-other/usr/local/etc/nginx    nullfs rw,nocache 0 0
>   /data/jail/current/tftpboot             /data/jail/current-other/tftpboot               nullfs rw,nocache 0 0
>   /data/jail/current/usr/local/lib/grub   /data/jail/current-other/usr/local/lib/grub     nullfs rw,nocache 0 0
>   /data/jail                              /data/jail/current-other/data/jail              nullfs rw,nocache 0 0
>   /data/jail                              /data/jail/current/data/jail                    nullfs rw,nocache 0 0
>
> After a while (a couple of months or more), my laptop was running slow
> with a high load.  The periodic find was running slow.  arc_prune was
> spinning.  When I reduced the number of vnodes, things got better.
> My vfs.zfs.arc_max is 1073741824 so that I have memory for other things.
>
> nocache does help; it takes longer to get into this situation.

Have any of you guys tried increasing vfs.zfs.arc.free_target?

If I understand the code correctly, when freemem < vfs.zfs.arc.free_target,
the reaper thread (the one that calls uma_zone_reclaim() to return pages
to the system from the uma keg the ARC uses) should be activated.
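For anyone who wants to experiment, a minimal sketch of what that would
look like (the tunable name is the one above; the value below is only an
illustrative placeholder -- it is in pages, so the right number depends
on your RAM and on vm.v_free_target):

```
# Check the current values first:
#   sysctl vfs.zfs.arc.free_target vm.stats.vm.v_free_count
#
# Then raise the threshold, either at runtime with sysctl(8) or
# persistently in /etc/sysctl.conf.  Example value only:
# 262144 pages = 1 GiB at 4 KiB/page.
vfs.zfs.arc.free_target=262144
```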

rick

>
> Thanks,
>
> Doug A.


