Date: Mon, 16 Mar 2026 16:18:38 -0600
From: Alan Somers <asomers@freebsd.org>
To: Konstantin Belousov <kostikbel@gmail.com>
Cc: Garrett Wollman <wollman@bimajority.org>, freebsd-stable@freebsd.org
Subject: Re: ZFS deadlocks/memory accounting issues
Message-ID: <CAOtMX2gjByAqxDVX3ByOzr770eHO9hfXFHQJmyLV_Esgh6JNvw@mail.gmail.com>
In-Reply-To: <abh8p9YG1qGVmTR_@kib.kiev.ua>
References: <27064.27391.224476.910636@hergotha.csail.mit.edu> <CAOtMX2gpNQH9hpCnRP%2Bm5kBJDMf5O3MSHgzPVjBKEObyL8bjdw@mail.gmail.com> <abh8p9YG1qGVmTR_@kib.kiev.ua>
On Mon, Mar 16, 2026 at 3:57 PM Konstantin Belousov <kostikbel@gmail.com> wrote:
>
> On Mon, Mar 16, 2026 at 03:08:44PM -0600, Alan Somers wrote:
> > I once saw a similar bug.  In my case I had a process that mmap()ed
> > some very large files on fusefs, consuming lots of inactive pages.
> > And when the system comes under memory pressure, it asks ARC to evict
> > first.  So the ARC would end up shrinking down to arc_min every time.
> > In my case, the solution was to set vfs.fusefs.data_cache_mode=0 .  I
> > suspect that similar bugs could be possible with UFS or tmpfs, if they
> > have giant files that are mmap()ed.
>
> What are 'similar bugs with UFS or tmpfs'?
> Can you please be more specific, what is the erroneous behavior?

I experienced this bug in 2021, and reproduced it on both FreeBSD 12.2
and 13.0.  The setup was:

* A ZFS-root server with hundreds of GB of RAM and hundreds of TB of
  ZFS, with a complicated ZFS workload.
* A custom fusefs file system.  Each fusefs mountpoint presented a
  small number of files, some huge, and was backed by a file on ZFS
  itself.
* A ctld target for each fusefs mountpoint, backed by one file on that
  mountpoint.

"vmstat -o" showed that each of those ctld targets consumed a huge
amount of inactive memory.  Basically, ctld was mmap()ing the whole
file and never releasing any pages.  The dtrace
sdt:zfs:none:arc-needfree probe showed that the page daemon was
frequently asking ZFS to free memory from the ARC.  ZFS complied, and
the ARC size would slowly shrink down to vfs.zfs.arc_min.  In my case
there was no crash, and the OOM killer wasn't involved, but
performance suffered.  Setting vfs.fusefs.data_cache_mode=0 was a
perfect workaround for us, so I never investigated further.

When I say that I suspect similar bugs may exist with UFS or tmpfs, I
mean that if ctld exports huge files from those file systems on a
mixed UFS/ZFS system, then they might also consume huge amounts of
inactive pages.  But I've never checked.

-Alan
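[For readers wanting to check for the same pattern, a rough diagnostic
sketch based on the commands and probe named in this message; these
need root on a FreeBSD box, and the exact sysctl names vary by
release (newer OpenZFS uses vfs.zfs.arc.min rather than
vfs.zfs.arc_min):]

```shell
# Show per-VM-object memory use; pages kept by mmap()ed files
# appear in the inactive column.
vmstat -o

# Count how often the page daemon asks the ARC to free memory,
# using the SDT probe mentioned above.
dtrace -n 'sdt:zfs:none:arc-needfree { @fires = count(); }'

# Compare current ARC size against its floor to see whether the
# ARC has been squeezed down to arc_min.
sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_min

# The workaround described above: disable fusefs data caching.
sysctl vfs.fusefs.data_cache_mode=0
```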
