Date: Mon, 25 Apr 2022 18:44:19 +0200
From: Eirik Øverby <ltning@anduin.net>
To: freebsd-current@freebsd.org
Subject: Re: nullfs and ZFS issues
Message-ID: <db9ad3bfcbc9b235f4845caba0ca6d7af0f0b091.camel@anduin.net>
In-Reply-To: <20220425152727.Horde.YqhquyTW0ZM3HAbI1kyskic@webmail.leidinger.net>
References: <Yl31Frx6HyLVl4tE@ambrisko.com>
 <20220420113944.Horde.5qBL80-ikDLIWDIFVJ4VgzX@webmail.leidinger.net>
 <YmAy0ZNZv9Cqs7X%2B@ambrisko.com>
 <20220421083310.Horde.r7YT8777_AvGU_6GO1cC90G@webmail.leidinger.net>
 <CAGudoHEyCK4kWuJybD4jzCHbGAw46CQkPx_yrPpmRJg3m10sdQ@mail.gmail.com>
 <20220421154402.Horde.I6m2Om_fxqMtDMUqpiZAxtP@webmail.leidinger.net>
 <YmGIiwQen0Fq6lRN@ambrisko.com>
 <20220422090439.Horde.TabULDW9aIeaNLxngZxdvvN@webmail.leidinger.net>
 <20220424195817.Horde.W5ApGT13KmR06W2pKA0COxB@webmail.leidinger.net>
 <20220425152727.Horde.YqhquyTW0ZM3HAbI1kyskic@webmail.leidinger.net>
On Mon, 2022-04-25 at 15:27 +0200, Alexander Leidinger wrote:
> Quoting Alexander Leidinger <Alexander@leidinger.net> (from Sun, 24
> Apr 2022 19:58:17 +0200):
>
> > Quoting Alexander Leidinger <Alexander@leidinger.net> (from Fri, 22
> > Apr 2022 09:04:39 +0200):
> >
> > > Quoting Doug Ambrisko <ambrisko@ambrisko.com> (from Thu, 21 Apr
> > > 2022 09:38:35 -0700):
> > >
> > > > I've attached mount.patch, which makes mount -v show the vnode
> > > > usage per filesystem. Note that the problem I was running into
> > > > was that after some operations, arc_prune and arc_evict would
> > > > consume 100% of 2 cores and make ZFS really slow. If you are not
> > > > running into that issue, then nocache etc. shouldn't be needed.
> > >
> > > I don't run into this issue, but I see a huge perf difference when
> > > using nocache in the nightly periodic runs: 4h instead of 12-24h
> > > (22 jails on this system).
> > >
> > > > On my laptop I set ARC to 1G since I don't use swap, and in the
> > > > past ARC would consume too much memory and things would die. When
> > > > nullfs holds a bunch of vnodes, ZFS can't release them.
> > > >
> > > > FYI, on my laptop with nocache and limited vnodes I haven't run
> > > > into this problem. I haven't tried the patch to let ZFS free
> > > > its and nullfs vnodes on my laptop. I have only tried it via
> > >
> > > I have this patch and your mount patch installed now, without
> > > nocache and with reduced arc reclaim settings (100, 1). I will
> > > check the runtime for the next 2 days.
> >
> > 9-10h runtime with the above settings (compared to 4h with nocache
> > and 12-24h without any patch and without nocache).
> > I changed the sysctls back to the defaults and will see in the next
> > run (in 7h) what the result is with just the patches.
>
> And again 9-10h runtime (I've seen a lot of the find processes in the
> periodic daily runs of those 22 jails in the state "*vnode"). Seems
> nocache gives the best perf for me in this case.
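[For context: the "nocache" being compared above is the nullfs mount
option that disables vnode caching in the nullfs layer. A minimal
sketch of the two variants under discussion; the paths are made-up
examples, not taken from the thread:]

```shell
# Default nullfs mount: nullfs caches vnodes, which keeps the
# underlying ZFS vnodes referenced so ZFS cannot release them
# under memory pressure (the arc_prune/arc_evict spin above).
mount -t nullfs /usr/local/jails/base /usr/local/jails/j1/base

# With -o nocache, nullfs does not cache vnodes, so the underlying
# ZFS vnodes can be freed again. This is the variant that reduced
# the periodic run from ~12-24h to ~4h in the thread above.
mount -t nullfs -o nocache /usr/local/jails/base /usr/local/jails/j1/base

# With Doug's mount.patch applied, per-filesystem vnode usage is
# reported by verbose mount output:
mount -v
```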
Sorry for jumping in here - I've got a couple of questions:

- Will this also apply to nullfs read-only mounts? Or are these
  problems only seen when writing "through" a nullfs mount?
- Is this also a problem in 13, or is it "new" in -CURRENT?

We're seeing weird, unexplained CPU spikes on several systems, even
after tuning geli not to use gazillions of threads. So far our
suspicion has been ZFS snapshot cleanups, but this is an interesting
contender - unless the whole "read only" part makes it moot.

/Eirik