Date: Mon, 11 Sep 2023 00:26:15 -0400
From: Garrett Wollman <wollman@bimajority.org>
To: Mateusz Guzik <mjguzik@gmail.com>
Cc: freebsd-stable@freebsd.org
Subject: Re: Did something change with ZFS and vnode caching?
Message-ID: <25854.38631.998872.484927@hergotha.csail.mit.edu>
In-Reply-To: <CAGudoHGe-kfBs3COOt0kEYMLy+wX0OJWy52ery=BJVKbHW4N1g@mail.gmail.com>
References: <25827.33600.611577.665054@hergotha.csail.mit.edu>
 <25831.30103.446606.733311@hergotha.csail.mit.edu>
 <25840.58487.468791.344785@hergotha.csail.mit.edu>
 <CAGudoHGX5yShLqkOby7_X+=aeA_evqvLU-u1d6OiSMuX4jAhyg@mail.gmail.com>
 <25853.10676.45028.623279@hergotha.csail.mit.edu>
 <CAGudoHGe-kfBs3COOt0kEYMLy+wX0OJWy52ery=BJVKbHW4N1g@mail.gmail.com>
<<On Sun, 10 Sep 2023 12:13:09 +0200, Mateusz Guzik <mjguzik@gmail.com> said:

> Not perfect but you can probably narrow it down with dtrace as is:
> dtrace -n 'lockstat:::adaptive-spin,lockstat:::rw-spin,lockstat:::sx-spin
> { @[stack(), stringof(args[0]->lock_object.lo_name)] = count(); }'

That was ... interesting.  It took a bit of postprocessing, but I was
able to make a flame chart from that:

	<https://people.csail.mit.edu/wollman/contention.svg>

Unsurprisingly, the heaviest hitter is the vnode_list mutex, although
it's only about 35% of contention events.  After that it seems to be
UMA locks in the ZFS I/O path.  You can barely see vnlru in here, and
most of the contention events are in UMA or the VM system, not the
vnode_list.

-GAWollman
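[For anyone trying to reproduce the flame chart: below is a minimal
sketch of the kind of postprocessing step involved, not the script that
was actually used.  The script name (fold-contention.py) is made up, the
parsing assumes the dtrace aggregation prints each record as a block of
indented module`function+offset frames followed by a line carrying the
lock name and count, and the folded output is assumed to be fed to
Brendan Gregg's flamegraph.pl.]

    #!/usr/bin/env python3
    # fold-contention.py (hypothetical name): fold the output of the
    # dtrace aggregation quoted above into the "folded stacks" format
    # that flamegraph.pl consumes.  The parsing below is an assumption
    # about how the aggregation prints each record (indented
    # module`function+offset frames, then a lock-name/count line);
    # adjust it to whatever your dtrace actually emits.
    import re
    import sys

    frames = []
    for line in sys.stdin:
        line = line.rstrip()
        if not line.strip():
            # Blank line: start of a new aggregation record.
            frames = []
            continue
        # Stack frames look like "    kernel`__mtx_lock_sleep+0x12".
        m = re.match(r"\s+(\S+`\S+)", line)
        if m:
            # Drop the +0xNN offset so identical call sites fold together.
            frames.append(m.group(1).split("+")[0])
            continue
        # Otherwise assume this is the trailing lock-name/count line,
        # e.g. "  vnode_list                                   117651".
        # Unrecognized lines are simply ignored.
        m = re.match(r"\s*(.+?)\s+(\d+)$", line)
        if m and frames:
            lockname = m.group(1).replace(" ", "_")
            count = m.group(2)
            # flamegraph.pl wants root-first, semicolon-separated frames;
            # dtrace prints leaf-first, so reverse the stack and tack the
            # lock name on as the leaf so each flame is labelled.
            print(";".join(reversed(frames)) + ";" + lockname, count)
            frames = []

A possible invocation, again only illustrative:

    dtrace -n 'lockstat:::adaptive-spin,lockstat:::rw-spin,lockstat:::sx-spin
        { @[stack(), stringof(args[0]->lock_object.lo_name)] = count(); }' \
        -o contention.out
    python3 fold-contention.py < contention.out > folded.txt
    flamegraph.pl folded.txt > contention.svg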