Date: Tue, 16 Oct 2001 11:00:26 -0700 (PDT)
From: Matthew Dillon <dillon@apollo.backplane.com>
To: John Baldwin <jhb@FreeBSD.org>
Cc: Dag-Erling Smorgrav <des@ofug.org>, cvs-all@FreeBSD.org,
    cvs-committers@FreeBSD.org, Bruce Evans <bde@zeta.org.au>
Subject: Re: cvs commit: src/sys/vm vnode_pager.c
Message-ID: <200110161800.f9GI0QP33797@apollo.backplane.com>
References: <XFMail.011016103432.jhb@FreeBSD.org>
:> BTW, the profiling data show that mutex debugging actually has very
:> little impact - 88% of CPU time is spent in _mtx_unlock_spin_flags(),
:> and only 0.6% in witness_unlock() (which _mtx_unlock_spin_flags()
:> calls).  Witness functions amount to about 3% all told; _mtx_assert()
:> accounts for 0.1%.  The problem isn't mutex debugging, or mutex
:> handling at all - the problem is that ffs_fsync() has an insane amount
:> of work to do, most of which is probably bogus.
:
:How about turning off INVARIANTS and WITNESS and seeing if it does better?
:Witness did become more expensive when I made it work for reader/writer
:locks on May 4, as it now has to manage a list of lock instances instead
:of embedding the list inside the lock object itself.
:
:-- 
:
:John Baldwin <jhb@FreeBSD.org>  --  http://www.FreeBSD.org/~jhb/
:(reply-to to hackers)

    Just calling a mutex function hundreds of thousands of times is
    going to be enough to cause the problem.  I see a way we can fix
    it: the mount point lock is already locking the list, so if the
    information we need to determine whether we have work to do is
    stored in the vnode itself, we can check it and loop without having
    to mess around with any more mutexes.  If the flag says that there
    may be something to do, *then* we do the hard work.

						-Matt
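A minimal sketch of the scan Matt describes, for illustration only: the
flag bit (VI_MIGHTHAVEWORK), the field names and the do_hard_fsync_work()
helper are hypothetical stand-ins, not the actual FreeBSD structures or
the code that was eventually committed.  The point is that the per-vnode
flag is read under the mount point lock the loop already holds, so a
clean vnode costs no additional mutex operations.

    /*
     * Illustrative sketch only -- not committed code.  All names
     * below (VI_MIGHTHAVEWORK, mnt_listmtx, mnt_vnodelist,
     * v_mntvnodes, do_hard_fsync_work) are hypothetical.
     */
    #include <sys/param.h>
    #include <sys/queue.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    #define VI_MIGHTHAVEWORK 0x0001   /* vnode may have dirty buffers */

    struct vnode {
            int                  v_iflag;      /* read under list lock */
            TAILQ_ENTRY(vnode)   v_mntvnodes;
    };

    struct mount {
            struct mtx           mnt_listmtx;  /* protects mnt_vnodelist */
            TAILQ_HEAD(, vnode)  mnt_vnodelist;
    };

    void do_hard_fsync_work(struct vnode *vp);  /* the expensive path */

    static void
    sync_mount_vnodes(struct mount *mp)
    {
            struct vnode *vp;

            mtx_lock(&mp->mnt_listmtx);
            TAILQ_FOREACH(vp, &mp->mnt_vnodelist, v_mntvnodes) {
                    /*
                     * Cheap test: the flag lives in the vnode and is
                     * read under the mount point lock we already hold,
                     * so skipping a clean vnode takes no extra
                     * lock/unlock pair.
                     */
                    if ((vp->v_iflag & VI_MIGHTHAVEWORK) == 0)
                            continue;
                    /*
                     * The flag says there may be something to do, so
                     * *now* drop the list lock and do the hard work.
                     * A real version must also cope with the list
                     * changing while the lock is dropped (restart the
                     * scan or use a marker vnode); omitted here.
                     */
                    mtx_unlock(&mp->mnt_listmtx);
                    do_hard_fsync_work(vp);
                    mtx_lock(&mp->mnt_listmtx);
            }
            mtx_unlock(&mp->mnt_listmtx);
    }

The design choice is simply to move the "is there anything to do?" bit
into data already covered by the lock the loop holds, so the common
no-work case never touches a per-vnode mutex at all.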