Date: Tue, 3 Apr 2001 19:40:04 -0400
From: Bosko Milekic <bmilekic@technokratis.com>
To: Matt Dillon <dillon@earth.backplane.com>
Cc: Garrett Rooney <rooneg@electricjellyfish.net>, Alfred Perlstein <alfred@FreeBSD.org>, cvs-committers@FreeBSD.org, cvs-all@FreeBSD.org
Subject: Re: cvs commit: src/sys/sys mbuf.h src/sys/kern uipc_mbuf.c
Message-ID: <20010403194004.A15434@technokratis.com>
In-Reply-To: <200104031813.f33ID4b58965@earth.backplane.com>; from dillon@earth.backplane.com on Tue, Apr 03, 2001 at 11:13:04AM -0700
References: <200104030315.f333FCX69312@freefall.freebsd.org> <20010403140457.B2952@electricjellyfish.net> <200104031813.f33ID4b58965@earth.backplane.com>
On Tue, Apr 03, 2001 at 11:13:04AM -0700, Matt Dillon wrote:
> 
> :On Mon, Apr 02, 2001 at 08:15:12PM -0700, Alfred Perlstein wrote:
> :> alfred      2001/04/02 20:15:12 PDT
> :> 
> :>   Modified files:
> :>     sys/sys              mbuf.h
> :>     sys/kern             uipc_mbuf.c
> :>   Log:
> :>   Use only one mutex for the entire mbuf subsystem.
> :
> :I can see how this makes some things cheaper by allowing you to only
> :lock a single mutex instead of several, but doesn't it also limit you
> :to only a single thread using the mbuf subsystem at a time?  Since
> :mbufs are used in a fairly large number of places throughout the
> :system, wouldn't that be bad?
> :
> :I'm sure this has been thought through, I'm just trying to understand
> :why this will be better in the long run.  Isn't the goal to have
> :fine-grained locking, rather than single locks limiting access to
> :subsystems?
> :
> :-- 
> :garrett rooney                    Unix was not designed to stop you from
> 
>     What about using the BSDI hash-table-of-mutexes idea, where mutex
>     functionality is overloaded to some degree for any given subsystem?
>     This gives us sufficient parallelism without polluting system
>     structures with their own mutexes.
> 
> 						-Matt

	The reason for the removal isn't pollution of system structures
per se (i.e. bloat); there were really only three locks, one for each
free list.  As I see it, the removal is at this moment a slight
pessimization in some cases, because it increases contention again.  It
was done nonetheless because it is a step toward per-CPU (PCPU) locks in
the mbuf system.

	Prior to the removal, in the easy uncontended allocation case, a
lock's cache line was likely to `ping-pong' from one CPU's data cache to
another's up to three times per allocation, once for each lock taken.
In that same easy case with one lock, the cache invalidation for the
mutex happens at most once.

	In the contended case, however, where at least two threads on
two CPUs are each allocating mbufs, clusters, and reference counters,
collapsing three locks into one somewhat pessimizes the situation.  The
ping-ponging can still happen up to three times, because the lock is
dropped and re-acquired between the individual allocations, and on top
of that, contention is increased: where there were three locks, there is
now one.

	The pessimization of the latter case is acceptable given that
we're moving toward per-CPU lists, at which point this will no longer be
a tradeoff: contention decreases overall, and going from three locks to
one becomes really worthwhile even if it were for mere simplicity's
sake.  With PCPU lists, the only time we won't be able to allocate is
when we preempt a thread on our own CPU that holds the mbuf lock, and it
doesn't make much sense to have that thread continue for one little bit
only to bounce from thread to thread uselessly.

	In other words, the decision to do this was planned and
discussed. :-)  (Sketches of the single-lock allocation path, the
hash-table-of-mutexes idea, and the per-CPU lists follow at the end of
this message.)

Regards,
-- 
Bosko Milekic
bmilekic@technokratis.com
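To make the single-lock tradeoff discussed above concrete, here is a
minimal sketch of an allocation path that touches all three resources
under one subsystem-wide mutex.  The lock name mbuf_mtx and the
mb_freelist_get()/cl_freelist_get()/mb_freelist_put() helpers are
assumptions for illustration only; this is not the committed
uipc_mbuf.c code.  Note that the lock is dropped and re-acquired
between the individual allocations, which is why the cache line can
still move up to three times in the contended case.

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/mbuf.h>

static struct mtx mbuf_mtx;		/* the single mbuf subsystem lock */

static struct mbuf *mb_freelist_get(void);	/* hypothetical helper */
static caddr_t cl_freelist_get(void);		/* hypothetical helper */
static void mb_freelist_put(struct mbuf *);	/* hypothetical helper */

struct mbuf *
mb_alloc_packet_sketch(void)
{
	struct mbuf *m;
	caddr_t cl;

	/* First trip through the lock: the mbuf itself. */
	mtx_lock(&mbuf_mtx);
	m = mb_freelist_get();
	mtx_unlock(&mbuf_mtx);
	if (m == NULL)
		return (NULL);

	/* Second trip: the cluster.  Same lock, re-acquired. */
	mtx_lock(&mbuf_mtx);
	cl = cl_freelist_get();
	mtx_unlock(&mbuf_mtx);
	if (cl == NULL) {
		/* Third trip: give the mbuf back on failure. */
		mtx_lock(&mbuf_mtx);
		mb_freelist_put(m);
		mtx_unlock(&mbuf_mtx);
		return (NULL);
	}
	m->m_ext.ext_buf = cl;	/* reference-counter trip omitted */
	return (m);
}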
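Matt's hash-table-of-mutexes suggestion can be sketched as follows.
All names here (MTX_POOL_SIZE, mtx_pool_ar, the *_sketch functions)
are assumptions, not an actual BSD/OS or FreeBSD interface, and
mtx_init() is shown with its later four-argument form; FreeBSD
eventually grew a real mtx_pool(9) facility along these lines that
differs in detail.  The idea is that instead of embedding a mutex in
every structure, a structure's address is hashed into a fixed pool of
mutexes, so unrelated objects usually map to different locks.

#include <sys/param.h>
#include <sys/types.h>
#include <sys/lock.h>
#include <sys/mutex.h>

#define MTX_POOL_SIZE	32		/* must be a power of two */

static struct mtx mtx_pool_ar[MTX_POOL_SIZE];

/* Hash an object's address into the pool of mutexes. */
static __inline struct mtx *
mtx_pool_find_sketch(void *obj)
{
	uintptr_t a = (uintptr_t)obj;

	/* Shift out the low bits that alignment makes identical. */
	return (&mtx_pool_ar[(a >> 8) & (MTX_POOL_SIZE - 1)]);
}

static void
mtx_pool_init_sketch(void)
{
	int i;

	for (i = 0; i < MTX_POOL_SIZE; i++)
		mtx_init(&mtx_pool_ar[i], "sketch pool", NULL, MTX_DEF);
}

A subsystem would then bracket access to an object with
mtx_lock(mtx_pool_find_sketch(obj)) and the matching unlock, trading a
little false contention between objects that hash together for system
structures free of embedded mutexes.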
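Finally, the per-CPU direction described above might look roughly like
this.  The names (struct mb_pcpu_list, mb_lists, mb_pcpu_alloc_sketch)
are assumed, and the allocator eventually committed differed in
detail.  The point is that the common allocation takes only the local
CPU's lock, so that lock's cache line stays in the local cache and two
CPUs allocating at once do not contend with each other.

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/mbuf.h>
#include <sys/pcpu.h>

struct mb_pcpu_list {
	struct mtx	ml_mtx;		/* protects this CPU's list only */
	struct mbuf	*ml_head;
};

static struct mb_pcpu_list mb_lists[MAXCPU];

static struct mbuf *
mb_pcpu_alloc_sketch(void)
{
	struct mb_pcpu_list *ml;
	struct mbuf *m;

	/* Common case: only the local CPU's lock is touched. */
	ml = &mb_lists[PCPU_GET(cpuid)];
	mtx_lock(&ml->ml_mtx);
	m = ml->ml_head;
	if (m != NULL)
		ml->ml_head = m->m_next;
	mtx_unlock(&ml->ml_mtx);

	/*
	 * A NULL return would fall back to a global list (not shown).
	 * The remaining bad case is the one described above: this
	 * thread preempted the thread, on this same CPU, that holds
	 * ml_mtx.
	 */
	return (m);
}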