Date:        Tue, 29 May 2007 20:15:08 -0700 (PDT)
From:        Jeff Roberson <jroberson@chesapeake.net>
To:          Bruce Evans <brde@optusnet.com.au>
Cc:          freebsd-arch@freebsd.org
Subject:     Re: rusage breakdown and cpu limits.
Message-ID:  <20070529201255.X661@10.0.0.1>
In-Reply-To: <20070530125553.G12128@besplex.bde.org>
References:  <20070529105856.L661@10.0.0.1> <200705291456.38515.jhb@freebsd.org> <20070529121653.P661@10.0.0.1> <20070530065423.H93410@delplex.bde.org> <20070529141342.D661@10.0.0.1> <20070530125553.G12128@besplex.bde.org>
On Wed, 30 May 2007, Bruce Evans wrote:

> On Tue, 29 May 2007, Jeff Roberson wrote:
>
>> On Wed, 30 May 2007, Bruce Evans wrote:
>>
>>> On Tue, 29 May 2007, Jeff Roberson wrote:
>
>>>> a few cases which will be complicated, and cpulimit is one of them.
>>>
>>> No, cpulimit is simple because it can be fuzzy, unlike calcru(), which
>>> requires the rusage to be up to date.
>>
>> cpulimit is complicated because it requires aggregate statistics from all
>> threads, like rusage.  It may be queried infrequently, however.  It's just
>> one of the few cases where we actually examine the values as if we still
>> only have one thread per process.
>
> It still doesn't need very accurate statistics, unlike the others.
> However, as you point out, almost all of the other cases are already more
> aware of multiple threads and heavyweight enough to handle it (e.g.,
> calcru() already had a related accumulation loop until it was broken).
> cpulimit is complicated and/or different because it shouldn't do
> heavyweight accumulation.
>
>>> I see how rusage accumulation can help for everything _except_ the
>>> runtime and tick counts (i.e., for stuff updated by statclock()).  For
>>> the runtime and tick counts, the possible savings seem to be small and
>>> negative.  calcru() would have to run the accumulation code, and the
>>> accumulation code would have to acquire something like sched_lock to
>>> transfer the per-thread data (since the lock for updating that data
>>> is something like sched_lock).  This has the same locking overheads
>>> and larger non-locking overheads than accumulating the runtime directly
>>> into the rusage at context switch time -- calcru() needs to acquire
>>> something like sched_lock either way.
>>
>> Yes, it will make calcru() more expensive.  However, this should be
>> infrequent relative to context switches.  It's only used for calls to
>> getrusage(), fill_kinfo_proc(), and certain clock_gettime() calls.
>>
>> The thing that will protect mi_switch() is not process global.  I want to
>> keep process-global locks out of mi_switch(), or we reduce concurrency for
>> multi-threaded applications.
>
> This became clearer with patches and would have been clearer with
> (smaller) diffs in mail -- mi_switch() still needs locking, but it isn't
> sched locking.

Hopefully you see the value in my approach now?  I don't think it's turning
out so badly, except for some details which need refining.  It certainly
makes mi_switch() and statclock() cleaner.  And hopefully we can remove more
code from ast() and mi_switch() by changing the cpu limits.

Jeff

>
> Bruce
>
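[Editorial note: for illustration, here is a minimal C sketch of the split
being discussed above: the context-switch path touches only per-thread
counters, while an infrequent reader such as calcru()/getrusage() pays the
cost of walking the threads and aggregating under a process-wide stats lock.
All struct, field, and function names (thread_stats, proc_stats,
thread_note_switch(), proc_collect_runtime()) are hypothetical stand-ins,
not the actual FreeBSD kernel structures or locks.]

/*
 * Illustrative sketch only: types, names, and locking are simplified
 * assumptions, not the real kernel code.  Hot path stays per-thread;
 * aggregation cost lands on the rare readers.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <pthread.h>

struct thread_stats {
	_Atomic uint64_t td_runtime;	 /* run time accumulated at switch */
	struct thread_stats *td_next;	 /* next thread in this process */
};

struct proc_stats {
	pthread_mutex_t	ps_lock;	 /* stand-in for a per-process stats lock */
	struct thread_stats *ps_threads; /* live threads of the process */
	uint64_t ps_exited_runtime;	 /* folded in when threads exit */
};

/*
 * Hot path (mi_switch()-like): per-thread only, no process-global lock.
 */
static void
thread_note_switch(struct thread_stats *td, uint64_t delta)
{
	atomic_fetch_add_explicit(&td->td_runtime, delta,
	    memory_order_relaxed);
}

/*
 * Cold path (calcru()/getrusage()-like): aggregate across threads under
 * the process stats lock so the thread list cannot change underneath us.
 */
static uint64_t
proc_collect_runtime(struct proc_stats *p)
{
	struct thread_stats *td;
	uint64_t total;

	pthread_mutex_lock(&p->ps_lock);
	total = p->ps_exited_runtime;
	for (td = p->ps_threads; td != NULL; td = td->td_next)
		total += atomic_load_explicit(&td->td_runtime,
		    memory_order_relaxed);
	pthread_mutex_unlock(&p->ps_lock);
	return (total);
}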