Date: Fri, 4 Feb 2011 13:28:22 +1100 (EST)
From: Bruce Evans <brde@optusnet.com.au>
To: mdf@FreeBSD.org
Cc: Juli Mallett <jmallett@FreeBSD.org>, svn-src-head@FreeBSD.org,
    svn-src-all@FreeBSD.org, src-committers@FreeBSD.org,
    John Baldwin <jhb@FreeBSD.org>
Subject: Re: svn commit: r218195 - in head/sys: amd64/amd64 arm/arm i386/i386 ia64/ia64 kern mips/mips powerpc/powerpc sparc64/sparc64 sun4v/sun4v sys ufs/ffs
Message-ID: <20110204125820.Q935@besplex.bde.org>
In-Reply-To: <AANLkTinhpj=V_XOp3b15Rr5J+MzOpO3=YbXLkmoSF1gM@mail.gmail.com>
References: <201102021635.p12GZA94015170@svn.freebsd.org> <AANLkTi=5jDcYAfuoWtgDTUk__JJK222efBd9YgPq6hsf@mail.gmail.com> <201102030750.07076.jhb@freebsd.org> <AANLkTinhpj=V_XOp3b15Rr5J+MzOpO3=YbXLkmoSF1gM@mail.gmail.com>
On Thu, 3 Feb 2011 mdf@FreeBSD.org wrote:

> Bruce correctly points out that the code doesn't work like I expect
> with PREEMPTION, which most people will be running.

Not just PREEMPTION, but with almost any non-fast^Wfiltered interrupt
activity.

> I'm thinking of adding a new per-thread field to record the last ticks
> value that a voluntary mi_switch() was done, so that there's a
> standard way of checking if a thread is being a hog; this will work
> for both PREEMPTION and !PREEMPTION, and would be appropriate for the
> places that previously used a counter.  (This would require
> uio_yield() to be SW_VOL, but I can't see why it's not a voluntary
> context switch anyways.)

I don't like using a ticks value for this at all.  It adds complexity
and does the scheduler's work for it.  If you don't count involuntary
context switches, then the ticks spent by involuntarily-switched-to
threads will be counted against the hog thread.  And switches back from
these threads are probably voluntary (this is the case for ithreads), so
you would need complexities to avoid resetting the last-ticks value for
some voluntary context switches too.

A perfectly fair way to keep track of hoggishness might be to monitor
the thread's runtime and yield if this is too large a percentage of the
real time, but this might be complex and is doing the scheduler's work
for it (better than the scheduler does -- schedulers still use ticks,
but the runtime is much more accurate).

OTOH, yielding on every tick might work well.  This is equivalent to
reducing hogticks to 1 and doesn't need an externally maintained last-
tick value.  Just do an atomic cmpset of `ticks' with a previous value
and yield if it changed.  This could probably be used for increments of
larger than 1 too.  But I now remember that the hogticks checks are
intentionally not done like this, so that they can be as small and
efficient as possible and not need local state or a function call.  I
must have expected them to be used more.
The reason to consider yielding on every tick is that 2 quanta (200 ms)
isn't as long as it was when it was first used for hogticks.  Back
then, memory speeds were maybe 50 MB/s at best, and you could reach the
hogticks limit simply by reading a few MB from /dev/zero.

> I'm happy to rename the functions (perhaps just yield_foo() rather
> than foo_yield()?) and stop using uio_yield as the base name since
> it's not a uio function.  I wanted to keep the uio_yield symbol to
> preserve the KBI/KPI.

Errors should not be preserved.

Bruce