Date: Mon, 13 Nov 1995 23:38:10 -0800
From: David Greenman <davidg@Root.COM>
To: Julian Elischer <julian@ref.tfs.com>
Cc: uhclem%nemesis@fw.ast.com, simonm@dcs.gla.ac.uk, current@freebsd.org
Subject: Re: Disk I/O that binds
Message-ID: <199511140738.XAA00133@corbin.Root.COM>
In-Reply-To: Your message of "Mon, 13 Nov 95 23:25:12 PST." <199511140725.XAA26807@ref.tfs.com>
>Actually here's an answer that might tackle it from another point..
>if the raised priority that a process gets after getting a block
>is lower than the raised priority that a process gets after being suspended
>for a second or two, then
>in a busy system, whenever processes start getting held up the hog process
>will start to lose out a little more.. and maybe if it gets its read of the
>read-ahead buffer in just a little later, the head might have got a chance to
>get past it, in which case it will have to go all the way around again..
>at least with this method, an un-busy system has less degradation..
>
>
>processes that read a lot are constantly cycling
>into high priority after every read, but the sleeps are so short that
>these processes are basically permanently at raised priority..
>especially with read-ahead and track caches making the disks so damned fast

Yes, this was the theory behind Matt Dillon's patches, if my memory is
correct (it might very well not be :-)). I think the problem is a composite
of a variety of things.

-DG
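
To make the quoted idea concrete, here is a minimal sketch of a
sleep-time-dependent wakeup boost. All names and constants (HZ, BOOST_MAX,
the priority range, wakeup_boost()) are assumptions for illustration only;
this is not the actual 4.4BSD/FreeBSD scheduler code, nor Matt Dillon's
patches.

/*
 * Sketch (hypothetical names): give a woken process a priority boost
 * that grows with how long it slept, so a process that only blocks for
 * a few ticks on a read-ahead hit earns less of a boost than one that
 * has been suspended for a second or two.  Lower numbers are better
 * priorities, as in the BSD scheduler.
 */
#include <stdio.h>

#define HZ            100   /* clock ticks per second (assumed) */
#define PRIO_MIN_USER  50   /* best (numerically lowest) user priority */
#define BOOST_MAX      20   /* largest boost a long sleep can earn */

/* Boost grows roughly linearly with sleep time, saturating after ~2s. */
static int
wakeup_boost(int slept_ticks)
{
        int boost = (slept_ticks * BOOST_MAX) / (2 * HZ);

        return (boost > BOOST_MAX ? BOOST_MAX : boost);
}

/* Apply the boost on wakeup, clamped to the best user priority. */
static int
wakeup_priority(int cur_prio, int slept_ticks)
{
        int p = cur_prio - wakeup_boost(slept_ticks);

        return (p < PRIO_MIN_USER ? PRIO_MIN_USER : p);
}

int
main(void)
{
        /* Disk hog: sleeps ~2 ticks per read because read-ahead keeps up. */
        printf("hog:   %d\n", wakeup_priority(100, 2));
        /* Process suspended for two seconds behind the hog. */
        printf("other: %d\n", wakeup_priority(100, 2 * HZ));
        return (0);
}

The point of the design is that the boost saturates only after a sleep of a
second or two, so a hog that blocks for a couple of ticks per read never
accumulates the same advantage as the processes stuck waiting behind it.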