Date:      Mon, 28 Oct 2002 14:02:23 +0000
From:      Ian Dowse <iedowse@maths.tcd.ie>
To:        Kirk McKusick <mckusick@beastie.mckusick.com>
Cc:        Poul-Henning Kamp <phk@critter.freebsd.dk>, Dan Nelson <dnelson@allantgroup.com>, cvs-committers@FreeBSD.org, cvs-all@FreeBSD.org
Subject:   Re: cvs commit: src/sys/fs/specfs spec_vnops.c src/sys/kern subr_disk.c src/sys/ufs/ffs ffs_snapshot.c 
Message-ID:   <200210281402.aa88468@salmon.maths.tcd.ie>
In-Reply-To: Your message of "Sun, 27 Oct 2002 22:36:55 PST." <200210280636.g9S6at59021626@beastie.mckusick.com> 

In message <200210280636.g9S6at59021626@beastie.mckusick.com>, Kirk McKusick writes:
>The point of running a process niced is so that it will not hog
>resources on your system. before this change, running find niced
>had no effect because it was totally I/O bound. With this change
>a niced find will run slower. If you want it to run at full speed,
>why are you nice'ing it? And, yes, nice'ing the background fsck
>by plus 4 will slow it down by a factor of about 6. Again,
>that is the point. Reduce its demands on the system. There is no
>particular rush in having it finish, and if you run it full bore,
>other activity on the system will be severely impacted. Again, if
>getting background fsck done as quickly as possible is the desire,
>then don't run it nice'd. The whole point of this change is to
>make nice have an effect on positively niced I/O bound jobs. It
>is doing exactly what it is supposed to do. What is all the
>uproar about?

The current problem is that an interaction between this mechanism
and snapshots is causing all I/O on the same filesystem as a
background fsck to grind to a halt too, even if it is not niced.
Sorry for not providing more details - I think the background fsck
I/O must be sleeping while holding the filesystem suspended or
something.

As for having the nice level affect I/O in general, I think this
is a great idea, but even without the snapshot problem mentioned
above, the current implementation slows things down too much in
many cases and its effect is currently too dependent on the value
of `hz'. A single process running with a nice level of +20 on an
otherwise idle system gets 100% of the available CPU. I did a quick
test here, and with hz=100, an I/O-bound process with a nice level
of +20 on an otherwise idle system got 1.5% of the available I/O
bandwidth.

Obviously we cannot hope to achieve anything like the same level
of control over I/O as we can for process scheduling, so some
performance reduction of niced processes on an idle system will be
inevitable. I had some success in the userland version by
measuring the average I/O delay (T) of recent requests, and then
delaying 1 out of N I/O operations by (K-1)*N*T, where K is a
slowdown factor, and N is chosen to be large enough for the delay
to be at least a few `hz' periods. Taking into account the I/O load
from other nice levels would probably help a lot too. I haven't
looked to see how hard any of this would be to implement in the
kernel though...

Ian

