Date: Sat, 22 Dec 2001 03:13:49 -0500
From: Jake Burkholder <jake@locore.ca>
To: Bruce Evans <bde@zeta.org.au>
Cc: Luigi Rizzo <rizzo@aciri.org>, John Baldwin <jhb@FreeBSD.ORG>, current@FreeBSD.ORG, Peter Wemm <peter@wemm.org>
Subject: Re: vm_zeropage priority problems.
Message-ID: <20011222031349.B62219@locore.ca>
In-Reply-To: <20011222183040.E7393-100000@gamplex.bde.org>; from bde@zeta.org.au on Sat, Dec 22, 2001 at 06:48:26PM +1100
References: <20011221095058.A17968@iguana.aciri.org> <20011222183040.E7393-100000@gamplex.bde.org>
Apparently, On Sat, Dec 22, 2001 at 06:48:26PM +1100,
Bruce Evans said words to the effect of;
> On Fri, 21 Dec 2001, Luigi Rizzo wrote:
>
> > Don't know how interesting this can be, but i am writing
> > (no plans to commit it, unless people find it interesting)
> > some code to implement a weight-based instead of priority-based
> > scheduler. The code is basically the WF2Q+ scheme which is
> > already part of dummynet, adapted to processes.
> > It is quite compact, and i think i can make it reasonably
> > compatible with the old scheme, i.e. a sysctl var can be
> > used to switch between one and the other with reasonably
> > little overhead.
> >
> > This would help removing the ugly property that priority-based
> > have, which is that one process can starve the rest of the system.
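[For readers who haven't seen the dummynet code, the weight-based pick Luigi describes can be sketched roughly as follows. This is an illustrative toy, not the WF2Q+ implementation: each process carries a weight and a virtual finish time, the scheduler always runs the process with the smallest finish time, and running charges virtual time inversely proportional to weight, so a CPU hog cannot starve anyone.]

```c
#include <assert.h>

#define NPROC	3
#define QUANTUM	100		/* arbitrary virtual-time cost of one run */

struct wproc {
	int	weight;		/* larger weight => larger CPU share */
	long	vfinish;	/* virtual finish time */
};

/* Pick the runnable process with the smallest virtual finish time. */
static int
wfq_pick(struct wproc p[], int n)
{
	int i, best = 0;

	for (i = 1; i < n; i++)
		if (p[i].vfinish < p[best].vfinish)
			best = i;
	return (best);
}

/* Charge one quantum to the chosen process, scaled by its weight. */
static void
wfq_charge(struct wproc *p)
{
	p->vfinish += QUANTUM / p->weight;
}

/* Run the pick/charge loop for a number of rounds, counting who ran. */
static void
wfq_run(struct wproc p[], int n, int rounds, int counts[])
{
	int i, r;

	for (r = 0; r < rounds; r++) {
		i = wfq_pick(p, n);
		wfq_charge(&p[i]);
		counts[i]++;
	}
}
```

Over 7 rounds with weights 1, 2 and 4, the processes run 1, 2 and 4 times respectively; everybody makes progress in proportion to its weight.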
>
> Only broken priority-based schedulers have that property. One of
> my incomplete fixes uses weights:
>
> Index: kern_synch.c
> ===================================================================
> RCS file: /home/ncvs/src/sys/kern/kern_synch.c,v
> retrieving revision 1.167
> diff -u -2 -r1.167 kern_synch.c
> --- kern_synch.c 18 Dec 2001 00:27:17 -0000 1.167
> +++ kern_synch.c 19 Dec 2001 16:01:26 -0000
> @@ -936,18 +1058,18 @@
> struct thread *td;
> {
> - struct kse *ke = td->td_kse;
> - struct ksegrp *kg = td->td_ksegrp;
> + struct ksegrp *kg;
>
> - if (td) {
> - ke->ke_cpticks++;
> - kg->kg_estcpu = ESTCPULIM(kg->kg_estcpu + 1);
> - if ((kg->kg_estcpu % INVERSE_ESTCPU_WEIGHT) == 0) {
> - resetpriority(td->td_ksegrp);
> - if (kg->kg_pri.pri_level >= PUSER)
> - kg->kg_pri.pri_level = kg->kg_pri.pri_user;
> - }
> - } else {
> + if (td == NULL)
> panic("schedclock");
> - }
> + td->td_kse->ke_cpticks++;
> + kg = td->td_ksegrp;
> +#ifdef NEW_SCHED
> + kg->kg_estcpu += niceweights[kg->kg_nice - PRIO_MIN];
> +#else
> + kg->kg_estcpu++;
> +#endif
> + resetpriority(kg);
> + if (kg->kg_pri.pri_level >= PUSER)
> + kg->kg_pri.pri_level = kg->kg_pri.pri_user;
> }
I'm curious why you removed the ESTCPULIM and INVERSE_ESTCPU_WEIGHT
calculations even in the OLD_SCHED case. Do these turn out to have
no effect in general?
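[For context, the removed macros implement a clamp-and-scale step something like the sketch below. The constants here are made up for illustration and are not the kernel's actual values: estcpu is clamped (ESTCPULIM) so the derived priority cannot run off the end of the range, then divided down (INVERSE_ESTCPU_WEIGHT) before biasing the user priority.]

```c
#include <assert.h>

/* Illustrative constants only; the real kernel macros differ. */
#define PUSER			160	/* hypothetical user-priority base */
#define PRI_MAX			255
#define INVERSE_ESTCPU_WEIGHT	8
#define ESTCPU_MAX	((PRI_MAX - PUSER) * INVERSE_ESTCPU_WEIGHT)
#define ESTCPULIM(e)	((e) > ESTCPU_MAX ? ESTCPU_MAX : (e))

/* Map accumulated estcpu and nice to a user priority, old-style. */
static int
user_pri(int estcpu, int nice)
{
	int pri;

	pri = PUSER + ESTCPULIM(estcpu) / INVERSE_ESTCPU_WEIGHT + nice;
	if (pri > PRI_MAX)
		pri = PRI_MAX;
	return (pri);
}
```

With the clamp gone, estcpu can grow without bound, which only works if priorities are assigned from estcpu's relative rather than absolute value, as Bruce describes below for the NEW_SCHED case.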
>
> Most of the changes here are to fix style bugs. In the NEW_SCHED case,
> the relative weights for each priority are determined by the niceweights[]
> table. kg->kg_estcpu is limited only by INT_MAX and priorities are
> assigned according to relative values of kg->kg_estcpu (code for this is
> not shown). The NEW_SCHED case has not been tried since before SMPng
> broke scheduling some more by compressing the priority ranges.
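[One way such a niceweights[] table could be filled in, purely as a guess at the idea rather than Bruce's actual table: charge each nice level a geometrically larger estcpu increment per tick, so nicer processes accumulate estcpu faster and sort behind less-nice ones when priorities are assigned from relative kg_estcpu. The 10% ratio is an assumption.]

```c
#include <assert.h>

#define PRIO_MIN	(-20)
#define PRIO_MAX	20
#define NNICE		(PRIO_MAX - PRIO_MIN + 1)

static int niceweights[NNICE];

/* Hypothetical fill: ~10% heavier estcpu charge per nice step. */
static void
init_niceweights(void)
{
	int i, w;

	w = 100;		/* charge per tick at nice -20 */
	for (i = 0; i < NNICE; i++) {
		niceweights[i] = w;
		w += w / 10;
	}
}
```

The table is indexed as in the diff above, niceweights[kg_nice - PRIO_MIN], so nice -20 lands at index 0 and nice +20 at the heaviest entry.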
It is relatively easy to uncompress the priority ranges if that is
desirable. What range is best?
Jake
