From owner-freebsd-arch Sun Jan 26 01:01:53 2003
Date: Sun, 26 Jan 2003 04:01:43 -0500 (EST)
From: Jeff Roberson <jroberson@chesapeake.net>
To: Matthew Dillon
Cc: Steve Kargl, Robert Watson, Gary Jennejohn,
Subject: Re: New scheduler - Interactivity fixes
In-Reply-To: <200301260843.h0Q8hgoZ030572@apollo.backplane.com>
Message-ID: <20030126035429.A64928-100000@mail.chesapeake.net>
Sender: owner-freebsd-arch@FreeBSD.ORG

On Sun, 26 Jan 2003, Matthew Dillon wrote:

> Ok, I've run some preliminary tests w/ ULE. It's a lot better
> vis-a-vis interactive and batch operations. I think one
> thing you can do to get better MP results is to add a bit of
> code to sched_choose(). If sched_choose() cannot find any
> KSEs to run on kseq->ksq_curr or kseq->ksq_next it should
> search the other cpus' queues. I haven't tested your scheduler
> with this, but I note that without it KSEs are left bound to the cpu
> they were originally scheduled on (cpu = ke->ke_oncpu in sched_add()),
> which will create a lot of lost cycles on an SMP box.

I actually have a local patch that does just this. It didn't improve
the situation for my buildworld -j4 on a dual box.
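For illustration, the fallback Matt describes could look roughly like the sketch below. This is only a userland toy, not the real kernel code: the struct and the function name sched_steal() are my own simplifications, with a per-cpu load count standing in for the actual kseq run queues.

```c
#include <stddef.h>

#define NCPU 4

/* Toy stand-in for ULE's per-cpu kseq; "load" is just the number
 * of runnable threads queued on that cpu. */
struct kseq {
	int load;
};

static struct kseq kseqs[NCPU];

/*
 * Pick a cpu to take work from.  Prefer our own queue; if it is
 * empty, fall back to searching the other cpus' queues and steal
 * from the most loaded one.  Returns the cpu work was taken from,
 * or -1 if every queue is idle.
 */
int
sched_steal(int mycpu)
{
	int i, victim, maxload;

	if (kseqs[mycpu].load > 0) {
		kseqs[mycpu].load--;
		return (mycpu);
	}

	victim = -1;
	maxload = 0;
	for (i = 0; i < NCPU; i++) {
		if (i == mycpu)
			continue;
		if (kseqs[i].load > maxload) {
			maxload = kseqs[i].load;
			victim = i;
		}
	}
	if (victim >= 0)
		kseqs[victim].load--;	/* "migrate" one thread to us */
	return (victim);
}
```

Stealing from the most loaded peer rather than the first non-empty one is one arbitrary choice here; it keeps an idle cpu from repeatedly raiding a nearly idle neighbor while a busy one stays backed up.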
I'd like to leave ke_oncpu alone in sched_fork() as you suggest, and
instead move that logic into a new call, sched_exec(). My logic here is
that since exec() completely replaces the vm space, you lose any
locality advantage, so you might as well pick the least loaded cpu.

I think we need both a push and a pull. The push could run
periodically, sort the load of all cpus, and then see how far apart the
least and most loaded are. I need to come up with a metric for load
balancing that is more interesting than the number of entries in the
runq, though: it doesn't take the priority spread into consideration,
and the run queue depth can change very quickly if processes are doing
lots of IO. It would be nice to have something like the total slice
time of all runnable processes, plus processes sleeping for a very
short period of time, on a given cpu. Since the slice size is related
to the priority, you would get a much more even load that way.

Anyway, this all needs lots of experimentation. I was working on that
until the interactivity issues were brought to my attention. That looks
satisfactory now, so I'm going to go back to the MP work. Keep the good
ideas coming!

> My gut feeling is that sched_choose() is the best place to deal with
> this and sched_add() should be left as-is.
>
> I also think you can completely remove sched_pickcpu() without
> any detrimental effects (test that!). Just have the sched_fork()
> code leave ke_oncpu alone (like 4bsd does). My gut feeling is
> that additional work on sched_choose() will yield the best
> improvement.
>
> I'll have some comparative buildworld numbers tomorrow; I've run
> out of time tonight.
>
>     -Matt
>     Matthew Dillon

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-arch" in the body of the message
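As an aside on the slice-weighted metric discussed above, the difference from plain run queue depth can be sketched in a few lines. Again a userland toy with made-up structures, not ULE code: each runnable thread contributes its slice size (which in ULE is derived from priority) rather than counting as 1, and a sched_exec()-style placement picks the cpu with the smallest total.

```c
#include <stddef.h>

#define NCPU		4
#define MAXTHREADS	8

/* Illustrative per-cpu state: the slice size of each runnable
 * thread, standing in for a real run queue. */
struct cpu_load {
	int nthreads;
	int slice[MAXTHREADS];
};

/* Slice-weighted load: sum of slice sizes, not queue depth. */
static int
slice_load(const struct cpu_load *c)
{
	int i, sum;

	sum = 0;
	for (i = 0; i < c->nthreads; i++)
		sum += c->slice[i];
	return (sum);
}

/* sched_exec()-style placement: the vm space is being replaced,
 * so ignore locality and take the least loaded cpu. */
int
pick_least_loaded(const struct cpu_load cpus[], int ncpu)
{
	int i, best;

	best = 0;
	for (i = 1; i < ncpu; i++)
		if (slice_load(&cpus[i]) < slice_load(&cpus[best]))
			best = i;
	return (best);
}
```

The point of the weighting: a cpu with one low-priority thread holding a large slice can be "heavier" than a cpu with three high-priority threads on tiny slices, even though its runq depth is smaller.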