From owner-freebsd-current@FreeBSD.ORG Fri Oct 17 01:34:46 2003
Return-Path: 
Delivered-To: freebsd-current@freebsd.org
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125])
	by hub.freebsd.org (Postfix) with ESMTP id 818D716A4B3
	for ; Fri, 17 Oct 2003 01:34:46 -0700 (PDT)
Received: from mailman.zeta.org.au (mailman.zeta.org.au [203.26.10.16])
	by mx1.FreeBSD.org (Postfix) with ESMTP id 0C5FD43F93
	for ; Fri, 17 Oct 2003 01:34:43 -0700 (PDT)
	(envelope-from bde@zeta.org.au)
Received: from gamplex.bde.org (katana.zip.com.au [61.8.7.246])
	by mailman.zeta.org.au (8.9.3p2/8.8.7) with ESMTP id SAA05561;
	Fri, 17 Oct 2003 18:34:29 +1000
Date: Fri, 17 Oct 2003 18:33:08 +1000 (EST)
From: Bruce Evans
X-X-Sender: bde@gamplex.bde.org
To: Jeff Roberson
In-Reply-To: <20031017022244.W30029-100000@mail.chesapeake.net>
Message-ID: <20031017180118.U7662@gamplex.bde.org>
References: <20031017022244.W30029-100000@mail.chesapeake.net>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
cc: current@freebsd.org
Subject: Re: More ULE bugs fixed.
X-BeenThere: freebsd-current@freebsd.org
X-Mailman-Version: 2.1.1
Precedence: list
List-Id: Discussions about the use of FreeBSD-current
X-List-Received-Date: Fri, 17 Oct 2003 08:34:46 -0000

On Fri, 17 Oct 2003, Jeff Roberson wrote:

> On Fri, 17 Oct 2003, Bruce Evans wrote:
>
> > How would one test if it was an improvement on the 4BSD scheduler?  It
> > is not even competitive in my simple tests.
> > ...
>
> At one point ULE was at least as fast as 4BSD and in most cases faster.
> This is a regression.  I'll sort it out soon.

How much faster?

> > Test 5 for fair scheduling related to niceness:
> >
> > 	for i in -20 -16 -12 -8 -4 0 4 8 12 16 20
> > 	do
> > 		nice -$i sh -c "while :; do echo -n;done" &
> > 	done
> > 	time top -o cpu
> >
> > With SCHED_ULE, this now hangs the system, but it worked yesterday.
> > Today it doesn't get as far as running top and it stops the nfs
> > server responding.

>   661 root    112  -20   900K   608K RUN      0:24 27.80% 27.64% sh
>   662 root    114  -16   900K   608K RUN      0:19 12.43% 12.35% sh
>   663 root    114  -12   900K   608K RUN      0:15 10.66% 10.60% sh
>   664 root    114   -8   900K   608K RUN      0:11  9.38%  9.33% sh
>   665 root    115   -4   900K   608K RUN      0:10  7.91%  7.86% sh
>   666 root    115    0   900K   608K RUN      0:07  6.83%  6.79% sh
>   667 root    115    4   900K   608K RUN      0:06  5.01%  4.98% sh
>   668 root    115    8   900K   608K RUN      0:04  3.83%  3.81% sh
>   669 root    115   12   900K   608K RUN      0:02  2.21%  2.20% sh
>   670 root    115   16   900K   608K RUN      0:01  0.93%  0.93% sh

Perhaps the bug only affects SMP.  The above is for UP (no CPU column).
I see a large difference from the above, at least under SMP: %CPU tapers
off to 0 at nice 0.

BTW, I just noticed that SCHED_4BSD never really worked for the SMP
case.  sched_clock() is called for each CPU, and for N CPUs this has
the same effect as calling sched_clock() N times too often for 1 CPU.
Calling sched_clock() too often was fixed for the UP case in
kern_synch.c 1.83 by introducing a scale factor.  The scale factor is
fixed, so it doesn't help for SMP.

> I think you cvsup'd at a bad time.  I fixed a bug that would have caused
> the system to lock up in this case late last night.  On my system it
> freezes for a few seconds and then returns.  I can stop that by turning
> down the interactivity threshold.

No, I tested with an up to date kernel (sched_ule.c 1.65).

Bruce