From: Barney Cordoba <barney_cordoba@yahoo.com>
Date: Wed, 16 Jul 2008 07:49:03 -0700 (PDT)
To: Steve Kargl
Cc: current@freebsd.org
Subject: Re: ULE scheduling oddity
List-Id: Discussions about the use of FreeBSD-current

--- On Tue, 7/15/08, Steve Kargl wrote:

> From: Steve Kargl
> Subject: ULE scheduling oddity
> To: freebsd-current@freebsd.org
> Date: Tuesday, July 15, 2008, 1:59 PM
>
> It appears that the ULE scheduler is not providing a fair
> slice to running processes.
>
> I have a dual-cpu, quad-core opteron based system:
>
> node21:kargl[229] uname -a
> FreeBSD node21.cimu.org 8.0-CURRENT FreeBSD 8.0-CURRENT #3: Wed Jun 4 16:22:49 PDT 2008 kargl@node10.cimu.org:src/sys/HPC amd64
>
> If I start exactly 8 processes, each gets 100% WCPU according to
> top. If I add two additional processes, then I observe
>
> last pid:  3874;  load averages:  9.99,  9.76,  9.43   up 0+19:54:44  10:51:18
> 41 processes:  11 running, 30 sleeping
> CPU:  100% user,  0.0% nice,  0.0% system,  0.0% interrupt,  0.0% idle
> Mem: 5706M Active, 8816K Inact, 169M Wired, 84K Cache, 108M Buf, 25G Free
> Swap: 4096M Total, 4096M Free
>
>   PID USERNAME  THR PRI NICE   SIZE    RES STATE  C   TIME    WCPU COMMAND
>  3836 kargl       1 118    0   577M   572M CPU7   7   6:37 100.00% kzk90
>  3839 kargl       1 118    0   577M   572M CPU2   2   6:36 100.00% kzk90
>  3849 kargl       1 118    0   577M   572M CPU3   3   6:33 100.00% kzk90
>  3852 kargl       1 118    0   577M   572M CPU0   0   6:25 100.00% kzk90
>  3864 kargl       1 118    0   577M   572M RUN    1   6:24 100.00% kzk90
>  3858 kargl       1 112    0   577M   572M RUN    5   4:10  78.47% kzk90
>  3855 kargl       1 110    0   577M   572M CPU5   5   4:29  67.97% kzk90
>  3842 kargl       1 110    0   577M   572M CPU4   4   4:24  66.70% kzk90
>  3846 kargl       1 107    0   577M   572M RUN    6   3:22  53.96% kzk90
>  3861 kargl       1 107    0   577M   572M CPU6   6   3:15  53.37% kzk90
>
> I would have expected to see a more evenly distributed WCPU of around
> 80% for each process. So, do I need to tune one or more of the
> following sysctl values? Is this a side effect of cpu affinity
> being a tad too aggressive?
>
> node21:kargl[231] sysctl -a | grep sched | more
> kern.sched.preemption: 1
> kern.sched.steal_thresh: 3
> kern.sched.steal_idle: 1
> kern.sched.steal_htt: 1
> kern.sched.balance_interval: 133
> kern.sched.balance: 1
> kern.sched.affinity: 1
> kern.sched.idlespinthresh: 4
> kern.sched.idlespins: 10000
> kern.sched.static_boost: 160
> kern.sched.preempt_thresh: 64
> kern.sched.interact: 30
> kern.sched.slice: 13
> kern.sched.name: ULE
>
> --
> Steve

I don't see why "equal" distribution is, or should be, a goal; equal
shares don't guarantee optimal throughput. Given that cache is shared
between only 2 cpus, it may well be more efficient to run on 2 CPUs
when the 3rd or 4th isn't needed. It works pretty darn well, IMO. It's
not as if your little app is the only thing going on in the system.
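[If one did want to test whether affinity is the cause, the knobs
quoted above can be adjusted at runtime. The values below are purely
illustrative experiments, not recommended settings:]

```shell
# Illustrative only: make ULE migrate threads more eagerly and see
# whether the WCPU numbers even out for the 10-process run.
sysctl kern.sched.steal_thresh=1   # steal from any queue with a surplus thread
sysctl kern.sched.affinity=0       # shrink the window in which a thread is
                                   # considered to still have cache affinity
top -P                             # watch per-CPU load while the workers run
```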