Date:      Wed, 16 Jul 2008 07:49:03 -0700 (PDT)
From:      Barney Cordoba <barney_cordoba@yahoo.com>
To:        Steve Kargl <sgk@troutmask.apl.washington.edu>
Cc:        current@freebsd.org
Subject:   Re: ULE scheduling oddity
Message-ID:  <565436.13205.qm@web63915.mail.re1.yahoo.com>
In-Reply-To: <20080715175944.GA80901@troutmask.apl.washington.edu>

--- On Tue, 7/15/08, Steve Kargl <sgk@troutmask.apl.washington.edu> wrote:

> From: Steve Kargl <sgk@troutmask.apl.washington.edu>
> Subject: ULE scheduling oddity
> To: freebsd-current@freebsd.org
> Date: Tuesday, July 15, 2008, 1:59 PM
> It appears that the ULE scheduler is not providing a fair slice to running processes.
> 
> I have a dual-CPU, quad-core Opteron based system with
> node21:kargl[229] uname -a
> FreeBSD node21.cimu.org 8.0-CURRENT FreeBSD 8.0-CURRENT #3: Wed Jun  4 16:22:49 PDT 2008 kargl@node10.cimu.org:src/sys/HPC amd64
> 
> If I start exactly 8 processes, each gets 100% WCPU according to top.  If I add two additional processes, then I observe
> 
> last pid:  3874;  load averages:  9.99,  9.76,  9.43  up 0+19:54:44  10:51:18
> 41 processes:  11 running, 30 sleeping
> CPU:  100% user,  0.0% nice,  0.0% system,  0.0% interrupt,  0.0% idle
> Mem: 5706M Active, 8816K Inact, 169M Wired, 84K Cache, 108M Buf, 25G Free
> Swap: 4096M Total, 4096M Free
> 
>   PID USERNAME    THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
>  3836 kargl         1 118    0   577M   572M CPU7   7   6:37 100.00% kzk90
>  3839 kargl         1 118    0   577M   572M CPU2   2   6:36 100.00% kzk90
>  3849 kargl         1 118    0   577M   572M CPU3   3   6:33 100.00% kzk90
>  3852 kargl         1 118    0   577M   572M CPU0   0   6:25 100.00% kzk90
>  3864 kargl         1 118    0   577M   572M RUN    1   6:24 100.00% kzk90
>  3858 kargl         1 112    0   577M   572M RUN    5   4:10  78.47% kzk90
>  3855 kargl         1 110    0   577M   572M CPU5   5   4:29  67.97% kzk90
>  3842 kargl         1 110    0   577M   572M CPU4   4   4:24  66.70% kzk90
>  3846 kargl         1 107    0   577M   572M RUN    6   3:22  53.96% kzk90
>  3861 kargl         1 107    0   577M   572M CPU6   6   3:15  53.37% kzk90
> 
> I would have expected to see a more evenly distributed WCPU of around 80% for each process.  So, do I need to tune one or more of the following sysctl values?  Is this a side effect of cpu affinity being a tad too aggressive?
> 
> node21:kargl[231] sysctl -a | grep sched | more
> kern.sched.preemption: 1
> kern.sched.steal_thresh: 3
> kern.sched.steal_idle: 1
> kern.sched.steal_htt: 1
> kern.sched.balance_interval: 133
> kern.sched.balance: 1
> kern.sched.affinity: 1
> kern.sched.idlespinthresh: 4
> kern.sched.idlespins: 10000
> kern.sched.static_boost: 160
> kern.sched.preempt_thresh: 64
> kern.sched.interact: 30
> kern.sched.slice: 13
> kern.sched.name: ULE
> 
> -- 
> Steve
> _______________________________________________
> freebsd-current@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-current
> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"
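For anyone who wants to reproduce this: the kzk90 source isn't posted in the thread, but assuming it is a pure CPU-bound number cruncher, a minimal hypothetical stand-in (call it spin.c) behaves the same for scheduling purposes. Ten copies competing for eight cores should settle at 8 * 100% / 10 = 80% WCPU each under a perfectly fair scheduler, which is the expectation Steve describes.

    /*
     * spin.c -- hypothetical stand-in for the kzk90 load, assuming
     * that load is purely CPU-bound.  Build with: cc -o spin spin.c
     */
    int
    main(void)
    {
            volatile unsigned long n = 0;   /* volatile: loop survives optimization */

            for (;;)
                    n++;
            /* NOTREACHED */
    }

Ten copies can be started from sh with

    for i in $(jot 10); do ./spin & done

(jot(1) is in the base system) and watched under top(1).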

I don't see why "equal" distribution is, or should be, a goal; an equal split does not guarantee an optimal one. Given that the cache is shared between only 2 CPUs, it may well be more efficient to keep the work on 2 CPUs when a 3rd or 4th isn't needed.
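One way to separate ULE's placement decisions from the cache-topology question is to pin the load explicitly and compare. This sketch assumes the recently added cpuset(1) utility is present in Steve's 8.0-CURRENT build, and reuses the hypothetical spin stand-in above:

    # pin pairs of spinners to specific core pairs and compare WCPU:
    cpuset -l 0,1 ./spin &
    cpuset -l 0,1 ./spin &
    cpuset -l 2,3 ./spin &
    cpuset -l 2,3 ./spin &
    # print the CPU mask of one of the running kzk90 processes:
    cpuset -g -p 3836

If the pinned copies split their pair evenly while the free-running ones stay lopsided, that points at the balancer rather than at affinity as such.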

It works pretty darn well, IMO. It's not like your little app is the only thing going on in the system.
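That said, the knobs Steve listed are live read/write sysctls, so experimenting costs nothing. As a hedged example only (the values below are guesses, not recommendations; his reported defaults are steal_thresh=3 and balance_interval=133), one could let idle CPUs steal work sooner and run the balancer more often:

    sysctl kern.sched.steal_thresh=2
    sysctl kern.sched.balance_interval=66

Whether either change actually evens out the WCPU numbers above is an open question.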