Date:      Wed, 19 Aug 1998 07:40:15 +0200
From:      Lars Köller <Lars.Koeller@post.uni-bielefeld.de>
To:        Terry Lambert <tlambert@primenet.com>
Cc:        Lars.Koeller@post.uni-bielefeld.de (Lars Köller), chuckr@glue.umd.edu, freebsd-smp@FreeBSD.ORG
Subject:   Re: Per processor load? 
Message-ID:  <199808190540.FAA24476@mitch.hrz.uni-bielefeld.de>
In-Reply-To: tlambert's message of Tue, 18 Aug 1998 18:38:13 -0000. <199808181838.LAA20956@usr06.primenet.com> 

----------

Hello Terry!

First of all, thanks for the detailed answer!

In reply to Terry Lambert who wrote:
 
 > >  > Gathering this type of statistic could be actively harmful to CPU
 > >  > latency coming out of the HLT condition, and could be as high as 10%
 > >  > to 20% of the system's ability to do work.
 > > 
 > > The basic idea was to treat the CPUs as separate systems, each with 
 > > its own load. This is well known from HP-UX, Linux, Solaris, ...
 > > They display the following in, e.g. top:
 > > 
 > > System: share                                        Tue Aug 18 07:30:58 1998
 > > Load averages: 2.42, 2.29, 2.28
 > > 280 processes: 273 sleeping, 5 running, 2 zombies
 > > Cpu states:
 > > CPU   LOAD   USER   NICE    SYS   IDLE  BLOCK  SWAIT   INTR   SSYS
 > >  0    2.62   0.4%  97.6%   2.0%   0.0%   0.0%   0.0%   0.0%   0.0%
 > >  1    2.22   0.8%  97.0%   2.2%   0.0%   0.0%   0.0%   0.0%   0.0%
 > > ---   ----  -----  -----  -----  -----  -----  -----  -----  -----
 > > avg   2.42   0.6%  97.2%   2.2%   0.0%   0.0%   0.0%   0.0%   0.0%

Sorry, I forgot to mention that this top was running on HP-UX 10.20.

 > This basically implies a scheduler artifact; each CPU must have its own
 > ready-to-run queue for you to get this statistic; I'm sure that on
 > Solaris, at least, you have to know how to grovel /dev/kmem for the
 > information.
 > 
 > FreeBSD is symmetric.  That is, there is only one ready-to-run queue
 > for all processors.  Anything else would result in potential
 > job starvation (inequity, because on one processor the jobs you are
 > competing with use 75% of their quantum, being compute intensive,
 > and on the other they use only 10% of their quantum, being I/O
 > intensive).
 > 
 > ..... snip ...
 >
 > As far as INTR time goes, I notice it's not reported.  This is not
 > surprising.  In Symmetric (APIC) I/O, or "virtual wire mode", the
 > interrupt is directed to any available processor, lowest APIC ID
 > first (see the Intel MP Spec version 1.4).  It's really not possible
 > to determine which CPU is actually getting the interrupt unless you
 > modify the ISR to record the APIC ID and reverse-look it up (an
 > expensive operation) on each interrupt.

You are right, only LOAD, USER, NICE, SYS and IDLE are displayed; the 
others are always zero!
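
(For reference: on a modern FreeBSD the per-CPU state counters can be
read through the kern.cp_times sysctl, one CPUSTATES-sized slice of tick
counters per CPU.  That sysctl, and the C sketch below, are assumptions
about a later FreeBSD than the one discussed in this thread:)

/*
 * Minimal sketch: read the per-CPU user/nice/sys/intr/idle tick
 * counters through the kern.cp_times sysctl (an assumption; it
 * postdates this thread) and print since-boot percentages.  A real
 * monitor would take two samples and report the difference.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>
#include <stdlib.h>

/* Indices into one per-CPU slice, mirroring FreeBSD's CP_* layout. */
enum { ST_USER, ST_NICE, ST_SYS, ST_INTR, ST_IDLE, ST_MAX };

int
main(void)
{
    size_t len = 0;
    long *times;
    int ncpu, cpu, i;

    /* Ask the kernel how large the array is, then fetch it. */
    if (sysctlbyname("kern.cp_times", NULL, &len, NULL, 0) == -1) {
        perror("kern.cp_times");
        return (1);
    }
    if ((times = malloc(len)) == NULL)
        return (1);
    if (sysctlbyname("kern.cp_times", times, &len, NULL, 0) == -1) {
        perror("kern.cp_times");
        return (1);
    }
    ncpu = len / (ST_MAX * sizeof(long));

    for (cpu = 0; cpu < ncpu; cpu++) {
        long *t = times + cpu * ST_MAX;
        long total = 0;

        for (i = 0; i < ST_MAX; i++)
            total += t[i];
        if (total == 0)
            total = 1;              /* absent CPU slot */
        printf("CPU %2d: %3ld%% user %3ld%% nice %3ld%% sys "
            "%3ld%% intr %3ld%% idle\n", cpu,
            100 * t[ST_USER] / total, 100 * t[ST_NICE] / total,
            100 * t[ST_SYS] / total, 100 * t[ST_INTR] / total,
            100 * t[ST_IDLE] / total);
    }
    free(times);
    return (0);
}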

 > 
 > I notice the other fields are not reported as well, probably for
 > similar reasons.
 > 
 > 
 > > Memory: 180344K (29336K) real, 256220K (66940K) virtual, 5160K free  Page# 1/26
 > > 
 > > CPU TTY   PID USERNAME PRI NI   SIZE    RES STATE    TIME %WCPU  %CPU COMMAND
 > >  0    ? 19703 mcfutz   251 25   632K   116K run      6:05 80.27 80.13 schlu
 > >  1    ? 19721 physik   251 25   632K   112K run      4:52 49.42 49.34 process
 > >  1    ?  5375 plond    251 25 34756K 15900K run   2173:38 46.66 46.58 l502.exe
 > 
 > Pretty obviously, there aren't two running processes on that one CPU. A
 > CPU can be in user space in only one process at a time.  8-).

Grinnnn!

 > I think what they are doing, since they can tell you the CPU, is either
 > recording which CPU the process last ran on, *or* reporting which of
 > the multiple run queues the program is on.
 > 
 > ... snip ...
 >
 > Unfortunately, displaying this information is complicated, in that
 > you have to know what you are displaying...
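
(Aside: a later FreeBSD does exactly the first of these -- the kernel
remembers the CPU a process last ran on and exports it through struct
kinfo_proc, so a top-like tool can print it.  The kvm_getprocs() and
ki_lastcpu names in this rough sketch are assumptions about that later
API, not anything that existed when this thread was written:)

/*
 * Sketch: list every process with the CPU it last ran on.
 * ki_lastcpu and the sysctl-backed kvm_openfiles() idiom are
 * assumptions about a later FreeBSD.  Link with -lkvm.
 */
#include <sys/types.h>
#include <sys/param.h>
#include <sys/sysctl.h>
#include <sys/user.h>
#include <fcntl.h>
#include <kvm.h>
#include <limits.h>
#include <stdio.h>

int
main(void)
{
    char errbuf[_POSIX2_LINE_MAX];
    struct kinfo_proc *kp;
    kvm_t *kd;
    int i, cnt;

    /* Open the running kernel via sysctl, not /dev/kmem. */
    kd = kvm_openfiles(NULL, "/dev/null", NULL, O_RDONLY, errbuf);
    if (kd == NULL) {
        fprintf(stderr, "kvm_openfiles: %s\n", errbuf);
        return (1);
    }

    kp = kvm_getprocs(kd, KERN_PROC_PROC, 0, &cnt);
    if (kp == NULL) {
        fprintf(stderr, "kvm_getprocs: %s\n", kvm_geterr(kd));
        return (1);
    }

    for (i = 0; i < cnt; i++)
        printf("%6d  %-16s  last CPU %d\n",
            (int)kp[i].ki_pid, kp[i].ki_comm, (int)kp[i].ki_lastcpu);

    kvm_close(kd);
    return (0);
}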

I see ... I'd better concentrate on porting xperfmon++ to libdevstat, to 
bring it up to date with the new CAM code!
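
(For anyone curious what the libdevstat side of that port might look
like, here is a rough sketch.  The devstat_-prefixed function names,
devstat_errbuf, and the bytes[] field layout are assumptions taken from
a later version of the library; the libdevstat of this era used
slightly different, unprefixed names:)

/*
 * Rough sketch: dump per-device byte counters through libdevstat
 * (link with -ldevstat).  Names below follow the later devstat_*
 * API and are assumptions, not the 1998 interface.
 */
#include <sys/types.h>
#include <devstat.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(void)
{
    struct statinfo stats;
    struct devstat *ds;
    int i;

    /* devstat_getdevs() fills in a caller-provided devinfo. */
    memset(&stats, 0, sizeof(stats));
    stats.dinfo = calloc(1, sizeof(struct devinfo));
    if (stats.dinfo == NULL)
        return (1);

    if (devstat_getdevs(NULL, &stats) == -1) {
        fprintf(stderr, "devstat_getdevs: %s\n", devstat_errbuf);
        return (1);
    }

    for (i = 0; i < stats.dinfo->numdevs; i++) {
        ds = &stats.dinfo->devices[i];
        printf("%s%d: %ju bytes read, %ju bytes written\n",
            ds->device_name, ds->unit_number,
            (uintmax_t)ds->bytes[DEVSTAT_READ],
            (uintmax_t)ds->bytes[DEVSTAT_WRITE]);
    }
    return (0);
}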

Thanks again and best wishes

Lars
-- 
E-Mail:                                          |  Lars Köller
  Lars.Koeller@Uni-Bielefeld.DE                  |  UNIX Sysadmin
  lkoeller@cc.FH-Lippe.DE                        |  Computing Center
PGP-key:                                         |  University of Bielefeld
  http://www.nic.surfnet.nl/pgp/pks-toplev.html  |  Germany
----------- FreeBSD, what else? ---- http://www.freebsd.org -------------


