Date:      Sun, 06 Oct 1996 04:43:20 +0800
From:      Peter Wemm <peter@spinner.dialix.com>
To:        Chris Csanady <ccsanady@friley216.res.iastate.edu>
Cc:        freebsd-smp@freebsd.org
Subject:   Re: Second processor does nothin?! 
Message-ID:  <199610052043.EAA01848@spinner.DIALix.COM>
In-Reply-To: Your message of "Sat, 05 Oct 1996 15:12:51 EST." <199610052012.PAA00251@friley216.res.iastate.edu> 

Chris Csanady wrote:
> Oops... my bad.  I didn't notice that the second processor is not being started
> automatically anymore.  After I turn it on and set things to run on
> the other cpu, things work fine.  Top shows that they are always running on
> CPU1.. :(  I'd say something is Not Quite Right with the scheduling.  I will
> ponder it more, I guess.  One thing I did notice was that the idle loops were
> both running on CPU0.. perhaps this is the problem.
> 
> Laters,
> Chris Csanady

I've tweaked it a bit more since I committed the top changes to the ports
collection:

Index: files/m_freebsd2.c
===================================================================
RCS file: /home/ncvs/ports/sysutils/top/files/m_freebsd2.c,v
retrieving revision 1.9
diff -u -r1.9 m_freebsd2.c
--- m_freebsd2.c	1996/10/05 13:42:31	1.9
+++ m_freebsd2.c	1996/10/05 16:22:48
@@ -20,8 +20,6 @@
  * $Id: m_freebsd2.c,v 1.9 1996/10/05 13:42:31 peter Exp $
  */
 
-
-
 #define LASTPID      /**/  /* use last pid, compiler depended */
 #define VM_REAL      /**/  /* use the same values as vmstat -s */
 #define USE_SWAP     /**/  /* use swap usage (pstat -s), 
@@ -128,12 +126,12 @@
  */
 
 static char header[] =
-  "  PID X        PRI NICE  SIZE   RES STATE    TIME   WCPU    CPU COMMAND";
+  "  PID X        PRI NICE SIZE   RES  STATE    TIME   WCPU    CPU COMMAND";
 /* 0123456   -- field to fill in starts at header+6 */
 #define UNAME_START 6
 
 #define Proc_format \
-	"%5d %-8.8s %3d %4d%6s %5s %-6.6s%7s %5.2f%% %5.2f%% %.14s"
+	"%5d %-8.8s %3d%4d%6s %5s %-7.7s%7s %5.2f%% %5.2f%% %.14s"
 
 
 /* process state names for the "STATE" column of the display */
@@ -561,23 +559,28 @@
 	case SRUN:
 #ifdef P_IDLEPROC	/* FreeBSD SMP kernel */
 	    if (PP(pp, p_oncpu) >= 0)
-		sprintf(status, "CPU%d/%d", PP(pp, p_oncpu), PP(pp, p_lastcpu));
+		sprintf(status, " CPU%d", PP(pp, p_oncpu));
 	    else
-		sprintf(status, "RUN/%d", PP(pp, p_lastcpu));
-#else
-	    strcpy(status, "RUN");
 #endif
+		strcpy(status, " RUN");
 	    break;
 	case SSLEEP:
 	    if (PP(pp, p_wmesg) != NULL) {
-		sprintf(status, "%.6s", EP(pp, e_wmesg));
+		sprintf(status, " %.6s", EP(pp, e_wmesg));
 		break;
 	    }
 	    /* fall through */
 	default:
-	    sprintf(status, "%.6s", state_abbrev[(unsigned char) PP(pp, p_stat)]);
+	    sprintf(status, " %.6s", state_abbrev[(unsigned char) PP(pp, p_stat)]);
 	    break;
     }
+#ifdef P_IDLEPROC	/* FreeBSD SMP kernel */
+    status[0] = PP(pp, p_lastcpu);
+    if (status[0] > 9)
+	status[0] += 'A';
+    else
+	status[0] += '0';
+#endif
 
     /* format this entry */
     sprintf(fmt,

This shifts the columns slightly so there is always room for the "lastcpu"
field.  It tells you a lot more about the scheduling habits, since you can
see where the sleeping processes last ran.

There is quite a spread:

load averages:  0.81,  0.55,  0.58    04:35:14
65 processes:  4 running, 61 sleeping

Mem: 23M Active, 6756K Inact, 11M Wired, 5024K Cache, 3136K Buf, 704K Free
Swap: 160M Total, 14M Used, 146M Free, 9% Inuse

  PID USERNAME PRI NICE SIZE   RES  STATE    TIME   WCPU    CPU COMMAND
    6 root      -6   0    0K   12K 1RUN      0:00 37.23% 37.23% cpuidle1
    5 root      -6   0    0K   12K 1RUN      0:00 34.45% 34.45% cpuidle0
  341 root      92   0 1080K 1488K 1CPU1     0:29 16.15% 14.95% perl
 1382 peter     33   0  312K  988K 0CPU0     0:00  0.00%  0.00% top
    4 root      28   0    0K   12K 1update   0:06  0.00%  0.00% update
    3 root      28   0    0K   12K 0psleep   0:00  0.00%  0.00% vmdaemon
  212 peter     18   0  852K  908K 0pause    0:05  0.00%  0.00% tcsh
  213 peter     18   0  796K  820K 1pause    0:03  0.00%  0.00% tcsh
  211 root      18   0  860K  488K 1pause    0:05  0.00%  0.00% tcsh
  145 root      18   0  268K  356K 0pause    0:01  0.00%  0.00% cron
   22 root      10   0   20M  940K 1mfsidl   0:00  0.00%  0.00% mount_mfs
29339 peter     10   0  312K  780K 0wait     0:00  0.00%  0.00% repl
  340 root      10   0  320K  688K 1wait     0:00  0.00%  0.00% make
 1353 root      10   0  384K  460K 1wait     0:00  0.00%  0.00% make
  339 root      10   0  576K  228K 0wait     0:00  0.00%  0.00% sh
21776 root      10   0  576K  216K 0wait     0:00  0.00%  0.00% sh
21789 root      10   0  576K  216K 0wait     0:00  0.00%  0.00% sh
21780 root      10   0  576K  216K 0wait     0:00  0.00%  0.00% sh
    1 root      10   0  448K   76K 1wait     0:00  0.00%  0.00% init
  123 root      10   0  208K   12K 0nfsidl   0:00  0.00%  0.00% nfsiod

And, as you can see from this sample, top happened to be running on cpu0
while taking the snapshot, and both idleprocs had been on #1 last. Both 
cpus were actually running things during the snapshot.

I suspect that the reason things tend to run on cpu#1 first is that cpu1
is never interrupted, except by traps generated by the process it is
currently executing.  This probably biases things somewhat: when a
user-mode process starts up on cpu0, it won't be long before its quantum
expires on #0 and cpu#1 grabs it..  And it'll stay there as long as it
pleases.  This is probably enough to explain the bias.

Cheers,
-Peter
