Date:      Mon, 12 May 2003 14:04:50 +1000 (EST)
From:      Bruce Evans <bde@zeta.org.au>
To:        Don Bowman <don@sandvine.com>
Cc:        freebsd-net@freebsd.org
Subject:   RE: polling(4) and idle time/cpu usage percentages
Message-ID:  <20030512133324.C77949@gamplex.bde.org>
In-Reply-To: <FE045D4D9F7AED4CBFF1B3B813C8533701B3660D@mail.sandvine.com>
References:  <FE045D4D9F7AED4CBFF1B3B813C8533701B3660D@mail.sandvine.com>

On Sun, 11 May 2003, Don Bowman wrote:

> From: Bruce Evans [mailto:bde@zeta.org.au]
> > Did you try my hack?
>
> I tried the hack, as below. The other thing that makes idle
> wildly inaccurate is the symmetric multi-threading on the xeon
> (aka hyperthreading).

The variable needs to be per-cpu for the SMP case.  Perhaps there
are other complications for SMP (from having to forward clock interrupts).
Hyperthreading might increase them.  Anyway, get the !SMP case working
first.

> Index: kern_clock.c
> ===================================================================
> RCS file: /usr/cvs/src/sys/kern/kern_clock.c,v
> retrieving revision 1.105.2.9.1000.2
> diff -U3 -r1.105.2.9.1000.2 kern_clock.c
> --- kern_clock.c        13 Feb 2003 23:05:58 -0000      1.105.2.9.1000.2
> +++ kern_clock.c        10 May 2003 23:41:47 -0000
> @@ -68,6 +68,7 @@
>  #endif
>
>  #ifdef DEVICE_POLLING
> +extern int in_polling;

Per-cpu variables are complicated to initialize in RELENG_4.  I think
an array indexed by cpuid can be used with little cost here (cpuid is
a per-cpu global giving the cpu number).
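
Something like this (untested sketch; the array size is just a
placeholder for whatever RELENG_4's max-cpu constant is, and on a
!SMP kernel the index is simply 0):

	/* kern_poll.c: one flag per cpu, indexed by the `cpuid' global. */
	#define	POLL_MAXCPU	16	/* placeholder for the real constant */
	int	in_polling[POLL_MAXCPU];

	/* in idle_poll(), around the polling work: */
	in_polling[cpuid] = 1;
	/* ... poll ... */
	in_polling[cpuid] = 0;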

> @@ -550,6 +551,11 @@
>                 } else if (p != NULL) {
>                         p->p_sticks++;
>                         cp_time[CP_SYS]++;
> +#if defined(DEVICE_POLLING)
> +               } else if (in_polling) {

Maybe in_polling[cpuid].

> +                       p->p_sticks++;

Don't increment this.  p should always be NULL here.  (See the sketch
after this hunk.)

> +                       cp_time[CP_SYS]++;
> +#endif
>                 } else
>                         cp_time[CP_IDLE]++;
>         }
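
With a per-cpu flag, that hunk would look something like this (sketch
only; p is NULL in this branch, so it is not touched):

	#if defined(DEVICE_POLLING)
			} else if (in_polling[cpuid]) {
				/* p is NULL here; just charge the tick to system time. */
				cp_time[CP_SYS]++;
	#endif
			} else
				cp_time[CP_IDLE]++;
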
> Index: kern_poll.c
> ===================================================================
> RCS file: /usr/cvs/src/sys/kern/kern_poll.c,v
> retrieving revision 1.2.2.4.1000.1
> diff -U3 -r1.2.2.4.1000.1 kern_poll.c
> --- kern_poll.c 10 Feb 2003 16:49:19 -0000      1.2.2.4.1000.1
> +++ kern_poll.c 10 May 2003 23:37:11 -0000
> @@ -54,6 +54,8 @@
>  void ether_poll(int);                  /* polling while in trap        */
>  int idle_poll(void);                   /* poll while in idle loop      */
>
> +int in_polling;
> +
>  /*
>   * Polling support for [network] device drivers.
>   *
> @@ -268,11 +270,13 @@
>  {
>         if (poll_in_idle_loop && poll_handlers > 0) {
>                 int s = splimp();
> +               in_polling = 1;
>                 enable_intr();
>                 ether_poll(poll_each_burst);
>                 disable_intr();
>                 splx(s);
>                 vm_page_zero_idle();
> +               in_polling = 0;
>                 return 1;
>         } else
>                 return vm_page_zero_idle();
>

Maybe set the variable for the whole function, and give it a name that
doesn't say "polling", so that it also counts work done by
vm_page_zero_idle().  The above is better if you just want to count the
network overhead.
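
E.g. (sketch only; `idle_busy' is just an illustrative name and the
per-cpu array from above is assumed):

	int
	idle_poll(void)
	{
		int did_work;

		idle_busy[cpuid] = 1;	/* also covers vm_page_zero_idle() */
		if (poll_in_idle_loop && poll_handlers > 0) {
			int s = splimp();

			enable_intr();
			ether_poll(poll_each_burst);
			disable_intr();
			splx(s);
			vm_page_zero_idle();
			did_work = 1;
		} else
			did_work = vm_page_zero_idle();
		idle_busy[cpuid] = 0;
		return (did_work);
	}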

Since the null pointer in that statclock() branch apparently never gets
dereferenced (it would panic if it were), the branch apparently isn't
being reached, so the above doesn't work.  I think you aren't actually
calling idle_poll() for the SMP case.  swtch.s has a separate idle loop
for the SMP case in RELENG_4, and only the !SMP case calls the above
unless you have changed it.  So the idle time may actually be idle.

Bruce


