From: Bruce Evans <bde@zeta.org.au>
To: Don Bowman
Cc: freebsd-net@freebsd.org
Date: Mon, 12 May 2003 14:04:50 +1000 (EST)
Subject: RE: polling(4) and idle time/cpu usage percentages
Message-ID: <20030512133324.C77949@gamplex.bde.org>

On Sun, 11 May 2003, Don Bowman wrote:

> From: Bruce Evans [mailto:bde@zeta.org.au]
> > Did you try my hack?
>
> I tried the hack, as below.  The other thing that makes idle
> wildly inaccurate is the symmetric multi-threading on the Xeon
> (aka hyperthreading).

The variable needs to be per-cpu for the SMP case.  Perhaps there are
other complications for SMP (from having to forward clock interrupts).
Hyperthreading might increase them.  Anyway, get the !SMP case working
first.
> Index: kern_clock.c
> ===================================================================
> RCS file: /usr/cvs/src/sys/kern/kern_clock.c,v
> retrieving revision 1.105.2.9.1000.2
> diff -U3 -r1.105.2.9.1000.2 kern_clock.c
> --- kern_clock.c	13 Feb 2003 23:05:58 -0000	1.105.2.9.1000.2
> +++ kern_clock.c	10 May 2003 23:41:47 -0000
> @@ -68,6 +68,7 @@
>  #endif
>
>  #ifdef DEVICE_POLLING
> +extern int in_polling;

Per-cpu variables are complicated to initialize in RELENG_4.  I think
an array with index cpuid can be used with little cost here (cpuid is
a per-cpu global giving the cpu number).

> @@ -550,6 +551,11 @@
>  	} else if (p != NULL) {
>  		p->p_sticks++;
>  		cp_time[CP_SYS]++;
> +#if defined(DEVICE_POLLING)
> +	} else if (in_polling) {

Maybe in_polling[cpuid].

> +		p->p_sticks++;

Don't increment this.  p should always be NULL here.

> +		cp_time[CP_SYS]++;
> +#endif
>  	} else
>  		cp_time[CP_IDLE]++;
>  }

> Index: kern_poll.c
> ===================================================================
> RCS file: /usr/cvs/src/sys/kern/kern_poll.c,v
> retrieving revision 1.2.2.4.1000.1
> diff -U3 -r1.2.2.4.1000.1 kern_poll.c
> --- kern_poll.c	10 Feb 2003 16:49:19 -0000	1.2.2.4.1000.1
> +++ kern_poll.c	10 May 2003 23:37:11 -0000
> @@ -54,6 +54,8 @@
>  void ether_poll(int);	/* polling while in trap */
>  int idle_poll(void);	/* poll while in idle loop */
>
> +int in_polling;
> +
>  /*
>   * Polling support for [network] device drivers.
>   *
> @@ -268,11 +270,13 @@
>  {
>  	if (poll_in_idle_loop && poll_handlers > 0) {
>  		int s = splimp();
> +		in_polling = 1;
>  		enable_intr();
>  		ether_poll(poll_each_burst);
>  		disable_intr();
>  		splx(s);
>  		vm_page_zero_idle();
> +		in_polling = 0;
>  		return 1;
>  	} else
>  		return vm_page_zero_idle();

Maybe set the variable for the whole function, and name it without
using "polling", so that it also counts the work done by
vm_page_zero_idle().  The above is better if you just want to count
network overhead.
Since the null pointer is apparently never followed in statclock()
(the patched branch does p->p_sticks++ with p == NULL, which would
panic if it ever ran), the above code apparently isn't being executed.
I think you aren't actually calling it for the SMP case.  swtch.s has
a separate idle loop for the SMP case in RELENG_4; only the !SMP case
calls idle_poll(), unless you have changed it.  So the idle time may
actually be idle.

Bruce