From owner-freebsd-current@FreeBSD.ORG Tue Sep  5 08:26:17 2006
Date: Tue, 05 Sep 2006 09:26:11 +0100
From: Dominic Marks
To: vova@fbsd.ru
Cc: current
Message-ID: <44FD34A3.2090101@goodforbusiness.co.uk>
In-Reply-To: <1157442358.2048.6.camel@localhost>
Subject: Re: wired top (and others) behavior - broken CPU usage reporting ?

Vladimir Grebenschikov wrote:
> Hi
>
> I have noticed that it is no longer possible to find out which process
> is eating all the CPU time with top, vmstat and tools like that. This
> started happening on -CURRENT some (long) time ago.
>
> Here is the vmstat (vm mode) output:
>
>     3.3%Sys   0.8%Intr  95.9%User  0.0%Nice  0.0%Idle
> |    |    |    |    |    |    |    |    |    |    |
> ==>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> [remaining columns of the screen trimmed]
>
> 95.9% of CPU time used in user space.
>
> Now the top output (sorted by CPU):
>
> last pid:  2024;  load averages:  1.03,  0.65,  0.40   up 0+01:28:34  11:38:50
> 120 processes: 4 running, 116 sleeping
> CPU states: 95.7% user,  0.0% nice,  3.5% system,  0.8% interrupt,  0.0% idle
> Mem: 589M Active, 209M Inact, 146M Wired, 512K Cache, 111M Buf, 46M Free
> Swap: 1200M Total, 1200M Free
>
>   PID USERNAME   THR PRI NICE   SIZE    RES STATE    TIME   WCPU COMMAND
>   927 root         1  97    0 52084K 28172K select   1:02  2.98% Xorg
>  1128 vova         1  96    0 21172K 13676K select   0:10  0.98% metacity
>  1252 vova         1  96    0 38024K 29884K select   1:56  0.63% skype_bin
>  1386 vova        11 126    0   516M   470M RUN      3:42  0.00% evolution-2.6
>  1134 vova         3  20    0 21540K 13496K kserel   1:38  0.00% gkrellm
>  1113 vova         1  96    0 19592K  9604K select   0:17  0.00% at-spi-registryd
>  1327 vova         1  96    0 34464K 24668K select   0:10  0.00% sim
>  1323 vova         1  96    0 24552K 17212K select   0:09  0.00% cpufreq-applet
> ...
>
> No idea what eats these 95.5% of CPU time.
> Same picture on vmstat's pigs screen.
>
> I know that this process is actually evolution, and if I kill it the
> system load drops, but why is it not shown by top (and others)?

As I understand it, if the application does its work in threads, the CPU
statistics are not available for it, so a threaded program can often be
using 100% CPU and show up as barely anything in top. You then have to
guess which of the processes it is likely to be. I get this a lot on my
desktop and it is very annoying. As far as I know it cannot be easily
fixed.

> Any hints about it ?

Thanks,
Dominic
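
P.S. For anyone who wants to poke at this, here is a minimal sketch of
where that WCPU figure comes from. It walks the kern.proc sysctl and
prints the kernel's per-process %CPU estimate, the ki_pctcpu field of
struct kinfo_proc scaled by FSCALE, which is essentially the number
top's WCPU column is derived from. If that estimate is not charged for
the work a process's kernel threads do, nothing that only reads it will
ever show the load. The field names and scaling below are taken from the
system headers; treat it as an illustration, not as exactly what top(1)
does.

#include <sys/param.h>
#include <sys/sysctl.h>
#include <sys/user.h>

#include <err.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef FSCALE
#define FSCALE  (1 << 11)       /* fallback; normally defined in <sys/param.h> */
#endif

int
main(void)
{
        int mib[3] = { CTL_KERN, KERN_PROC, KERN_PROC_ALL };
        struct kinfo_proc *kp;
        size_t len = 0, i, n;

        /* First call sizes the buffer, second call fills it. */
        if (sysctl(mib, 3, NULL, &len, NULL, 0) == -1)
                err(1, "sysctl: size");
        len += len / 8;                 /* slack in case the table grows */
        if ((kp = malloc(len)) == NULL)
                err(1, "malloc");
        if (sysctl(mib, 3, kp, &len, NULL, 0) == -1)
                err(1, "sysctl: data");

        /* One kinfo_proc per process: pid, command, %CPU estimate. */
        n = len / sizeof(*kp);
        for (i = 0; i < n; i++)
                printf("%5d %-19s %6.2f%%\n", (int)kp[i].ki_pid,
                    kp[i].ki_comm, 100.0 * (double)kp[i].ki_pctcpu / FSCALE);

        free(kp);
        return (0);
}

Build it with something like "cc -o pctcpu pctcpu.c" (the file name is
just whatever you save it as). If the busy threaded process shows a
near-zero figure here as well, the accounting itself is where the time
goes missing, not top.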