Date: Sun, 10 Oct 2010 19:27:05 +0300
From: Коньков Евгений <kes-kes@yandex.ru>
To: Ian Smith <smithi@nimnet.asn.au>
Cc: freebsd-questions@freebsd.org
Subject: Re[3]: How to obtain which interrupts cause system to hang?
Message-ID: <632460655.20101010192705@yandex.ru>
In-Reply-To: <20101010194711.Y2036@sola.nimnet.asn.au>
References: <20101009204915.0360410656F1@hub.freebsd.org> <20101010161330.R2036@sola.nimnet.asn.au> <1076883893.20101010105041@yandex.ru> <20101010194711.Y2036@sola.nimnet.asn.au>
Hi, Ian.
IS> On Sun, 10 Oct 2010, Коньков Евгений wrote:
>> >> #systat -v
>> >> 1 users Load 0.74 0.71 0.55 Oct 9 19:53
>> IS> [..]
>> >> Proc: Interrupts
>> >> r p d s w Csw Trp Sys Int Sof Flt 24 cow 2008 total
>> >> 2 3 39 23k 67 563 9 1710 47 15 zfod 9 ata0 irq14
>> >> ozfod nfe0 irq23
>> >> 23.1%Sys 50.8%Intr 1.3%User 0.0%Nice 24.8%Idle %ozfod 1999 cpu0: time
>> >> | | | | | | | | | | | daefr
>> >> ============+++++++++++++++++++++++++> 6 prcfr
>>
>> IS> Yes, system and esp. interrupt time is heavy .. 23k context switches!?
>>
>> IS> In addition to b. f.'s good advice .. as you later said, 2000 Hz slicing
>> IS> _should_ be ok, unless a slow CPU? Or perhaps a fast CPU throttled back
>> IS> too far .. powerd? Check sysctl dev.cpu.0.freq while this is happening.
>>
>> IS> Disable p4tcc if it's a modern CPU; that usually hurts more than helps.
>> IS> Disable polling if you're using that .. you haven't provided much info,
>> IS> like is this with any network load, despite nfe0 showing no interrupts?
>> Polling is ON. Traffic is about 60 Mbit/s routed from nfe0 to vlan4 on rl0.
>> When the interrupts happen, traffic slows down to 25-30 Mbit/s.
IS> Out of my depth. If it's a net problem - maybe not - you may do better
IS> in freebsd-net@ if you provide enough information (dmesg plus ifconfig,
IS> vmstat -i etc, normally and while this problem is happening).
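For collecting the data Ian asks for, a quick way to spot the busiest interrupt source is to sort `vmstat -i` output by its rate column. A minimal sketch follows; the here-doc is a hypothetical sample standing in for real `vmstat -i` output, so on the affected machine the pipeline would be fed from `vmstat -i` directly:

```shell
#!/bin/sh
# Sketch: pick the busiest interrupt source from vmstat -i style output.
# The sample below is hypothetical; replace it with real `vmstat -i`.
sample='interrupt                          total       rate
irq14: ata0                      1122133        123
irq23: nfe0                     99887766       4521
cpu0: timer                    123456789       2000'

# Skip the header line, then sort numerically by the last column (rate)
# and keep the top offender.
busiest=$(printf '%s\n' "$sample" | awk 'NR > 1 {print $NF, $1, $2}' \
    | sort -rn | head -1)
echo "busiest: $busiest"
```

Capturing one such snapshot under normal load and another while the slowdown is happening makes the comparison for freebsd-net@ much easier.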
>> There is no p4tcc option in the kernel config file.
IS> No, it can be enabled by cpufreq(4). See dmesg for acpi_throttle or
IS> p4tcc, but it looks like you might not have device cpufreq in your
IS> kernel or loaded, or dev.cpu.0.freq and more would have shown below.
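If `device cpufreq` really is missing from the kernel, it can also be loaded as a module. A hypothetical /boot/loader.conf fragment (assuming a custom kernel built without `device cpufreq`):

```
# /boot/loader.conf -- load the cpufreq(4) driver at boot
cpufreq_load="YES"
```

After a reboot (or an immediate `kldload cpufreq`), `sysctl dev.cpu.0.freq dev.cpu.0.freq_levels` should start showing the current frequency and the available levels, which would confirm or rule out throttling.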
>> Disabling/enabling polling does not help; the situation stays the same.
>> sysctl -a | grep freq
>> kern.acct_chkfreq: 15
>> kern.timecounter.tc.i8254.frequency: 1193182
>> kern.timecounter.tc.ACPI-fast.frequency: 3579545
>> kern.timecounter.tc.TSC.frequency: 1809280975
>> net.inet.sctp.sack_freq: 2
>> debug.cpufreq.verbose: 0
>> debug.cpufreq.lowest: 0
>> machdep.acpi_timer_freq: 3579545
>> machdep.tsc_freq: 1809280975
>> machdep.i8254_freq: 1193182
IS> Only useful for what it doesn't show :)
>> >> How can I find out what nasty thing is happening, i.e. which process
>> >> takes 36-50% of the CPU?
>>
>> IS> Try 'top -S'. It's almost certainly system process[es], not shown above.
IS> Does that not show anything? Also, something like 'ps auxww | less'
IS> should show you what's using all that CPU. I'm out of wild clues.
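Since `ps auxww` prints %CPU in its third column, sorting on that column surfaces the hog quickly. A sketch over a hypothetical sample (the here-doc stands in for live `ps auxww` output):

```shell
#!/bin/sh
# Sketch: rank processes by %CPU from ps(1) aux-style output.
# The sample is hypothetical; feed the pipeline from `ps auxww` for real.
sample='USER  PID %CPU %MEM COMMAND
root   11 86.4  0.0 idle: cpu0
root   14 10.2  0.0 swi1: net
root    2  4.6  0.0 ng_queue0'

# Drop the header, sort descending on column 3 (%CPU), show the top entry.
top_proc=$(printf '%s\n' "$sample" | awk 'NR > 1' | sort -rnk3 | head -1)
echo "$top_proc"
```

On a system in this state, kernel threads like `swi1: net` or `ng_queue0` near the top (rather than a user process) would point at interrupt or netgraph load rather than a runaway daemon.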
vpn_shadow# top -S
last pid: 57879; load averages: 0.12, 0.06, 0.05 up 1+18:37:39 19:19:14
101 processes: 2 running, 83 sleeping, 16 waiting
CPU: 0.0% user, 0.0% nice, 14.3% system, 17.3% interrupt, 68.4% idle
Mem: 319M Active, 799M Inact, 354M Wired, 336K Cache, 213M Buf, 503M Free
Swap: 4063M Total, 4063M Free
PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
11 root 1 171 ki31 0K 16K RUN 24.9H 86.47% idle: cpu0
14 root 1 -44 - 0K 16K WAIT 689:52 10.25% swi1: net
2 root 1 -68 - 0K 16K sleep 207:35 4.69% ng_queue0
40 root 1 -68 - 0K 16K - 101:37 1.46% dummynet
47 root 1 20 - 0K 16K syncer 5:29 0.29% syncer
12 root 1 -32 - 0K 16K WAIT 14:48 0.00% swi4: clock sio
15 root 1 -16 - 0K 16K - 5:39 0.00% yarrow
986 root 1 44 0 5692K 1408K select 1:29 0.00% syslogd
1054 bind 4 4 0 138M 113M kqread 1:22 0.00% named
1162 clamav 1 4 0 4616K 1468K accept 0:59 0.00% smtp-gated
--
Best regards,
Коньков                          mailto:kes-kes@yandex.ru
