Date:      Tue, 17 Jan 2012 00:09:16 +0000
From:      "Robert N. M. Watson" <rwatson@FreeBSD.org>
To:        Коньков Евгений <kes-kes@yandex.ru>
Cc:        freebsd-bugs@FreeBSD.org, bz@FreeBSD.org
Subject:   Re: misc/164130: broken netisr initialization
Message-ID:  <737885D7-5DC2-4A0D-A5DF-4A380D035648@FreeBSD.org>
In-Reply-To: <68477246.20120115000025@yandex.ru>
References:  <201201142126.q0ELQVbZ087496@freefall.freebsd.org> <68477246.20120115000025@yandex.ru>


On 14 Jan 2012, at 22:00, Коньков Евгений wrote:

> Also, in r222249 the following things are broken:
>
> 1. With net.isr.dispatch = deferred,
>
> the intr{swiX: netisr X} threads are always in state 'WAIT'.

Thanks for your (multiple) e-mails. I will catch up on the remainder of
the thread tomorrow, having returned from travel today, but wanted to
point you at "netstat -Q", which will allow you to test more directly
which dispatch policy is actually in effect. It lets you inspect, for
each netisr thread, counters of directly dispatched vs. deferred
packets. Relying on sampled CPU use can be quite misleading, as
dispatch policies can have counter-intuitive effects on performance and
CPU use; directly monitoring the cause, rather than the effect, is more
reliable for debugging purposes.
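
For example (a rough sketch; the exact column labels in the "netstat -Q"
output vary somewhat between FreeBSD versions):

    # Confirm which dispatch policy is actually in effect.
    sysctl net.isr.dispatch

    # Per-workstream netisr statistics, including counters of packets
    # that were directly dispatched vs. queued to the netisr threads.
    netstat -Q

If the policy really is "deferred", the queued counters should grow
while the direct-dispatch counters stay flat, regardless of what the
sampled CPU figures in top(1) suggest.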

Robert


>
>  PID USERNAME   PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
>   11 root       155 ki31     0K    32K RUN     1  25:02 87.16% idle{idle: cpu1}
>   11 root       155 ki31     0K    32K CPU0    0  25:08 86.72% idle{idle: cpu0}
>   11 root       155 ki31     0K    32K CPU2    2  24:23 83.50% idle{idle: cpu2}
>   11 root       155 ki31     0K    32K CPU3    3  24:47 81.93% idle{idle: cpu3}
>   12 root       -92    -     0K   248K WAIT    3   0:59  6.54% intr{irq266: re0}
> 3375 root        40    0 15468K  6504K select  2   1:03  4.98% snmpd
>   12 root       -72    -     0K   248K WAIT    3   0:28  3.12% intr{swi1: netisr 1}
>   12 root       -60    -     0K   248K WAIT    0   0:34  1.71% intr{swi4: clock}
>   12 root       -72    -     0K   248K WAIT    3   0:27  1.71% intr{swi1: netisr 3}
>   12 root       -72    -     0K   248K WAIT    1   0:20  1.37% intr{swi1: netisr 0}
>    0 root       -92    0     0K   152K -       2   0:30  0.98% kernel{dummynet}
>   12 root       -72    -     0K   248K WAIT    3   0:13  0.88% intr{swi1: netisr 2}
>   13 root       -92    -     0K    32K sleep   1   0:11  0.24% ng_queue{ng_queue3}
>   13 root       -92    -     0K    32K sleep   1   0:11  0.10% ng_queue{ng_queue0}
>   13 root       -92    -     0K    32K sleep   1   0:11  0.10% ng_queue{ng_queue1}
>
>
> 2. There is no CPU load difference between dispatch methods. I have
> tested two, direct and deferred (see the picture):
>
> http://piccy.info/view3/2482121/cc6464fbe959fd65ecb5a8b94a23ec38/orig/
>
> The 'deferred' method behaves the same as the 'direct' method!
>
>



