Date:      Tue, 17 Jan 2012 19:41:13 +0200
From:      Коньков Евгений <kes-kes@yandex.ru>
To:        "Robert N. M. Watson" <rwatson@FreeBSD.org>
Cc:        freebsd-bugs@FreeBSD.org, bz@FreeBSD.org
Subject:   Re[2]: misc/164130: broken netisr initialization
Message-ID:  <154594163.20120117194113@yandex.ru>
In-Reply-To: <737885D7-5DC2-4A0D-A5DF-4A380D035648@FreeBSD.org>
References:  <201201142126.q0ELQVbZ087496@freefall.freebsd.org> <68477246.20120115000025@yandex.ru> <737885D7-5DC2-4A0D-A5DF-4A380D035648@FreeBSD.org>

Hello, Robert.

You wrote on 17 January 2012 at 2:09:16:


RNMW> On 14 Jan 2012, at 22:00, Коньков Евгений wrote:

>> also in r222249 next things are broken:
>> 
>> 1. in net.isr.dispatch = deferred
>> 
>> intr{swiX: netisr X} always have state 'WAIT'

RNMW> Thanks for your (multiple) e-mails. I will catch up on the
RNMW> remainder of the thread tomorrow, having returned from travel
RNMW> today, but wanted to point you at "netstat -Q", which will allow
RNMW> you to more directly test what dispatch policy is being
RNMW> implemented. It allows you to directly inspect counters for
RNMW> directly dispatched vs. deferred packets with each netisr
RNMW> thread. Relying on sampled CPU use can be quite misleading, as
RNMW> dispatch policies can have counter-intuitive effects on
RNMW> performance and CPU use; directly monitoring the cause, rather
RNMW> than the effect, would be more reliable for debugging purposes.

netstat -Q is a powerful option =) thank you

# netstat -Q
Configuration:
Setting                        Current        Limit
Thread count                         4            4
Default queue limit                256        10240
Direct dispatch               disabled          n/a
Forced direct dispatch        disabled          n/a
Threads bound to CPUs         disabled          n/a

Protocols:
Name   Proto QLimit Policy Flags
ip         1   1024   flow   ---
igmp       2    256 source   ---
rtsock     3    256 source   ---
arp        7    256 source   ---
ip6       10    256   flow   ---

Workstreams:
WSID CPU   Name     Len WMark   Disp'd  HDisp'd   QDrops   Queued  Handled
   0   0   ip         0   790        0        0        0 2651251162 2651251162
   0   0   igmp       0     0        0        0        0        0        0
   0   0   rtsock     0    94        0        0        0   249165   249165
   0   0   arp        0     7        0        0        0   390148   390148
   0   0   ip6        0     4        0        0        0   116749   116749
   1   1   ip         0  1024        0        0   457475 7364624199 7364624196
   1   1   igmp       0     0        0        0        0        0        0
   1   1   rtsock     0     0        0        0        0        0        0
   1   1   arp        0    13        0        0        0   725393   725393
   1   1   ip6        0     8        0        0        0   294957   294957
   2   2   ip         0  1024        0        0     7321 4744097227 4744097226
   2   2   igmp       0     0        0        0        0        0        0
   2   2   rtsock     0     0        0        0        0        0        0
   2   2   arp        0    11        0        0        0  2057994  2057994
   2   2   ip6        0     6        0        0        0   369356   369356
   3   3   ip         1  1024        0        0 13563856 7101659355 7101659350
   3   3   igmp       0     0        0        0        0        0        0
   3   3   rtsock     0     0        0        0        0        0        0
   3   3   arp        0    10        0        0        0   281901   281901
   3   3   ip6        0     6        0        0        0   125781   125781

Note that even while netisr3 has drops, netisr0 is about 70% idle.
This is FreeBSD 9 from 18 May 2011.

10.0-CURRENT #12 r230128
# netstat -Q
Configuration:
Setting                        Current        Limit
Thread count                         4            4
Default queue limit                256        10240
Dispatch policy               deferred          n/a
Threads bound to CPUs         disabled          n/a

Protocols:
Name   Proto QLimit Policy Dispatch Flags
ip         1   1024   flow  default   ---
igmp       2    256 source  default   ---
rtsock     3    256 source  default   ---
arp        7    256 source  default   ---
ether      9    256 source   direct   ---
ip6       10    256   flow  default   ---

Workstreams:
WSID CPU   Name     Len WMark   Disp'd  HDisp'd   QDrops   Queued  Handled
   0   0   ip         0    22        0        0        0  3946721  3946721
   0   0   igmp       0     0        0        0        0        0        0
   0   0   rtsock     0     3        0        0        0    10765    10765
   0   0   arp        0     1        0        0        0      162      162
   0   0   ether      0     0        0        0        0        0        0
   0   0   ip6        0     0        0        0        0        0        0
   1   1   ip         0    47        0        0        0 10445758 10445758
   1   1   igmp       0     0        0        0        0        0        0
   1   1   rtsock     0     0        0        0        0        0        0
   1   1   arp        0     1        0        0        0    10356    10356
   1   1   ether      0     0        0        0        0        0        0
   1   1   ip6        0     0        0        0        0        0        0
   2   2   ip         0    31        0        0        0  8141229  8141229
   2   2   igmp       0     0        0        0        0        0        0
   2   2   rtsock     0     0        0        0        0        0        0
   2   2   arp        0     1        0        0        0    32350    32350
   2   2   ether      0     0        0        0        0        0        0
   2   2   ip6        0     0        0        0        0        0        0
   3   3   ip         0    47        0        0        0 25638742 25638742
   3   3   igmp       0     0        0        0        0        0        0
   3   3   rtsock     0     0        0        0        0        0        0
   3   3   arp        0     1        0        0        0      410      410
   3   3   ether      0     0 42717014        0        0        0 42717014
   3   3   ip6        0     1        0        0        0      384      384

Only netisr3 is loaded.
And a question: ip works over ethernet, so how can you distinguish ip
from ether???

man netisr
....
     NETISR_POLICY_FLOW    netisr should maintain flow ordering as defined by
                           the mbuf header flow ID field.  If the protocol
                           implements nh_m2flow, then netisr will query the
                           protocol in the event that the mbuf doesn't have a
                           flow ID, falling back on source ordering.

     NETISR_POLICY_CPU     netisr will entirely delegate all work placement
                           decisions to the protocol, querying nh_m2cpuid for
                           each packet.

_FLOW: the description says that the cpuid is derived from the flow.
_CPU: here the decision of which CPU to use is delegated to the protocol.
Maybe it would be clearer to name it NETISR_POLICY_PROTO???
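
For context, the delegation is wired up when a protocol registers its
handler. ip, for example, opts into the flow policy roughly like this
(sketch from memory of sys/netinet/ip_input.c; details may differ by
revision):

/* Sketch of protocol registration; the nh_policy field selects the
 * placement policy, and nh_m2flow/nh_m2cpuid would be set here too
 * for protocols that implement their own mapping. */
static struct netisr_handler ip_nh = {
        .nh_name = "ip",
        .nh_handler = ip_input,
        .nh_proto = NETISR_IP,
        .nh_policy = NETISR_POLICY_FLOW,
};

netisr_register(&ip_nh);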

And a BIG QUESTION: why do you allow somebody (flow, proto) to make these
placement decisions??? That seems wrong: a bad implementation/decision on
their side may schedule packets onto only some CPUs, so one CPU will be
overloaded (0% idle) while another is free (100% idle).
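
To illustrate the risk with a stand-alone example (hypothetical, not
kernel code): if a protocol's CPU-mapping callback only looks at the low
bits of the flow ID, and the hardware hands out flow IDs in strides of
the CPU count, every packet lands on one CPU:

#include <stdio.h>

#define NCPU 4

/* Hypothetical bad mapping callback: truncates the flow ID. */
static unsigned
bad_m2cpuid(unsigned flowid)
{
        return (flowid % NCPU);
}

int
main(void)
{
        unsigned load[NCPU] = { 0 };

        /* Hardware that assigns flow IDs in strides of 4. */
        for (unsigned i = 0; i < 1000; i++)
                load[bad_m2cpuid(i * 4)]++;
        for (unsigned c = 0; c < NCPU; c++)
                printf("cpu%u: %u packets\n", c, load[c]);
        return (0);
}

This prints 1000 packets on cpu0 and zero on the others.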

netisr.c(50)
> * Enforcing ordering limits the opportunity for concurrency, but maintains
> * the strong ordering requirements found in some protocols, such as TCP.
TCP does not require strong ordering!!! Maybe you mean UDP?

To get full concurrency you must put a new flowid on a free CPU and
remember the cpuid for that flow.

Just hash the packet flow onto the number of threads (net.isr.numthreads):

nws_array[flowid] = hash(flowid, sourceid, ifp->if_index, source);
if (cpuload(nws_array[flowid]) > 99)
        nws_array[flowid]++;    /* queue packet to another CPU */

That would be just ten lines of code instead of the 50 in your case.
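
A minimal stand-alone sketch of this idea (hypothetical throughout:
cpuload() is a stand-in oracle, and the table and hash details are mine,
not FreeBSD code):

#include <stdint.h>
#include <stdio.h>

#define NTHREADS        4       /* stands in for net.isr.numthreads */
#define FLOW_TABLE_SIZE 1024
#define UNASSIGNED      0xffffffffu

static uint32_t flow_cpu[FLOW_TABLE_SIZE];

/* Hypothetical oracle: percentage of time the CPU is busy. */
static unsigned
cpuload(uint32_t cpu)
{
        (void)cpu;
        return (0);
}

/* Simple mixing hash over the inputs named in the pseudocode above. */
static uint32_t
mix(uint32_t flowid, uint32_t if_index, uint32_t source)
{
        uint32_t h = flowid * 2654435761u;

        h ^= if_index * 40503u;
        h ^= source * 2246822519u;
        return (h);
}

/* Hash a new flow onto a thread; spill a flow to the next CPU when its
 * current CPU is saturated, and remember the new binding. */
static uint32_t
pick_cpu(uint32_t flowid, uint32_t if_index, uint32_t source)
{
        uint32_t slot = flowid % FLOW_TABLE_SIZE;

        if (flow_cpu[slot] == UNASSIGNED)
                flow_cpu[slot] = mix(flowid, if_index, source) % NTHREADS;
        if (cpuload(flow_cpu[slot]) > 99)
                flow_cpu[slot] = (flow_cpu[slot] + 1) % NTHREADS;
        return (flow_cpu[slot]);
}

int
main(void)
{
        for (unsigned i = 0; i < FLOW_TABLE_SIZE; i++)
                flow_cpu[i] = UNASSIGNED;
        printf("flow 42 -> cpu %u\n", pick_cpu(42, 1, 0));
        return (0);
}

One caveat with this sketch: rebinding an active flow can reorder
packets that are already queued on the old CPU.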

>>>> Also notice that you have:
/*
 * Utility routines for protocols that implement their own mapping of flows
 * to CPUs.
 */
u_int
netisr_get_cpucount(void)
{

        return (nws_count);
}

but you do not use it! That breaks encapsulation.
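
If I understand it right, these routines are meant for protocols that
implement NETISR_POLICY_CPU; a hypothetical nh_m2cpuid could use them
like this (my sketch, not code from the tree):

/* Hypothetical nh_m2cpuid callback: spread flows over however many
 * netisr threads exist, using the utility routines quoted above. */
static struct mbuf *
example_m2cpuid(struct mbuf *m, uintptr_t source, u_int *cpuid)
{

        /* 'source' is unused in this sketch. */
        *cpuid = netisr_get_cpuid(m->m_pkthdr.flowid %
            netisr_get_cpucount());
        return (m);
}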

>>> netisr_dispatch_src
I think there is too much code here just to make the dispatch decision.
It should be simpler.


>>
>>  PID USERNAME   PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
>>   11 root       155 ki31     0K    32K RUN     1  25:02 87.16% idle{idle: cpu1}
>>   11 root       155 ki31     0K    32K CPU0    0  25:08 86.72% idle{idle: cpu0}
>>   11 root       155 ki31     0K    32K CPU2    2  24:23 83.50% idle{idle: cpu2}
>>   11 root       155 ki31     0K    32K CPU3    3  24:47 81.93% idle{idle: cpu3}
>>   12 root       -92    -     0K   248K WAIT    3   0:59  6.54% intr{irq266: re0}
>> 3375 root        40    0 15468K  6504K select  2   1:03  4.98% snmpd
>>   12 root       -72    -     0K   248K WAIT    3   0:28  3.12% intr{swi1: netisr 1}
>>   12 root       -60    -     0K   248K WAIT    0   0:34  1.71% intr{swi4: clock}
>>   12 root       -72    -     0K   248K WAIT    3   0:27  1.71% intr{swi1: netisr 3}
>>   12 root       -72    -     0K   248K WAIT    1   0:20  1.37% intr{swi1: netisr 0}
>>    0 root       -92    0     0K   152K -       2   0:30  0.98% kernel{dummynet}
>>   12 root       -72    -     0K   248K WAIT    3   0:13  0.88% intr{swi1: netisr 2}
>>   13 root       -92    -     0K    32K sleep   1   0:11  0.24% ng_queue{ng_queue3}
>>   13 root       -92    -     0K    32K sleep   1   0:11  0.10% ng_queue{ng_queue0}
>>   13 root       -92    -     0K    32K sleep   1   0:11  0.10% ng_queue{ng_queue1}
>> 
>> 
>> 2. There is no cpu load differences between dispatch methods. I have
>> tested two: direct and deferred (see on picture)
>> 
>> http://piccy.info/view3/2482121/cc6464fbe959fd65ecb5a8b94a23ec38/orig/
>> 
>> 'deferred' method works same as 'direct' method!
>> 
>> 


Also I want to ask you: please help me find documentation about netisr
scheduling and the full packet flow through the kernel
(packet input -> kernel -> packet output), with more description of what
happens to a packet while it passes through the router.

Thank you


-- 
Best regards,
 Коньков                          mailto:kes-kes@yandex.ru



