Date:      Sat, 10 Jan 2015 20:36:10 +0600
From:      info@aknet.kg
To:        <freebsd-net@freebsd.org>
Subject:   Netmap-Ipfw: dramatic performance degradation after certain load and ruleset insertions
Message-ID:  <bb866811ca75ca36357a4bc8bb7ff6a7@aknet.kg>

Hello all (netmap dev team)!

We use a netmap-ipfw server for traffic pre-processing in front of the main
filtering bridge (ordinary dummynet), and we have run into a situation where
netmap-ipfw's performance degrades dramatically (point B below) from stable
operation (point A below) after certain changes to the ruleset.

A. The server with netmap-ipfw runs very stably with the following settings:

1. kipfw is started as ./kipfw netmap:ix0 netmap:ix1
2. current state of the sysctl variables (see the persistence sketch at the
end of section A):
...
dev.netmap.ring_size: 128000
dev.netmap.ring_curr_size: 128000
dev.netmap.buf_curr_size: 4096
dev.netmap.buf_num: 896000
dev.netmap.buf_curr_num: 896000
dev.netmap.ix_rx_miss: 2020539039
dev.netmap.ix_rx_miss_bufs: 1478979631
....
in /boot/loader.conf :
hw.ix.rxd=4096
hw.ix.txd=4096

3. traffic on one of the interfaces (ix1):

root@bridge-netmap:/usr/local/netmap-ipfw/ipfw # netstat -bdh -w1 -I ix1
            input            ix1           output
   packets  errs idrops      bytes    packets  errs      bytes colls drops
      628K     0     0       737M       517K     0       106M     0     0
      636K     0     0       749M       522K     0       105M     0     0
      633K     0     0       744M       527K     0       105M     0     0
      635K     0     0       744M       523K     0       107M     0     0

4. system load:

CPU 0: 58.7% user,  0.0% nice, 33.5% system,  3.9% interrupt,  3.9% idle
CPU 1:  0.0% user,  0.0% nice,  0.0% system,  3.9% interrupt, 96.1% idle
CPU 2:  0.0% user,  0.0% nice,  0.0% system,  3.5% interrupt, 96.5% idle
CPU 3:  0.0% user,  0.0% nice,  0.4% system,  2.8% interrupt, 96.9% idle
Mem: 22M Active, 243M Inact, 4125M Wired, 814M Buf, 3429M Free
Swap: 4096M Total, 4096M Free

  PID USERNAME   PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
  853 root       101    0  7087M   415M CPU0    0  17.5H  93.90% kipfw

Sometimes kipfw takes all 100% of the first core (for 20-40 seconds), but
without any effect on traffic volume.
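
For reference, a minimal sketch of how the settings from points 1-2 could be
kept across reboots; whether dev.netmap.ring_size and dev.netmap.buf_num
accept these values from /etc/sysctl.conf at boot is an assumption here, not
something we have verified:

# /boot/loader.conf
hw.ix.rxd=4096
hw.ix.txd=4096

# /etc/sysctl.conf (values as shown in point 2)
dev.netmap.ring_size=128000
dev.netmap.buf_num=896000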

B. The server becomes unstable when we add the following rules to the
ruleset:
./ipfw pipe 10 config mask dst-ip 0xffffffff bw 5120Kbit/s
./ipfw add pipe 10 ip from any to 192.168.0.0/16

With these rules we assign the same bandwidth to each IP from
192.168.0.0/16 (approximately 20K addresses in total).
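
To confirm how many per-destination dynamic queues the mask actually
creates, something like the following can be used (assuming the userspace
ipfw shipped with netmap-ipfw understands the same show syntax as the
in-kernel one):

./ipfw pipe 10 show           # lists the pipe plus one line per active dynamic queue
./ipfw pipe 10 show | wc -l   # rough count of dynamic queues, ~20K expected here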

After the rules are entered, the server works fine for a few minutes. During
this time the RES figure grows (slowly) from 415M to about 500M, then RES
jumps from roughly 510M to 540M, network traffic drops by at least half,
kipfw takes 100% CPU, packet drops become large (up to 30%) and latency
grows.
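
A minimal sketch of how the RES growth can be logged alongside the traffic
drop (it assumes a single kipfw process, so pgrep returns exactly one PID):

# log kipfw resident/virtual size every 10 seconds
while :; do
    echo "$(date '+%H:%M:%S') $(ps -o rss=,vsz= -p $(pgrep kipfw))"
    sleep 10
done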

root@bridge-netmap:/usr/local/etc/rc.d # netstat -bdh -w1 -I ix1
            input            ix1           output
   packets  errs idrops      bytes    packets  errs      bytes colls drops
      648K     0     0       762M       534K     0       110M     0     0
      643K     0     0       755M       529K     0       111M     0     0
      648K     0     0       763M       534K     0       112M     0     0
      648K     0     0       760M       537K     0       113M     0     0
      215K     0     0       240M       211K     0        35M     0     0    !!!! (rules were entered into the ruleset)
      220K     0     0       235M       210K     0        34M     0     0
      212K     0     0       236M       211K     0        35M     0     0
      215K     0     0       240M       212K     0        35M     0     0

Is there perhaps a limit (for example 512 MB) on memory usage? If so, can it
be increased?
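
In case it is relevant, a minimal sketch of how the limits the running kipfw
is subject to could be checked (853 is the kipfw PID from the top output
above; availability of procstat -l depends on the FreeBSD version):

procstat -l 853   # per-process resource limits (data size, RSS, etc.)
ulimit -a         # run in the shell kipfw was started from to see inherited limits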

We can provide any additional information that would be useful for
investigating this issue.

Azamat
IT Dep
AkNet ISP


