Date:      Mon, 21 Apr 2003 11:24:29 +0200
From:      Borje Josefsson <bj@dc.luth.se>
To:        "Jin Guojun [NCS]" <j_guojun@lbl.gov>
Cc:        freebsd-performance@freebsd.org
Subject:   Re: patch for test (Was: tcp_output starving -- is due to mbuf  get delay?)
Message-ID:  <200304210924.h3L9OT2F032404@dc.luth.se>
In-Reply-To: Your message of Mon, 21 Apr 2003 10:27:59 +0200. <200304210827.h3L8Rx2F032265@dc.luth.se>

On Mon, 21 Apr 2003 10:27:59 +0200 Borje Josefsson wrote:

> This patch definitively works, and gives much higher PPS (32000
> instead of 19000). This is on a low-end system (PIII 900MHz with
> 33MHz bus), I'll test one of my larger systems later today.


OK. I have now tested on a larger system.

Result is better than without the patch, but *not* as good as (for
example) NetBSD or Linux.

Value            Before patch   After patch   NetBSD
Mbit/sec         617            838           921
PPS (MTU=4470)   20000          27500         28000

The problem is (still) that I run out of CPU on the FreeBSD *sender*.
This doesn't happen on NetBSD (same hardware). The hardware is Xeon
2.8 GHz, PCI-X bus, connected directly to the core routers of a 10 Gbps
network. RTT=21 ms, MTU=4470. OS=FreeBSD 4.8RC with your patch applied.
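
(For reference: at 21 ms RTT a single 1 Gbit/s stream has to keep
roughly 0.021 s * 125 MB/s = ~2.6 MB in flight, so the socket buffers
and window need to be at least that large before the sender CPU even
becomes the limiting factor.)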

wilma % vmstat 1 (edited to shorten lines)

   memory      page                      faults      cpu
  avm    fre  flt  re  pi  po  fr  sr   in   sy  cs us sy id
 8608 977836    4   0   0   0   0   0  233   20   7  0  2 98
12192 977836    4   0   0   0   0   0  237   59  16  0  1 99
12192 977836    4   0   0   0   0   0  233   20   8  0  2 98
12636 977608   78   0   0   0   7   0 2377  870 241  0 28 72
12636 977608    4   0   0   0   0   0 6522 1834  19  0 100  0
12636 977608    4   0   0   0   0   0 6531 1816  19  0 100  0
12636 977608    4   0   0   0   0   0 6499 1827  19  0 100  0
12636 977608    4   0   0   0   0   0 6575 1821  21  0 100  0
13044 977608    6   0   0   0   0   0 6611 1825  21  0 100  0
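
(In the vmstat output, "in" is interrupts/sec, "sy" is syscalls/sec,
"cs" is context switches/sec, and the last three columns are
user/system/idle CPU percentages -- i.e. the box goes to 100% system
time as soon as the transfer starts.)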

top(1) shows:

CPU states:  0.0% user,  0.0% nice, 93.4% system,  6.6% interrupt,  0.0% idle
Mem: 6136K Active, 8920K Inact, 34M Wired, 64K Cache, 9600K Buf, 954M Free
Swap: 2048M Total, 2048M Free

  PID USERNAME PRI NICE  SIZE   RES STATE TIME   WCPU    CPU COMMAND
  215 root      43   0  1024K  652K RUN   0:11 92.37% 39.11% ttcp

Compare that to when I use NetBSD as sender:

CPU states:  0.0% user,  0.0% nice,  6.5% system,  5.5% interrupt, 88.1% idle
Memory: 39M Act, 12K Inact, 628K Wired, 2688K Exec, 5488K File, 399M Free
Swap: 1025M Total, 1025M Free

  PID USERNAME PRI NICE   SIZE  RES STATE  TIME   WCPU    CPU COMMAND
17938 root       2    0   204K 688K netio  0:00  7.80%  1.42% ttcp
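
For context: the ttcp transmit side is basically just a tight write()
loop over a connected TCP socket, so practically all of the CPU charged
to it on the FreeBSD box is being spent in the kernel socket/TCP/mbuf
path rather than in user space. Roughly like this (a sketch, not the
actual ttcp source -- receiver address, port and buffer sizes are made
up for illustration):

/* Rough ttcp-style bulk sender: connect, then write() forever. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <err.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    struct sockaddr_in sin;
    static char buf[65536];                 /* one write() per 64 kB chunk */
    int s, sndbuf = 2 * 1024 * 1024;        /* ~BDP-sized send buffer */

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(5001);                     /* example port */
    sin.sin_addr.s_addr = inet_addr("10.0.0.2");    /* example receiver */

    if ((s = socket(AF_INET, SOCK_STREAM, 0)) == -1)
        err(1, "socket");
    if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf)) == -1)
        err(1, "setsockopt");
    if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) == -1)
        err(1, "connect");

    for (;;)        /* sender CPU is burned inside this write() */
        if (write(s, buf, sizeof(buf)) == -1)
            err(1, "write");
}

With MTU 4470 each 64 kB write gets chopped into roughly 15 segments,
which is presumably where the per-packet tcp_output/mbuf cost discussed
in this thread shows up.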

The "slow ramping" effect that I described in my earlier letter is not at=
 =

all as visible here, so that might be something else (my small test syste=
m =

has some switches between itself and the core).

 bge0 in       bge0 out              total in      total out
 packets  errs  packets  errs colls   packets  errs  packets  errs colls
       6     0        4     0     0         7     0        4     0     0
   18364     0    12525     0     0     18364     0    12525     0     0
   27664     0    18861     0     0     27665     0    18861     0     0
   27511     0    18749     0     0     27511     0    18749     0     0
   27281     0    18572     0     0     27282     0    18572     0     0

Net result: Much better, but not as good as the "competitors"...

--Börje


