Date:      Tue, 18 Mar 2003 20:51:29 +0100
From:      Borje Josefsson <bj@dc.luth.se>
To:        freebsd-hackers@freebsd.org
Subject:   High CPU usage on high-bandwidth long distance connections.
Message-ID:  <200303181951.h2IJpTKl001940@dc.luth.se>


Hello,

Scenario:

Two hosts:

*** Host a:
CPU: Intel(R) Xeon(TM) CPU 2.80GHz (2790.96-MHz 686-class CPU)
Hyperthreading: 2 logical CPUs
real memory  = 1073676288 (1048512K bytes)
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 4470
        options=3<rxcsum,txcsum>
        media: Ethernet autoselect (1000baseSX <full-duplex>)

*** Host b:
CPU: Intel(R) Xeon(TM) CPU 2.80GHz (2790.96-MHz 686-class CPU)
Hyperthreading: 2 logical CPUs
real memory  = 536301568 (523732K bytes)

bge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 4470
        options=3<rxcsum,txcsum>
        media: Ethernet autoselect (1000baseSX <full-duplex>)

Both Ethernet cards are PCI-X.

Parameters (for both hosts):

kern.ipc.maxsockbuf=8388608
net.inet.tcp.rfc1323=1
kern.ipc.nmbclusters="8192"
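
(Side note, for completeness: kern.ipc.maxsockbuf is only the ceiling on what a single socket may request; an application can also ask for large buffers itself instead of relying on the global defaults. Below is a minimal sketch of that route in C, with a hypothetical 4 MB figure; it is purely an illustration of the standard setsockopt() interface, not something taken from ttcp.)

/*
 * Minimal sketch: request large per-socket buffers directly via
 * setsockopt().  The 4 MB size is hypothetical; whatever is requested
 * is still subject to the kern.ipc.maxsockbuf limit.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
        int s, bufsize = 4 * 1024 * 1024;       /* hypothetical 4 MB */

        if ((s = socket(AF_INET, SOCK_STREAM, 0)) == -1)
                err(1, "socket");
        if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &bufsize,
            sizeof(bufsize)) == -1)
                err(1, "setsockopt(SO_SNDBUF)");
        if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &bufsize,
            sizeof(bufsize)) == -1)
                err(1, "setsockopt(SO_RCVBUF)");
        printf("requested %d-byte socket buffers\n", bufsize);
        return (0);
}

As far as I know, requests above kern.ipc.maxsockbuf are simply refused, which is why that limit is set to 8 MB above.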

The hosts are connected directly (no LAN equipment in between) to high-capacity backbone routers (10 Gbit/sec backbone), and are approx. 1000 km/625 miles(!) apart. Measuring RTT gives:
RTTmax = 20.64 ms. Buffer size needed = 3.69 Mbytes, so I add 25% and set:

sysctl net.inet.tcp.sendspace=4836562
sysctl net.inet.tcp.recvspace=4836562
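
(As a sanity check on that sizing: the raw bandwidth-delay product for this path is comfortably below the 4836562-byte buffers configured above, so the socket buffers themselves should not be the limiting factor. A minimal sketch of the arithmetic in C follows; the two rates are assumptions, namely GigE line rate and the throughput ttcp reports further down:)

/*
 * Sketch of the sizing arithmetic: to keep the pipe full, the TCP
 * window has to cover at least rate * RTT.  The rates below are
 * assumptions (GigE line rate and the rate measured with ttcp).
 */
#include <stdio.h>

int
main(void)
{
        double rtt = 0.02064;                   /* 20.64 ms RTT, measured */
        double rates[] = { 1000e6, 638.82e6 };  /* bit/s */
        int i;

        for (i = 0; i < 2; i++)
                printf("%7.2f Mbit/s * %.2f ms -> %.2f Mbytes of window\n",
                    rates[i] / 1e6, rtt * 1e3, rates[i] * rtt / 8 / 1e6);
        return (0);
}

That works out to roughly 2.6 Mbytes at line rate and 1.65 Mbytes at the measured rate, both well inside the configured buffers.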

MTU=4470 all the way.

OS = FreeBSD 4-STABLE (as of today).

**** Now the problem:

The receiver works fine, but on the *sender* I run out of CPU (doesn't matter if host a or host b is sender). Measuring bandwidth with ttcp gives:

ttcp-t: buflen=61440, nbuf=30517, align=16384/0, port=5001  tcp
ttcp-t: 1874964480 bytes in 22.39 real seconds = 638.82 Mbit/sec +++
ttcp-t: 30517 I/O calls, msec/call = 0.75, calls/sec = 1362.82
ttcp-t: 0.0user 20.8sys 0:22real 93% 16i+382d 326maxrss 0+15pf 9+280csw

This is very repeatable (within a few %), and is the same regardless of which direction I use.

During that period, the sender shows:

0.0% user,  0.0% nice, 94.6% system,  5.4% interrupt,  0.0% idle

I have read about DEVICE_POLLING, but that doesn't seem to be supported on any GigE PCI-X cards?!?

Does anybody have an idea on which knob to tune next to be able to fill my (long-distance) GigE link? I am mostly interested in how to avoid eating all my CPU, but also in whether there are any other TCP parameters that I haven't thought about.

I have configured my kernel for SMP (Xeon CPU with hyperthreading); I don't know if that is good or bad in this case.

With kind regards,

--Borje






