Date: Fri, 5 May 1995 17:21:45 +0800 (CST)
From: Brian Tao <taob@gate.sinica.edu.tw>
To: FREEBSD-QUESTIONS-L <freebsd-questions@FreeBSD.org>
Subject: Heavy HTTPD serving on 2.0-950412
Message-ID: <Pine.BSI.3.91.950505165249.8963U-100000@aries.ibms.sinica.edu.tw>
As some of you may have seen on comp.unix.bsd.freebsd.misc and
elsewhere, I've been pounding on Apache 0.62 and my FreeBSD box to see
how well it holds up as an extremely loaded Web server. The full
results are at http://140.109.40.248/~taob/fbsd-apache.html for those
interested. I tried a few more tests since then, and a couple of
questions have popped up.
One is the "mb_map_full" problem. I *know* I've seen this
question before, but I didn't save it, and now I need to know how to
increase the number of buffers (if in fact that is the problem). I
get this message when trying to max out the number of simultaneous
HTTP requests. The script I run contains the following line repeated
100 times:
lynx -source -dump http://140.109.40.248/Bench/test.html > /dev/null &
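Rather than pasting that line 100 times, the same script can be generated with a small loop; a sketch, assuming a POSIX-style /bin/sh (the URL is the one from the test above):

```shell
#!/bin/sh
# Generate the 100-request benchmark script instead of repeating
# the lynx line by hand; writes it to bench.sh in the current dir.
i=1
while [ $i -le 100 ]; do
    echo 'lynx -source -dump http://140.109.40.248/Bench/test.html > /dev/null &'
    i=$((i + 1))
done > bench.sh
chmod +x bench.sh
```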
Yes, it is horribly inefficient to use Lynx in this capacity, but
that's beside the point. ;-) If I run this script from a Sparc20 on
the local Ethernet, the load on my server will jump, but
eventually all the requests are served (test.html is a fairly small
file, only a few K in length). However, if I try this from a machine
in Toronto (20 hops away, through several gateways, slowest link being
the 256kbps trans-Pacific link), not a minute passes before I receive
"mb_map_full" kernel messages. Not only that, but I immediately lose
all contact with the network, and I have to reboot to recover. It's
like someone unplugged my Ethernet cable. Why would this happen in
one case but not in the other?
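If "mb_map_full" does mean the kernel's mbuf cluster map is exhausted, one knob worth checking (an assumption based on the 2.x kernel configuration options; the value 1024 is only an example, and the config file path will vary) is NMBCLUSTERS, set in the kernel config file before a config/rebuild:

```
# In the kernel configuration file (e.g. /sys/i386/conf/MYKERNEL);
# option name assumed from the 2.x sources -- pick a value to suit
# the expected number of simultaneous connections.
options         "NMBCLUSTERS=1024"
```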
As my testing continued, I ran into another problem with the
netstat display:
Active Internet connections
Proto Recv-Q Send-Q Local Address Foreign Address (state)
tcp 0 0 aries.8080 gate.sinica.edu..60698 TIME_WAIT
tcp 0 0 aries.8080 ibms.4470 TIME_WAIT
tcp 0 0 aries.8080 ibms.4469 TIME_WAIT
tcp 0 0 aries.8080 gate.sinica.edu..60697 TIME_WAIT
tcp 0 0 aries.8080 ibms.4467 TIME_WAIT
tcp 0 0 aries.8080 gate.sinica.edu..60696 TIME_WAIT
tcp 0 0 aries.8080 ibms.4466 TIME_WAIT
tcp 0 0 aries.8080 gate.sinica.edu..60695 TIME_WAIT
tcp 0 0 aries.8080 ibms.4465 TIME_WAIT
tcp 0 0 aries.8080 gate.sinica.edu..60691 TIME_WAIT
tcp 0 0 aries.8080 ibms.4464 TIME_WAIT
tcp 0 0 aries.8080 gate.sinica.edu..60690 TIME_WAIT
tcp 0 0 aries.8080 ibms.4463 TIME_WAIT
tcp 0 0 aries.8080 gate.sinica.edu..60689 TIME_WAIT
tcp 0 0 aries.8080 ibms.4462 TIME_WAIT
tcp 0 0 aries.8080 gate.sinica.edu..60687 TIME_WAIT
tcp 0 0 aries.8080 ibms.4461 TIME_WAIT
tcp 0 0 aries.8080 gate.sinica.edu..60683 TIME_WAIT
tcp 0 0 aries.8080 ibms.4460 TIME_WAIT
netstat: kvm_read: kvm_read: Bad address
tcp 0 0 aries.8080 gate.sinica.edu..60682 TIME_WAIT
netstat: kvm_read: kvm_read: Bad address
netstat: kvm_read: kvm_read: Bad address
tcp 0 0 14.0.115.3.5888 32.0.0.0.* TIME_WAIT
netstat: kvm_read: kvm_read: Bad address
netstat: kvm_read: kvm_read: Bad address
tcp 0 0 76.1.5.0.5888 68.0.0.0.* TIME_WAIT
netstat: kvm_read: kvm_read: Bad address
netstat: kvm_read: kvm_read: Bad address
tcp 0 0 76.1.5.0.5888 88.0.0.0.* TIME_WAIT
netstat: kvm_read: kvm_read: Bad address
netstat: kvm_read: kvm_read: Bad address
tcp 0 0 76.1.5.0.5888 100.0.0.0.* TIME_WAIT
netstat: kvm_read: kvm_read: Bad address
netstat: kvm_read: kvm_read: Bad address
tcp 0 0 14.0.126.3.5888 32.0.0.0.* TIME_WAIT
netstat: kvm_read: kvm_read: Bad address
netstat: kvm_read: kvm_read: Bad address
tcp 0 0 136.75.127.240.5888 144.254.114.240.* TIME_WAIT
netstat: kvm_read: kvm_read: Bad address
netstat: kvm_read: kvm_read: Bad address
tcp 0 0 8.16.128.240.18765 144.254.114.240.* TIME_WAIT
netstat: kvm_read: kvm_read: Bad address
netstat: kvm_read: kvm_read: Bad address
tcp 0 0 136.162.117.240.5888 16.51.121.240.* TIME_WAIT
netstat: kvm_read: kvm_read: Bad address
netstat: kvm_read: kvm_read: Bad address
tcp 0 0 136.115.122.240.5888 144.250.118.240.* TIME_WAIT
netstat: kvm_read: kvm_read: Bad address
netstat: kvm_read: kvm_read: Bad address
tcp 0 0 76.1.5.0.5888 csc-net.csc.com.* TIME_WAIT
netstat: kvm_read: kvm_read: Bad address
netstat: kvm_read: kvm_read: Bad address
tcp 0 0 0.56.88.240.5888 72.1.0.0.60666 TIME_WAIT
netstat: kvm_read: kvm_read: Bad address
netstat: kvm_read: kvm_read: Bad address
tcp 0 0 14.0.14.4.5888 32.0.0.0.* TIME_WAIT
netstat: kvm_read: kvm_read: Bad address
netstat: kvm_read: kvm_read: Bad address
tcp 0 0 43.0.1.0.1024 4.0.0.0.* TIME_WAIT
netstat: kvm_read: kvm_read: Bad address
netstat: kvm_read: kvm_read: Bad address
tcp 0 0 1.1.91.1.* 32.0.0.0.* TIME_WAIT
netstat: kvm_read: kvm_read: Bad address
netstat: kvm_read: kvm_read: Bad address
tcp 0 0 0.112.85.240.1280 40.0.0.0.60666 TIME_WAIT
netstat: kvm_read: kvm_read: Bad address
^C
gate and ibms are the two Sparcs running the clients, but what
about all those other entries that are flanked by "kvm_read:..."
errors? Running netstat a second time shows a normal output without
those bogus IP addresses. The two Suns had been sending one HTTP
request every five seconds to port 8080 for the past 10 hours (my real
Web server was still running on port 80). I came into the office in
the morning to check on it, and that's when I pulled up netstat.
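Rather than eyeballing the netstat listing, the TIME_WAIT entries can be counted from a saved snapshot ("netstat -an > snap.txt" on the server), which also avoids re-reading kernel memory while the tables are changing underneath; a sketch, using a small stand-in sample in place of the real snapshot:

```shell
# Take one snapshot of the connection table, then analyze it offline;
# here a three-line sample stands in for real "netstat -an" output.
cat > snap.txt <<'EOF'
tcp 0 0 aries.8080 ibms.4470 TIME_WAIT
tcp 0 0 aries.8080 ibms.4469 TIME_WAIT
tcp 0 0 aries.8080 gate.sinica.edu..60697 TIME_WAIT
EOF
grep -c 'TIME_WAIT' snap.txt    # counts lingering connections; prints 3
```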
--
Brian ("Though this be madness, yet there is method in't") Tao
taob@gate.sinica.edu.tw <-- work ........ play --> taob@io.org
