Date:      Mon, 28 Feb 2000 16:06:33 -0500
From:      Graeme Tait <graeme@echidna.com>
To:        Alfred Perlstein <bright@wintelcom.net>
Cc:        Jerry Preeper <preeper@cts.com>, freebsd-questions@freebsd.org
Subject:   Re: load testing a web server and network connectivity
Message-ID:  <38BAE359.FF25D8F6@echidna.com>
References:  <3.0.5.32.20000225170727.00b54430@crash.cts.com> <20000225175430.O21720@fw.wintelcom.net>

What web server are you testing, in what environment?

I think the most important thing in accurately simulating real-world web
server load is being able to simulate multiple slow connections correctly,
assuming you are dealing with users accessing the server via the Internet.

Most Internet users access a web server through a limited-speed connection
(typically a dial-up modem). The speed of the individual connections, together
with the request rate and the volume of data per request, determines the
number of simultaneously open connections (after allowing for persistent
connections).
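
As a rough sanity check, the relationship is just Little's law: simultaneous
connections ~= request rate x time each transfer takes. A minimal sketch in
Python; all the numbers are made-up illustrative assumptions, not
measurements:

    # Little's law, roughly: simultaneous connections ~= request rate *
    # time each connection stays open.  All figures are illustrative
    # assumptions.
    requests_per_sec = 10         # assumed aggregate request rate
    bytes_per_response = 30000    # assumed average transfer size
    modem_bytes_per_sec = 3300    # ~28.8kbps modem, about 3.3 KB/s

    seconds_open = bytes_per_response / float(modem_bytes_per_sec)
    concurrent = requests_per_sec * seconds_open
    print("time per transfer: %.1f s" % seconds_open)      # ~9.1 s
    print("simultaneous connections: %.0f" % concurrent)   # ~91

Serve the same objects over a fast LAN instead and the transfer time, and
with it the connection count, drops by a couple of orders of magnitude.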

Especially with a pre-forking server like Apache, the number of simultaneous
connections is a critical parameter. Each established connection requires its
own Apache child, which is tied up (and ties up memory) until it closes that
connection. If you load test Apache from another machine on a fast LAN,
throwing the odd 100 requests per second at the server, you will hardly be
stressing it at all with typical transfer sizes. Do the same from 100 28.8k
modems, and you will see a very different story. If you have
resource-intensive CGI processes, this effect can be magnified.
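
To make the contrast concrete, here is a minimal sketch of a slow-client load
generator in Python. The hostname, path, client count, and rate are
illustrative assumptions, and a real harness would add timing and error
handling:

    # Sketch: hold many connections open by reading each response at
    # modem speed.  HOST/PATH/CLIENTS/rate are illustrative assumptions.
    import socket, threading, time

    HOST, PORT, PATH = "www.example.com", 80, "/"
    CLIENTS = 100          # simulated simultaneous slow users
    BYTES_PER_SEC = 3300   # ~28.8kbps modem

    def slow_fetch():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Shrink the receive buffer so the server can't just dump the
        # whole response into the kernel buffer and move on.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
        s.connect((HOST, PORT))
        s.sendall(("GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (PATH, HOST)).encode())
        while True:
            chunk = s.recv(BYTES_PER_SEC)  # at most one second's worth
            if not chunk:
                break
            time.sleep(1.0)                # then idle, like a modem link
        s.close()

    threads = [threading.Thread(target=slow_fetch) for _ in range(CLIENTS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

Each open connection pins one Apache child for the duration of the transfer,
which is exactly the resource pressure a LAN-speed test never generates.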

Another consideration is memory caching of disk data. If a test requests only
a limited repertoire of web objects, they will all end up cached in memory
after the first few accesses. Performance will be quite different when there
is more web content than can be cached in memory and accesses are effectively
random.
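
For a home-grown test, one way around this is to draw requests at random from
an object set larger than RAM. A minimal sketch in Python, where "urls.txt"
is an assumed input file (one path per line), not something any particular
tool provides:

    # Sketch: pick each test URL at random from a large list, so the
    # server can't keep the whole working set cached in memory.
    # "urls.txt" is an assumed input file, one path per line.
    import random

    with open("urls.txt") as f:
        paths = [line.strip() for line in f if line.strip()]

    def next_path():
        # Uniform random choice avoids the "same few objects, always
        # hot in cache" bias of a fixed replay script.
        return random.choice(paths)

    for _ in range(5):  # feed these to whatever client does the fetching
        print(next_path())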

There are very expensive high-end products, such as WebLoad, that do simulate
real-world conditions. WebLoad can do replay and other scripted testing, as
mentioned in another reply. I'm not aware of any freeware or shareware that
does all of this.


Alfred Perlstein wrote:
> 
> * Jerry Preeper <preeper@cts.com> [000225 17:47] wrote:
> > I was wondering if anyone has run across a good tool to load test a web
> > server and connectivity.  I'd like to see how my web server stands up to a
> > load of like 5x, 10x, 20x and 50x of what I get now.  The two issues I
> > probably need to test would be connectivity to the server and then server
> > performance under load.  I'm not sure how to go about simulating the
> > connections that would probably also need to do things like run some of the
> > Perl programs, MySQL accesses, and such to have it be a fair test... Also,
> > it would be nice if it could interpolate results to give an idea of where
> > it would die or be dead for real purposes and show what the
> > bottlenecks might be (RAM, NIC, etc.).  Any ideas?
> 
> Apache comes with a program called 'ab' that lets you benchmark
> requests against a server.  The only problem is that it completely
> blasts the @#$@#$ out of the server, because it issues requests as fast
> as they are completed; you can limit the number of concurrent requests,
> though.  You can also only specify one URL, but you could script
> several runs of the program hitting different images/CGIs.
> 
> Anyone know any others?
> 
> -Alfred


