Date:      Fri, 21 Jul 2000 14:04:08 -0700
From:      Ulf Zimmermann <ulf@alameda.net>
To:        Ulf Zimmermann <ulf@alameda.net>
Cc:        hackers@FreeBSD.ORG
Subject:   Re: Maybe OT, maybe not
Message-ID:  <20000721140408.L79232@PacHell.TelcoSucks.org>
In-Reply-To: <20000718152043.A18798@PacHell.TelcoSucks.org>; from ulf@alameda.net on Tue, Jul 18, 2000 at 03:20:44PM -0700
References:  <20000718152043.A18798@PacHell.TelcoSucks.org>

On Tue, Jul 18, 2000 at 03:20:44PM -0700, Ulf Zimmermann wrote:
> Hello,
> 
> I got a problem I need to get "solved" as fast as possible. I have here
> a firewall box (FreeBSD based, yeah!) and need to test this in conjunction
> with web crawling. Our current FW1 based Sun firewalls die very fast.
> 
> I need to emulate about 9,000 or more concurrent open tcp sessions. Each
> session should be random sized from 500 to maybe 20,000 bytes data transferred.
> In addition to these I need x amount of initiated tcp sessions which never
> get answered. FW1 with its tcp connection table will create an entry for
> these "failed" sessions and hold it up to its time out. Our crawlers do not.
> 
> So I am basically looking for a load generator and a "server". Anyone got
> something like that lying around ?


Ok, I got some suggestions, but let me explain the whole thing.

What do I have to test ? I need to test the ability of the firewall, which
keeps a session table, to handle several tens of thousands of entries.

Why so many ? Each of our current crawlers will create 300 tcp sessions max.
The crawler tries to start a tcp session to a destination (tcp SYN packet),
and the firewall then creates an entry in its session table. If the remote
site never answers (not reachable, down, etc.) the crawler will time out
the tcp session after about 1 minute, but the firewall (because it never
sees another packet) will not time out the entry for about 5 minutes (by
default it's even 30 minutes). So depending on how many dead sites we hit,
we end up with about 9,000 active tcp sessions (30 crawlers at 300 sessions
per machine) plus several thousand to tens of thousands of dead entries on
the firewall.
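
To put rough numbers on the dead entries: if each crawler runs into, say,
2 unreachable sites per second (that rate is a guess) and the firewall holds
each dead entry for 5 minutes, that is 30 * 2 * 300 = 18,000 stale entries
sitting in the table, on top of the 30 * 300 = 9,000 live sessions.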

My current test scenario boils down to this:

http servers (something lightweight, serving files from memory, no logging;
a rough sketch of what I mean is below the diagram)

   ||||

[firewall]

   ||||

load generator.
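
For the http server end, I am picturing something roughly like the untested
sketch below. The port and payload size are made up, and a single accept()
loop like this would have to be forked/threaded or run in many copies to
keep thousands of sessions open at once:

/*
 * Minimal sketch of a "serve from memory, no parsing, no logging"
 * http responder.  Untested; port and payload size are placeholders.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
        struct sockaddr_in sin;
        static char body[20000];        /* canned payload, lives in memory */
        char hdr[128], req[4096];
        int s, c, one = 1;

        memset(body, 'x', sizeof(body));
        snprintf(hdr, sizeof(hdr),
            "HTTP/1.0 200 OK\r\nContent-Length: %lu\r\n\r\n",
            (unsigned long)sizeof(body));

        s = socket(AF_INET, SOCK_STREAM, 0);
        setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(8080);             /* arbitrary test port */
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(s, (struct sockaddr *)&sin, sizeof(sin));
        listen(s, 128);

        for (;;) {
                c = accept(s, NULL, NULL);
                if (c < 0)
                        continue;
                /* slurp whatever request came in, ignore its contents */
                (void)read(c, req, sizeof(req));
                write(c, hdr, strlen(hdr));
                write(c, body, sizeof(body));
                close(c);
        }
}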

The direction I am kinda thinking for the load generator is something like
a main thread which spawns off maybe 2,000 threads (if I can even run that
many, or maybe more), each thread using maybe libfetch to generate an http
request either to a real server on the other side, or to a non-existent ip
to simulate a dead site. The request then needs to time out after, let's
say, 30 seconds so that the dead entry stays on the firewall.
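
Very roughly, something like the untested sketch below is what I have in
mind. The URLs, thread count and 30 second timeout are placeholders, it
assumes libfetch's fetchGetURL()/fetchTimeout behave as the fetch(3) man
page says, and whether 2,000 threads actually fly depends on the threads
library. Link with -lfetch -pthread:

/*
 * Rough load generator sketch: a pile of threads, each doing one http
 * fetch via libfetch, against either a live server or a dead address.
 */
#include <stdio.h>
#include <pthread.h>
#include <fetch.h>

#define NTHREADS        2000

/* hypothetical targets: one live http server, one address nobody answers */
static const char *urls[] = {
        "http://10.0.1.10:8080/testfile",
        "http://10.0.99.99:8080/testfile",
};

static void *
worker(void *arg)
{
        long n = (long)arg;
        const char *url;
        char buf[8192];
        FILE *f;

        /* send every 5th thread at the dead address to leave a stale entry */
        url = (n % 5 == 0) ? urls[1] : urls[0];

        f = fetchGetURL(url, "");
        if (f != NULL) {
                while (fread(buf, 1, sizeof(buf), f) > 0)
                        ;       /* drain the reply, then drop the session */
                fclose(f);
        }
        return (NULL);
}

int
main(void)
{
        pthread_t tid[NTHREADS];
        long i;

        fetchTimeout = 30;      /* give up on dead sites after ~30 seconds */

        for (i = 0; i < NTHREADS; i++)
                if (pthread_create(&tid[i], NULL, worker, (void *)i) != 0)
                        break;  /* out of threads, run with what we got */
        while (--i >= 0)
                pthread_join(tid[i], NULL);
        return (0);
}

Run a few copies of that on separate boxes and it should get into the
neighbourhood of the 9,000 concurrent sessions plus dead entries described
above.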


-- 
Regards, Ulf.

---------------------------------------------------------------------
Ulf Zimmermann, 1525 Pacific Ave., Alameda, CA-94501, #: 510-769-2936
Alameda Networks, Inc. | http://www.Alameda.net  | Fax#: 510-521-5073





