Date:      Sun, 04 Apr 2004 16:07:38 -0600
From:      Brandon Erhart <berhart@ErhartGroup.COM>
To:        Chuck Swiger <cswiger@mac.com>
Cc:        freebsd-net@freebsd.org
Subject:   Re: FIN_WAIT_[1,2] and LAST_ACK
Message-ID:  <6.0.2.0.2.20040404160622.01c84428@mx1.erhartgroup.com>
In-Reply-To: <4070860F.6030701@mac.com>
References:  <6.0.2.0.2.20040404152043.01c83320@mx1.erhartgroup.com> <4070860F.6030701@mac.com>

Yes, it pays attention to /robots.txt.

But I am writing my own -- I don't want to use rsync, wget, or anything like
that. This is part of an archiving project, and it uses so many FDs because
it has tons of connections open to DIFFERENT servers at different times --
not just one site.

Any advice on the timeouts? I don't really care about the RFC, honestly
:-P. Like I said, I'm going for sheer speed.


Brandon

At 04:02 PM 4/4/2004, you wrote:
>Brandon Erhart wrote:
>>I am writing a network application that mirrors a given website (like a 
>>souped-up "wget"). I use a lot of FDs, and was getting connect() errors 
>>when I would run out of local_ip:local_port tuples. I lowered the MSL so 
>>that TIME_WAIT would time out very quickly (yes, I know, this is "bad", but 
>>I'm going for sheer speed here), and it alleviated the problem a bit.
>>However, I have run into a new problem. I am getting a good number of 
>>connections stuck in FIN_WAIT_1, FIN_WAIT_2, or LAST_ACK that stick around 
>>for a long while. I have been unable to find much information on a timeout 
>>for these states.
>
>Well, these are defined in RFC-793 (aka STD-7).
>
>If you want to mirror the content of a given website rapidly, a good 
>approach would be to use a tool like rsync and duplicate the changed 
>portions at the filesystem level rather than mirroring via HTTP requests.
>
>Using HTTP/1.1 persistent connections and pipelining also ought to greatly 
>reduce the number of new connections you need to open, which should speed 
>up your program significantly while reducing load on the servers you're 
>mirroring.
>
>Since I've given some helpful advice (or so I think :-), perhaps you'll be 
>willing to listen to a word of caution: if your client is pushing so hard 
>that it exhausts the local machine's resources, you're very probably doing 
>something that reasonable website administrators would consider to be 
>abusive and you may cause denial-of-service conditions for other users of 
>that site.
>
>Does your tool pay attention to /robots.txt?
>
>--
>-Chuck



Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?6.0.2.0.2.20040404160622.01c84428>