Date:      Mon, 26 Jul 1999 10:10:31 -0500 (CDT)
From:      Mohit Aron <aron@cs.rice.edu>
To:        luigi@labinfo.iet.unipi.it (Luigi Rizzo)
Cc:        freebsd-net@freebsd.org, druschel@cs.rice.edu (Peter Druschel)
Subject:   Re: FreeBSD tuning for webserver performance
Message-ID:  <199907261510.KAA20768@cs.rice.edu>
In-Reply-To: <199907260328.FAA03377@labinfo.iet.unipi.it> from "Luigi Rizzo" at Jul 26, 99 05:28:24 am


> 
> The data on SYN processing is interesting, but the overhead really seems large!
> 
> May I ask how you measured it -- by instrumenting the code paths, or
> by looking at a tcpdump of the delay between a SYN request on the
> wire and the SYN|ACK response? In the latter case, was there any
> other traffic generated by the server?
> 

I measured it in two ways. First, I instrumented the code path using
the cycle counter. This gave about 50 usec when SYNs were received
back to back (i.e., everything stays in the cache) and about 150 usec
when a SYN arrived some time after other processing had been done
(i.e., not everything was still in the cache). Second, I measured the
connection overhead in a stripped-down fast webserver, first measuring
the throughput when requests were repeatedly initiated on separate
connections, and then when requests were issued repeatedly over the
same connection. The difference in time per request approximately
gives the TCP connection overhead; this was closer to 150 usec. So I
assume that in normal webserver processing, connection establishment
takes about 150 usec due to cache effects.
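
For illustration, the instrumentation was of this general shape -- a
minimal userland sketch, not the actual measurement code (the 400 MHz
clock rate is an assumption, and the rdtsc constraint is
i386-specific):

#include <stdio.h>

/* read the Pentium time-stamp counter (i386, gcc inline asm) */
static __inline unsigned long long
rdtsc(void)
{
	unsigned long long tsc;

	__asm__ __volatile__("rdtsc" : "=A" (tsc));
	return (tsc);
}

int
main(void)
{
	unsigned long long before, after;
	double cpu_mhz = 400.0;		/* assumed clock rate */

	before = rdtsc();
	/* ... code path under test, e.g. SYN processing ... */
	after = rdtsc();

	/* cycles divided by cycles-per-usec gives elapsed usec */
	printf("elapsed: %.1f usec\n", (after - before) / cpu_mhz);
	return (0);
}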


> A quick experiment locally on a K6-400 w/ 100mbit card (de)
> running tcpdump shows about 40us between the ping request and
> response, and some 90-100us between the SYN and the corresponding
> SYN|ACK (this at the bpf level).
> So the SYN processing overhead appears to be more in the 50us range,
> the rest being generic packet processing overhead.
> Surely worth some optimization, though...
> 

Your numbers correspond to mine, assuming the SYNs are being sent
repeatedly and no other processing is being done.
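
For reference, a measurement of this sort can be scripted at the BPF
level; here is a minimal libpcap sketch (not the setup you used -- the
interface "de0" and port 80 are placeholders, and pairing "first SYN,
next SYN-bearing segment" is a crude heuristic that only works on an
otherwise idle link):

#include <sys/time.h>
#include <pcap.h>
#include <stdio.h>

int
main(void)
{
	char errbuf[PCAP_ERRBUF_SIZE];
	struct bpf_program prog;
	struct pcap_pkthdr hdr;
	const u_char *pkt;
	struct timeval syn_ts;
	int have_syn = 0;
	pcap_t *p;

	p = pcap_open_live("de0", 96, 1, 10, errbuf);
	if (p == NULL) {
		fprintf(stderr, "pcap_open_live: %s\n", errbuf);
		return (1);
	}
	/* match any segment with the SYN bit set (SYN and SYN|ACK) */
	if (pcap_compile(p, &prog, "tcp port 80 and tcp[13] & 0x02 != 0",
	    1, 0) < 0 || pcap_setfilter(p, &prog) < 0) {
		fprintf(stderr, "filter: %s\n", pcap_geterr(p));
		return (1);
	}
	for (;;) {
		if ((pkt = pcap_next(p, &hdr)) == NULL)
			continue;	/* read timeout; keep polling */
		if (!have_syn) {
			syn_ts = hdr.ts;	/* clock starts at the SYN */
			have_syn = 1;
		} else {
			long us = (hdr.ts.tv_sec - syn_ts.tv_sec) * 1000000L
			    + (hdr.ts.tv_usec - syn_ts.tv_usec);
			printf("SYN -> SYN|ACK: %ld usec\n", us);
			have_syn = 0;
		}
	}
	/* NOTREACHED */
}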


> 
> because you get an immediate notification (check tcp_output() -- an
> error generates a tcp_quench()), reducing the window to a single
> segment and avoiding further congestion and recurrence of such events
> (and you already have a full window in transit, so there is no
> performance penalty for your connection -- as ACKs come in, the
> window reopens to its full size).
> 
> Keeping queues short is always useful -- even in the case of your
> busy web server, just to give one example, the SYN|ACKs for new
> connections might incur a large delay simply because you have a huge
> number of data packets queued for delivery.
> 

I accept your argument, but I think short driver output queues would
be useful only in a limited number of cases -- namely, when the
bottleneck on the path to the clients is at the driver itself. I would
imagine that in most cases the bottleneck is at some external router,
and routers usually have large queues.
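
For concreteness, here is a self-contained toy model of the mechanism
you describe -- the real code lives in tcp_output() and tcp_quench()
in the 4.4BSD-derived stack, and the names below mirror it, but this
is an illustration, not kernel source:

#include <errno.h>
#include <stdio.h>

struct tcpcb {
	unsigned long snd_cwnd;		/* congestion window, bytes */
	unsigned long t_maxseg;		/* maximum segment size */
};

/* stand-in for ip_output(): fails when the interface queue is full */
static int
ip_output_stub(int ifq_full)
{
	return (ifq_full ? ENOBUFS : 0);
}

/* mirror of tcp_quench(): collapse the window to a single segment */
static void
tcp_quench(struct tcpcb *tp)
{
	tp->snd_cwnd = tp->t_maxseg;
}

int
main(void)
{
	struct tcpcb tp = { 16 * 1460, 1460 };	/* 16-segment window */
	int error;

	error = ip_output_stub(1);	/* interface queue overflowed */
	if (error == ENOBUFS)
		tcp_quench(&tp);	/* immediate local congestion signal */

	printf("snd_cwnd after quench: %lu bytes\n", tp.snd_cwnd);
	return (0);
}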


> 
> unfortunately, fixing TCP as you suggest, while desirable, is not
> easy because there is no fine-grained timer to use.
> 

I have a paper submitted on how to implement very fine-grained timers
in an OS with very low overhead; rate-based pacing is one of the
intended applications of these timers. Hopefully it'll get published
soon. :)
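
To sketch the idea in userland (with made-up numbers -- the mechanism
in the paper is in-kernel): rate-based pacing spaces packets by
gap = packet_size / rate instead of sending a whole window back to
back, which requires timer resolution much finer than the traditional
10 ms BSD tick:

#include <stdio.h>
#include <time.h>

int
main(void)
{
	const double rate_bps = 10e6;		/* assumed target: 10 Mbit/s */
	const double pkt_bits = 1460.0 * 8;	/* assumed packet size */
	struct timespec gap;
	int i;

	/* inter-packet gap = packet size / rate, here ~1.2 ms */
	gap.tv_sec = 0;
	gap.tv_nsec = (long)(pkt_bits / rate_bps * 1e9);

	for (i = 0; i < 8; i++) {
		/* a real sender would transmit a segment here */
		printf("packet %d, next send in %ld ns\n", i, gap.tv_nsec);
		nanosleep(&gap, NULL);	/* needs sub-tick timer resolution */
	}
	return (0);
}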



- Mohit

