From owner-freebsd-current Tue Jun 3 14:14:42 1997
Return-Path:
Received: (from root@localhost) by hub.freebsd.org (8.8.5/8.8.5) id OAA23059 for current-outgoing; Tue, 3 Jun 1997 14:14:42 -0700 (PDT)
Received: from mailhub.Stanford.EDU (mailhub.Stanford.EDU [36.21.0.128]) by hub.freebsd.org (8.8.5/8.8.5) with ESMTP id OAA23054 for ; Tue, 3 Jun 1997 14:14:40 -0700 (PDT)
Received: from tree2.Stanford.EDU (tree2.Stanford.EDU [36.83.0.37]) by mailhub.Stanford.EDU (8.8.5/8.8.5/L) with SMTP id OAA19259; Tue, 3 Jun 1997 14:14:23 -0700 (PDT)
Date: Tue, 3 Jun 1997 14:14:07 -0700 (PDT)
From: "Amr A. Awadallah"
Subject: TCP: Brief Comment on cwnd Inflation during Fast Recovery.
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
To: undisclosed-recipients:;
Sender: owner-current@FreeBSD.ORG
X-Loop: FreeBSD.org
Precedence: bulk

All,

For those of you who expressed interest in our comments on the effects of cwnd inflation: we have added more comments and results on this subject at this URL ( http://www-leland.stanford.edu/~aaa/tcp ). We included the patch (diff file) for the current FreeBSD tcp_input.c. We would appreciate it if interested developers provide feedback to us about this change (in terms of observed throughput). The change is only a couple of lines of code and apparently leads to higher-throughput TCP sources. We also provide arguments for and against the modification.

The main argument for the modification is that with the current cwnd inflation, more packets are sent into the network during the fast recovery period, at the rate at which duplicate ACKs are coming back. This is counterintuitive, given that entering fast recovery means the TCP source has just lost a packet (as indicated by the duplicate ACKs), so the source should throttle back its sending rate. By continuing to send at the rate at which duplicate ACKs arrive, the source may force the network to drop another one of its packets (e.g.
due to RED gateways [Floyd and Jacobson, IEEE/ACM Transactions on Networking, August 1993], or simple buffer overflow). This may lead to invoking another fast recovery cycle, or worse, invoking slow start (this can be seen clearly in the cwnd vs. time plots on the web page). The modification we made gives the network a breathing period of less than 1 RTT, which allows the network to catch its breath by draining congested buffers. This avoids another packet loss, and thus leads to smoother cwnd vs. time behavior. It still allows packets to be sent during the fast recovery period, but at a much lower rate.

The main argument against the modification is that by using the normal congestion-avoidance cwnd increase during the fast recovery period (rather than cwnd inflation), the source will not be able to keep the pipe full (thus violating VJ's recommendations). This leads to a burst of back-to-back packets at the end of the fast recovery period. We note, though, that schemes like FACK [Mathis and Mahdavi, SIGCOMM '96] allow such a burst to be regulated (by pacing it); SACK [Floyd and Fall, CCR paper] also tackles this problem. We also note that this burst of back-to-back packets is known to exist in current TCP implementations (at least we observed it rather frequently in FreeBSD 2.1.6, as shown on the web page). The burst occurs simply because cwnd slides a considerable distance when the non-duplicate ACK arrives, opening up room for many new packets to be sent. A further argument against: the modification leads to a more aggressive TCP source, since it starts with a larger window size at the end of the fast recovery period. It has also been pointed out to us that most TCP researchers think the principles behind the current fast recovery algorithm work well.

One last comment: we stumbled on the cwnd inflation spikes during our research on TCP (which is on a totally different aspect).
The spikes appeared strange to us at first because in most papers on TCP congestion avoidance (at least those we read), one rarely sees a cwnd vs. time plot showing the cwnd inflation spikes. This was the main reason we were originally misled into believing this was a bug in TCP, until others corrected us by pointing out that this is truly how fast recovery was designed to work (that is, the spikes are a feature of TCP!).

Thanks for your interest and feedback,

Sincerely,

Amr A. Awadallah (aaa@stanford.edu)
Chetan Rai (crai@cs.stanford.edu)

-----------------------------------------------------
PS: Sorry if you receive this e-mail more than once; it means you are subscribed to too many mailing lists :)