Date: Sun, 21 Jul 2002 15:22:55 +0200
From: Andre Oppermann <oppermann@pipeline.ch>
To: Matthew Dillon <dillon@apollo.backplane.com>
Cc: freebsd-hackers@freebsd.org, freebsd-net@freebsd.org
Subject: Re: Another go at bandwidth delay product pipeline limiting for TCP
Message-ID: <3D3AB5AF.F2F637C3@pipeline.ch>
References: <200207200103.g6K135Ap081155@apollo.backplane.com>
Matthew Dillon wrote:
>
>     Ok, I am having another go at trying to implement a bandwidth
>     delay product calculation to limit the number of inflight packets.
>
>     The idea behind this feature is twofold:
>
>     (1) If you have huge TCP buffers and there is no packet loss, our
>         TCP stack will happily build up potentially hundreds of outgoing
>         packets even though most of them just sit in the interface queue
>         (or, worse, in your router's interface queue!).
>
>     (2) If you have a bandwidth constriction, such as a modem, this
>         feature attempts to place only as many packets in the pipeline
>         as is necessary to fill the pipeline, which means that you can
>         type in one window and send large amounts of data (scp, ftp)
>         in another.

If I read the code correctly, this is done per TCP session.  So this
would also help in cases where a server with a really good connection
has lots of slow (modem/DSL) clients?

--
Andre

>     Note that this is a transmitter-side solution, not a receiver-side
>     solution.  This will not help your typing if you are downloading a
>     lot of stuff and the remote end builds up a lot of packets on your
>     ISP's router.  Theoretically we should be able to also restrict the
>     window we advertise, but that is a much more difficult problem.
>
>     This code is highly experimental, so the sysctls are set up for
>     debugging (and it is disabled by default).  I'm sure a lot of tuning
>     can be done.  The sysctls are as follows:
>
>         net.inet.tcp.inflight_enable    default off (0)
>         net.inet.tcp.inflight_debug     default on (1)
>         net.inet.tcp.inflight_min       default 1024
>         net.inet.tcp.inflight_max       default seriously large number
>
>     Under normal operating conditions the min default would usually be
>     at least 4096.  For debugging it is useful to allow it to be 1024.
>     Note that the code will not internally allow the inflight size to
>     drop under 2 * maxseg (two segments).
>
>     This code calculates the bandwidth delay product and artificially
>     closes the transmit window to that value.  The bandwidth delay
>     product for the purposes of transmit window calculation is:
>
>         bytes_in_flight = end_to_end_bandwidth * srtt
>
>     Examples:
>
>     Transport     Bandwidth        Ping       Bandwidth delay product
>                                    (-s 1440)
>     GigE          100 MBytes/sec   1.00 ms    100000 bytes
>     100BaseTX     10 MBytes/sec    0.65 ms    6500 bytes
>     10BaseT       1 MByte/sec      1.00 ms    1000 bytes
>     T1            170 KBytes/sec   5.00 ms    850 bytes
>     DSL           120 KBytes/sec   20.00 ms   2400 bytes
>     ISDN          14 KBytes/sec    40.00 ms   560 bytes
>     56K modem     5.6 KBytes/sec   120 ms     672 bytes
>     Slow client   50 KBytes/sec    200 ms     10000 bytes
>
>     Now let's say you have a TCP send buffer of 128K and the remote end
>     has a receive buffer of 128K, and window scaling works.  On a
>     100BaseTX connection with no packet loss, your TCP sender will queue
>     up to 91 packets to the interface even though it only really needs
>     to queue up 5 packets.  With net.inet.tcp.inflight_enable turned on,
>     the TCP sender will only queue up 4 packets.  On the GigE link,
>     which actually needs 69 packets in flight, 69 packets will be
>     queued up.
>
>     That's what this code is supposed to do.  This is my second attempt.
>     I tried this last year too, but it was too messy.  This time I think
>     I've got it down to where it isn't as messy.
>
>                                         -Matt
>                                         Matthew Dillon
>                                         <dillon@backplane.com>

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-net" in the body of the message
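To make the clamping arithmetic in Matt's mail easier to follow, here is a
small standalone C sketch of it: compute bandwidth * srtt and bound the
result by the inflight_min/inflight_max tunables and the 2 * maxseg floor he
mentions.  The function and variable names (bdp_window, bw_bytes_per_sec and
so on) are made up for illustration, and the real code operates on the tcpcb
inside the kernel TCP stack, so treat this as a reading aid rather than the
actual patch.

#include <stdio.h>
#include <stdint.h>

/* Tunables mirroring the sysctls described above (debugging defaults). */
static uint64_t inflight_min = 1024;        /* net.inet.tcp.inflight_min */
static uint64_t inflight_max = 1ULL << 30;  /* "seriously large number"  */

/*
 * bytes_in_flight = end_to_end_bandwidth * srtt, clamped so it never drops
 * below inflight_min or two segments, and never exceeds inflight_max.
 */
static uint64_t
bdp_window(uint64_t bw_bytes_per_sec, double srtt_sec, uint32_t maxseg)
{
	uint64_t bwnd = (uint64_t)(bw_bytes_per_sec * srtt_sec);

	if (bwnd < inflight_min)
		bwnd = inflight_min;
	if (bwnd < (uint64_t)2 * maxseg)
		bwnd = (uint64_t)2 * maxseg;
	if (bwnd > inflight_max)
		bwnd = inflight_max;
	return (bwnd);
}

int
main(void)
{
	/* A few rows from the table above, with 1440-byte segments. */
	struct example {
		const char *name;
		uint64_t bw;	/* bytes/sec */
		double srtt;	/* seconds   */
	} ex[] = {
		{ "GigE",      100000000, 0.001   },  /* ~69 segments      */
		{ "100BaseTX",  10000000, 0.00065 },  /* ~4 segments       */
		{ "56K modem",      5600, 0.120   },  /* floored at 2*mss  */
	};

	for (size_t i = 0; i < sizeof(ex) / sizeof(ex[0]); i++) {
		uint64_t w = bdp_window(ex[i].bw, ex[i].srtt, 1440);
		printf("%-10s window %7llu bytes (~%llu segments)\n",
		    ex[i].name, (unsigned long long)w,
		    (unsigned long long)(w / 1440));
	}
	return (0);
}

Note that with inflight_min at its normal setting of at least 4096 (rather
than the 1024 debugging value) the modem and ISDN rows would be floored at
4096 instead of 2 * maxseg.  To experiment with the real thing you would
simply flip net.inet.tcp.inflight_enable to 1 with sysctl(8).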