Date:      Thu, 10 Oct 2024 17:07:23 -0600
From:      Alan Somers <asomers@freebsd.org>
To:        FreeBSD Net <freebsd-net@freebsd.org>
Subject:   How does the TCP measurement period work?
Message-ID:  <CAOtMX2hKZy9omwHXLpKw42QwpGcUmTwLqSp=OWYYZ8cqOwwQ6w@mail.gmail.com>

Can somebody please explain to me how the TCP measurement period
works?  When does h_ertt decide to take a new measurement?

Motivation:
I recently saw a long-distance connection that should've been capable
of 80+ MBps suddenly drop to < 1 MBps.  Subsequent analysis of the
pcap file showed that while the typical RTT was 16.5 ms, there were a
few spikes as high as 380 ms that coincided with the drop in
throughput.  The surprising part was that even though RTT returned to
a good value, the throughput stayed low for the entire remaining
transfer, which lasted 750 s.  I would've expected throughput to
recover once RTT did.  My theory is that h_ertt never made a new
measurement.  However, I cannot reproduce the problem using dummynet
on a local VM.  With dummynet, as soon as I return the RTT to normal,
the throughput quickly recovers, as one would expect.
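For concreteness, here is a minimal sketch of the kind of dummynet toggle
I'm describing.  The pipe/rule numbers, ipfw invocations, and timings are
illustrative only, not my actual test script:

#!/usr/bin/env python3
"""Toggle a dummynet delay to simulate an RTT spike during a transfer.

Sketch only: pipe/rule numbers, delays, and sleep times are illustrative.
Must run as root on a FreeBSD box with ipfw and dummynet loaded.
"""
import subprocess
import time

def ipfw(*args: str) -> None:
    """Run an ipfw(8) command and fail loudly if it errors."""
    subprocess.run(["ipfw", *args], check=True)

# Push all IP traffic through dummynet pipe 1 (rule number is arbitrary).
ipfw("add", "100", "pipe", "1", "ip", "from", "any", "to", "any")

# Baseline delay.  The exact mapping from pipe delay to observed RTT
# depends on how the rule matches each direction of the connection.
ipfw("pipe", "1", "config", "delay", "16ms")
time.sleep(30)          # let the transfer reach steady state

# Spike the delay for a few seconds, mimicking the 380 ms RTT excursion.
ipfw("pipe", "1", "config", "delay", "380ms")
time.sleep(5)

# Restore the baseline.  In this VM setup the throughput recovers quickly;
# on the long-distance connection it never did.
ipfw("pipe", "1", "config", "delay", "16ms")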

Grateful for any insights.
-Alan


