Csep-561-Reading-6B

BBR: Congestion-Based Congestion Control

  • The paper explains that many of the Internet's performance issues (circa 2016) trace back to a design choice made in the 1980s for handling TCP congestion.
  • Specifically: treating all packet loss as a signal of congestion.
  • As NICs (Network Interface Controllers) and memory chips became faster and cheaper, this equivalence started to fail.
  • It is now cheap to provision buffers orders of magnitude larger than the bandwidth-delay product; loss-based senders keep those buffers full, causing excess delay (bufferbloat).
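To make the scale concrete, here is a rough back-of-the-envelope calculation (my own sketch with illustrative numbers, not figures from the paper) of how a buffer much larger than the bandwidth-delay product translates into queueing delay:

```python
# Back-of-the-envelope: queueing delay added by an over-sized buffer.
# All numbers here are illustrative assumptions, not from the paper.

link_rate_bps = 10e6   # 10 Mbit/s bottleneck link
rtt_s = 0.040          # 40 ms round-trip propagation delay

# Bandwidth-delay product: the amount of data in flight that
# exactly fills the pipe with no standing queue.
bdp_bytes = link_rate_bps / 8 * rtt_s
print(f"BDP: {bdp_bytes / 1e3:.0f} kB")  # 50 kB

# A buffer two orders of magnitude larger than the BDP:
buffer_bytes = 100 * bdp_bytes

# A loss-based sender only backs off once this buffer overflows,
# so the standing queue can add up to buffer/rate of extra delay.
queue_delay_s = buffer_bytes / (link_rate_bps / 8)
print(f"Worst-case queueing delay: {queue_delay_s:.1f} s")  # 4.0 s
```

Even on this modest link, an over-sized buffer can turn a 40 ms path into one with multiple seconds of delay.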
  • Research endeavoring to reach an optimal operating point (max bandwidth, min delay/loss) was cut short by Jeffrey M. Jaffe's proof that no distributed algorithm could converge to such an optimal point.
  • However, this theorem assumes individual measurements are ambiguous (e.g., an RTT increase alone cannot reveal its cause). If we instead look at a connection's measurements over time, we can disambiguate with reasonable confidence.
  • Hence the title of the paper: instead of congestion control triggered by any packet loss, they attempt congestion control based on actual (inferred) congestion.
  • The best part about this paper is that their solution, BBR, runs purely on the sender and requires no costly changes to the network itself; once again, deployability proves to be an important feature.
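The disambiguation-over-time idea can be sketched with BBR's two core filters: a windowed max over observed delivery rates estimates bottleneck bandwidth (queues can only slow delivery down), and a windowed min over observed RTTs estimates propagation delay (queues can only add delay). This is a toy illustration under my own simplifications, not the paper's full state machine (which also includes pacing gains and probing phases):

```python
from collections import deque

class BBRStyleEstimator:
    """Toy sketch of BBR's two core filters (not the full algorithm).

    Window sizes are illustrative assumptions, counted in samples
    rather than the time- and RTT-based windows BBR actually uses.
    """

    def __init__(self, bw_window=10, rtt_window=100):
        self.bw_samples = deque(maxlen=bw_window)    # recent delivery rates
        self.rtt_samples = deque(maxlen=rtt_window)  # recent RTT samples

    def on_ack(self, delivered_bytes, interval_s, rtt_s):
        # Each ACK yields one delivery-rate sample and one RTT sample.
        self.bw_samples.append(delivered_bytes / interval_s)
        self.rtt_samples.append(rtt_s)

    @property
    def btlbw(self):
        # Max filter: the bottleneck bandwidth appears as the highest
        # recent delivery rate, since queueing only lowers the rate.
        return max(self.bw_samples)

    @property
    def rtprop(self):
        # Min filter: the propagation delay appears as the lowest
        # recent RTT, since queueing only inflates the RTT.
        return min(self.rtt_samples)

    @property
    def bdp(self):
        # Target amount in flight: fills the pipe with no standing queue.
        return self.btlbw * self.rtprop
```

For example, after feeding two ACK samples, `est.on_ack(12500, 0.01, 0.05)` and `est.on_ack(10000, 0.01, 0.04)`, the estimator reports `btlbw` of 1.25 MB/s, `rtprop` of 40 ms, and a BDP of 50 kB; a sender pacing at these estimates needs no loss signal to stay near the optimal operating point.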