Network Performance

Metrics of Network Performance

The two main metrics are

  • bandwidth: the rate at which data can be transferred, e.g. 20 Mbps
  • latency: the time it takes for data to reach its destination, e.g. 24 ms
    • commonly measured in terms of round trip time (RTT)

There is an interesting interplay between these two values. For one, different applications will value one over the other:

  • video streaming needs bandwidth, but latency hardly matters (just start the stream a few seconds later)
  • a client/server exchange of small (~1-byte) messages behaves very differently if it's sent across a room with 1 ms of latency rather than over a transcontinental link with a 100 ms round trip time; increasing the bandwidth from 1 Mbps to 100 Mbps has essentially no effect here (see the sketch after this list)
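
To make the contrast concrete, here is a minimal Python sketch of the usual back-of-the-envelope model, total time ≈ latency + size / bandwidth. The function name and the specific link numbers are illustrative assumptions, not measurements:

  def transfer_time(size_bits, bandwidth_bps, latency_s):
      # One-way delivery time: propagation delay plus transmission time.
      return latency_s + size_bits / bandwidth_bps

  # A tiny ~1-byte request: latency dominates, bandwidth is irrelevant.
  print(transfer_time(8, 1e6, 0.001))      # across the room, 1 Mbps    -> ~0.001 s
  print(transfer_time(8, 100e6, 0.100))    # transcontinental, 100 Mbps -> ~0.100 s

  # A 1-GB video: bandwidth dominates, the 100 ms of latency is noise.
  print(transfer_time(8e9, 1e6, 0.100))    # 1 Mbps   -> ~8000 s
  print(transfer_time(8e9, 100e6, 0.100))  # 100 Mbps -> ~80 s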

Delay x Bandwidth Product

The product of these two metrics is a useful value, too. Consider the network as a pipe with data flowing through it. With this perspective, the bandwidth is the pipe's diameter and the latency is its length, so the delay x bandwidth product is its volume: the amount of data that can be "in flight" on the link at any one time.

Next consider a low-speed and a high-speed network, one capable of 1 Mbps (DSL) and the other of 1 Gbps (fiber), both with an RTT of 100 ms.

  • To send a 1-MB data file across the DSL network requires 80 "pipes-full".
  • To send a 1-MB data file across the fiber network requires about 1/12 of a pipe-full (the arithmetic is sketched after this list).
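
A quick Python sketch of the arithmetic behind those two numbers (the variable names are ours, and 1 MB is taken as 8,000,000 bits):

  rtt = 0.100          # seconds of round trip time
  file_bits = 8e6      # a 1-MB file, taking 1 MB = 8,000,000 bits

  for name, bandwidth_bps in [("DSL, 1 Mbps", 1e6), ("fiber, 1 Gbps", 1e9)]:
      pipe_bits = bandwidth_bps * rtt       # the delay x bandwidth product
      pipes_full = file_bits / pipe_bits    # how many pipes-full the file needs
      print(f"{name}: pipe holds {pipe_bits:,.0f} bits -> {pipes_full:g} pipes-full")

  # DSL, 1 Mbps:   pipe holds 100,000 bits     -> 80 pipes-full
  # fiber, 1 Gbps: pipe holds 100,000,000 bits -> 0.08 pipes-full (about 1/12)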

A consequence of this is that, relatively speaking, latency matters much more in the fiber setting, since each pipe-full (one RTT's worth of data) holds so much. On the DSL network, having to send 81 pipes-full instead of 80 (and so incurring 81 RTTs instead of 80) adds only a 1/80 relative increase in delay. On the fiber network, needing 2 RTTs instead of 1 doubles the delay.
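
The same comparison in numbers, under the simplifying assumption that each extra pipe-full costs one extra RTT:

  dsl_increase   = (81 - 80) / 80   # 0.0125 -> only 1.25% more delay
  fiber_increase = (2 - 1) / 1      # 1.0    -> 100% more delay (doubled)
  print(dsl_increase, fiber_increase)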