
Congestion Control

COMP211 Lectures

These notes are low-effort, as I was catching up in this module. See the videos and slides for more detail.

Congestion

Congestion occurs when too many sources send too much data, too fast, for the network to handle. This can cause:

  • Long Delays
  • Packet Loss

This is different from flow control, where a single sender sends data too fast for its receiver to handle.

End to End Congestion Control

  • No explicit feedback from the network.
  • Congestion is inferred from observed loss and delay.
  • Approach taken by TCP.

Network Assisted Congestion Control

  • Routers provide direct feedback to the sending and receiving hosts of flows passing through a congested router.
  • May indicate congestion level or explicitly set a sending rate.

This is used in TCP ECN, ATM, and DECbit.

TCP Congestion Control - AIMD

AIMD (Additive Increase, Multiplicative Decrease) is a distributed, asynchronous algorithm that optimises congested flow rates network-wide.

The sender increases its sending rate until packet loss (congestion) occurs, then decreases the sending rate on each loss event.

We do this by using the following two methods:

  • Additive Increase
    • Increase the sending rate by 1 maximum segment size every RTT until loss is detected.
  • Multiplicative Decrease
    • Cut the sending rate in half at each loss event.

This results in a saw-tooth behaviour as the protocol probes for bandwidth.
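As a rough illustration (not from the lectures), here is a minimal AIMD sketch in Python. The rate is measured in MSS per RTT, and `loss_detected` is an assumed per-RTT loss signal:

```python
# Minimal AIMD sketch: rate is measured in MSS per RTT.
def aimd_update(rate, loss_detected):
    if loss_detected:
        return max(1.0, rate / 2)  # multiplicative decrease: halve on loss
    return rate + 1.0              # additive increase: +1 MSS every RTT

# Example: repeatedly probing for bandwidth produces the saw-tooth pattern.
rate = 1.0
for rtt in range(30):
    loss = rate > 10.0             # pretend loss occurs above 10 MSS/RTT
    rate = aimd_update(rate, loss)
    print(rtt, rate)
```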

Multiplicative Decrease

In addition to the definition above, the sending rate is:

  • Cut in half on loss detected by triple duplicate ACK. (TCP Reno)
  • Cut to 1 maximum segment size (MSS) when loss is detected by timeout. (TCP Tahoe)
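A sketch of the two loss reactions described above, assuming cwnd and ssthresh are tracked in units of MSS (the function and variable names are illustrative, not a real TCP stack):

```python
MSS = 1  # work in units of one maximum segment size

def on_loss(cwnd, ssthresh, cause):
    """Adjust cwnd and ssthresh when a loss event is detected (sketch)."""
    ssthresh = cwnd / 2              # remember half the window at the loss
    if cause == "triple_dup_ack":    # loss inferred from 3 duplicate ACKs (Reno)
        cwnd = ssthresh              # cut the sending rate in half
    elif cause == "timeout":         # loss inferred from a timeout (Tahoe)
        cwnd = 1 * MSS               # cut back to 1 MSS
    return cwnd, ssthresh
```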

Congestion Window

The congestion window is the number of unACKed packets that can be in flight at any one time.

The behaviour roughly is:

  1. Send cwnd bytes, wait one RTT for ACKs, then send more bytes; a worked example follows this list.

    \[\text{TCP Rate} \approx\frac{\text{cwnd}}{\text{RTT}}\text{ (bytes/sec)}\]
  2. The TCP sender limits transmission:

    \[\text{LastByteSent} - \text{LastByteAcked}\leq \text{cwnd}\]
  3. cwnd is dynamically adjusted in response to observed network congestion (implementing TCP congestion control).
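A worked example of the rate estimate and the in-flight constraint; the window size and RTT here are made-up numbers:

```python
# Rough throughput estimate from TCP rate ≈ cwnd / RTT.
cwnd_bytes = 100_000                 # assumed congestion window: 100 kB
rtt_seconds = 0.1                    # assumed round-trip time: 100 ms
rate = cwnd_bytes / rtt_seconds      # ≈ 1,000,000 bytes/sec

# The sender may only transmit while LastByteSent - LastByteAcked <= cwnd.
def can_send(last_byte_sent, last_byte_acked, cwnd):
    return last_byte_sent - last_byte_acked <= cwnd
```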

TCP Slow Start

When a connection begins, increase the rate exponentially until the first loss event:

  1. Initially cwnd = 1 MSS.
  2. Double cwnd every RTT.
  3. This is done by incrementing cwnd by 1 MSS for every ACK received.

To avoid congestion, we can switch to linear increases in cwnd:

  • When cwnd reaches $\frac12$ of its value from just before the timeout.

We can implement this as follows:

  • Keep a variable called ssthresh.
  • On a loss event, set ssthresh to $\frac12$ of the cwnd value just before the loss.
  • Switch from slow start to linear growth once cwnd reaches ssthresh (see the sketch below).
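A sketch combining slow start with the ssthresh switch to linear (congestion-avoidance) growth. Working in MSS units and the per-ACK update are standard, but the function names are illustrative:

```python
MSS = 1  # work in units of one maximum segment size

def on_ack(cwnd, ssthresh):
    """Grow cwnd each time an ACK arrives (sketch)."""
    if cwnd < ssthresh:
        return cwnd + MSS           # slow start: +1 MSS per ACK, so cwnd doubles per RTT
    return cwnd + MSS * MSS / cwnd  # linear growth: roughly +1 MSS per RTT

def on_timeout(cwnd, ssthresh):
    """React to a timeout: remember half the window, then restart slow start."""
    ssthresh = cwnd / 2
    cwnd = 1 * MSS
    return cwnd, ssthresh
```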

TCP Fairness

For a system to be fair, if $K$ TCP sessions share the same bottleneck link of bandwidth $R$, each should have an average rate of $\frac R K$.

TCP is fair because, with two competing TCP sessions:

  • Additive increase gives a slope of 1 as throughput increases.
  • Multiplicative decrease reduces throughput proportionally.

This results in equal bandwidth provided both clients have the same RTT.
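A small simulation (under simplifying assumptions: equal RTTs, and both flows detecting loss in the same RTT whenever the bottleneck of capacity R is exceeded) showing two AIMD flows converging towards an equal share:

```python
# Two AIMD flows share a bottleneck of capacity R (rates in MSS per RTT).
# Equal RTTs and synchronised loss detection are simplifying assumptions.
R = 100.0
x1, x2 = 80.0, 10.0                  # deliberately unequal starting rates

for rtt in range(200):
    if x1 + x2 > R:                  # link overloaded: both flows see loss
        x1, x2 = x1 / 2, x2 / 2      # multiplicative decrease
    else:
        x1, x2 = x1 + 1, x2 + 1      # additive increase

print(round(x1), round(x2))          # the two rates end up close together
```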

You can open more TCP connections to the same server to make your connection “unfair”.