Monday, September 8, 2008

2008-09-09 Reading - End to End Congestion Control


ANALYSIS OF THE INCREASE AND DECREASE ALGORITHMS FOR CONGESTION AVOIDANCE IN COMPUTER NETWORKS

Dah-Ming Chiu and Raj Jain

Summary: Analyzes the various increase/decrease algorithms possible for communication protocols in computer networks. Proves mathematically that additive increase with multiplicative decrease achieves both efficiency and fairness. Also considers the other combinations of additive/multiplicative increases and decreases. The proof is offered both algebraically and graphically; the graphical proof is very elegant and convincing (Figs 4, 5, 6), and it extends to many users - the graph just becomes multi-dimensional. Outlines the convergence constraints for both efficiency and fairness, proves that additive increase/multiplicative decrease gives the optimal convergence behavior, and outlines why nonlinear controls are less attractive.
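The graphical argument is easy to replay numerically. Here is a minimal two-user sketch (my own, not code from the paper) with assumed parameters: bottleneck capacity 100, additive step a = 1, decrease factor b = 0.5. Starting from a very unequal split, the allocation drifts toward the fairness line while the total oscillates around capacity.

    # Minimal sketch, not code from either paper: two users, one bottleneck,
    # synchronous binary feedback as in the Chiu/Jain model.
    def aimd_trace(x1, x2, capacity=100.0, a=1.0, b=0.5, steps=40):
        """Each step the network signals congestion iff total demand
        exceeds capacity; both users react to the same feedback bit."""
        trace = [(x1, x2)]
        for _ in range(steps):
            if x1 + x2 > capacity:
                x1, x2 = b * x1, b * x2      # multiplicative decrease
            else:
                x1, x2 = x1 + a, x2 + a      # additive increase
            trace.append((x1, x2))
        return trace

    # Additive increase moves the allocation toward the fairness line
    # (it shrinks the ratio between the two rates), while multiplicative
    # decrease keeps the ratio fixed and pulls the total back under capacity.
    for x1, x2 in aimd_trace(80.0, 10.0)[::10]:
        print("x1=%6.1f  x2=%6.1f  x1 share=%.2f" % (x1, x2, x1 / (x1 + x2)))

User 1's share falls steadily from 0.89 toward 0.5, which is exactly the spiral toward the fairness line in the paper's figures.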

Background required: Basic math. Some basic system/network theory?

Discussion points: I wonder how this kind of knowledge is disseminated. These guys seem to come from a different community than the guys who wrote the other paper - one very much theory oriented, the other implementation oriented. Did they discover the additive increase/multiplicative decrease idea independently? Perhaps we should lend a patient ear to the latest theory to come out of Cory Hall? Maybe become conversant in the "network theory" language in the first place?


CONGESTION AVOIDANCE AND CONTROL

Van Jacobson and Michael J. Karels

Summary: Offers justification and experimental verification of several congestion control mechanisms that have since become standard. Includes slow start - self-clocking, and much better than plain additive increase for getting a connection going before congestion avoidance takes over. RTT estimation - basically a low-pass filter with a filter gain constant; offers no justification for why a value of 0.9 is chosen (I recall from other papers that the parameter needs tuning). Exponential backoff on retransmissions - reaches stability faster (isn't this the standard implementation these days?). Outlines additive increase/multiplicative decrease in a different way. Includes pretty convincing comparative data, and the appendices give details of the algorithm implementation.
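For the RTT filter and retransmit backoff, a minimal sketch of what the notes above describe (my own notation, not the paper's code). ALPHA = 0.9 is the gain value mentioned above, and BETA = 2 and the 64-second cap are assumed constants for illustration, not taken from the paper.

    # Minimal sketch of the smoothed-RTT low-pass filter and exponential
    # retransmit backoff discussed above. ALPHA = 0.9 is the gain value
    # from these notes; BETA = 2 and the cap are assumed for illustration.
    ALPHA = 0.9
    BETA = 2.0

    def update_srtt(srtt, sample, alpha=ALPHA):
        # exponentially weighted moving average of RTT measurements
        return alpha * srtt + (1.0 - alpha) * sample

    def rto(srtt, beta=BETA):
        # retransmit timeout as a simple multiple of the smoothed RTT
        return beta * srtt

    def backoff(rto_value, retries, cap=64.0):
        # double the timeout on each successive retransmission of the same
        # segment, up to a cap, so a congested net is probed less and less
        return min(rto_value * (2 ** retries), cap)

    srtt = 1.0                       # seconds, assumed initial estimate
    for sample in (1.2, 0.9, 3.0, 1.1):
        srtt = update_srtt(srtt, sample)
        print("sample=%.1fs  srtt=%.2fs  rto=%.2fs" % (sample, srtt, rto(srtt)))

Note the paper itself goes further than this plain multiplier and folds RTT variation into the timeout calculation; the sketch is just the filter-gain idea the summary refers to.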

Background required: Basic TCP.

Discussion points: As noted for the previous paper. A potential criticism is the lack of description of their data-gathering setup; if we wanted to replicate their experiment we would have to contact them for details. Come to think of it ... given that their mechanisms have mostly become the standard, how WOULD we replicate their experiment today?

We could possibly include as optional reading the paper "Simulation-based Comparisons of Tahoe, Reno, and SACK TCP" by Kevin Fall and Sally Floyd. That paper basically outlines the next chapter in the congestion story - Reno, New Reno, SACK.
