Monday, September 29, 2008

2008-09-30 Readings - Wireless Networks Overview and Architectures


Per suggestions in class, aiming for more succinctness to help facilitate peer discussion.

MACAW: A MEDIA ACCESS PROTOCOL FOR WIRELESS LAN's

Vaduvur Bharghavan, Alan Demers, Scott Shenker, Lixia Zhang

Summary:

Discusses some fairly generic problems in wireless networks, which happened to be studied in the context of a network with a five-letter acronym. Covers the hidden and exposed terminal problems in vanilla CSMA, the RTS/CTS interaction, and the inherent unfairness and starvation of "normal" exponential back-off, with a pretty hacky solution (can't think of any better alternative ...). Also covers multiple stream buffers, the need for ACK, DS, and RRTS messages, multicast problems, and the need for per-destination backoff. Some issues for pathological topologies are left unresolved.
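As a reminder of the backoff fix (my own sketch of the MILD idea, with illustrative constants rather than the paper's exact values):

```python
# Sketch of the two backoff policies discussed above. The constants and bounds
# here are illustrative, not the paper's exact values.
BO_MIN, BO_MAX = 20, 1000   # backoff window bounds, in slot times (assumed)

def bep_update(backoff, collided):
    """Vanilla binary exponential backoff: double on failure, reset on success.
    Winners reset to the minimum and tend to keep winning -> starvation."""
    return min(2 * backoff, BO_MAX) if collided else BO_MIN

def mild_update(backoff, collided):
    """MACAW-style MILD: multiplicative increase (x1.5) on failure,
    linear decrease (-1) on success, so the backoff adjusts gently."""
    return min(int(1.5 * backoff), BO_MAX) if collided else max(backoff - 1, BO_MIN)
```

If I remember right, MACAW also copies the current backoff value into packets so stations contending near the same congested receiver converge to roughly the same backoff, rather than letting one lucky winner starve everyone else.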

Background:

You'd better know your basic wireless CSMA/CA stuff and maybe have seen the RTS/CTS and exposed/hidden terminal problems before. Otherwise it's going to be tough going, like the router papers. The pictures help, though.

Discussions & Criticisms:


A review of the state of the art in 802.11? How much of this paper made it into current wireless protocols?

Mobility is not touched on - so are the results valid for mobile hosts? What about moving hosts, or hosts on Eurotunnel-type trains? Relativistic speeds in space not allowed :)

Scalability?

If there is a shortcoming of the paper, it is that the simulation is not described in any depth. Does it thus fail the "reproducible" criterion for experiments? Then again, too few papers outline their methodology in a truly reproducible fashion anyway ...




A COMPARISON OF MECHANISMS FOR IMPROVING TCP PERFORMANCE OVER WIRELESS LINKS

Hari Balakrishnan, Venkata N. Padmanabhan, Srinivasan Seshan and Randy H. Katz

Summary:

Comparative study of several techniques for handling wireless link-layer losses, including link-layer retransmission mechanisms, both TCP-aware (snoop) and TCP-unaware. Also covers end-to-end TCP loss recovery: vanilla TCP Reno, TCP New Reno, TCP SACK, SMART TCP (cumulative ACK plus the sequence number of the packet that caused the receiver to generate the ACK), TCP Explicit Loss Notification (TCP ELN), and TCP ELN with retransmission on every duplicate ACK. Also looks at split-connection with a TCP proxy and split-connection with TCP SMART.

Bottom line - the TCP-aware link-layer approach (snoop) combined with SMART selective acknowledgments performs best.
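To make "TCP-aware link layer" concrete, here is a toy sketch of what a snoop-style agent at the base station does (my own simplification with made-up callbacks, not the paper's implementation): cache unacknowledged segments heading to the wireless host, retransmit locally on duplicate ACKs, and shield the sender from those duplicates.

```python
# Toy snoop-style agent at the base station (illustrative sketch, not the real
# protocol). Segments are numbered 0, 1, 2, ... and an ACK carries the next
# expected segment number.
class SnoopAgent:
    def __init__(self, wireless_send, wired_send):
        self.wireless_send = wireless_send   # callback toward the mobile host
        self.wired_send = wired_send         # callback toward the fixed sender
        self.cache = {}                      # seg_no -> payload, not yet acked
        self.last_ack = 0

    def on_data_from_sender(self, seg_no, payload):
        self.cache[seg_no] = payload         # keep a copy in case the wireless hop drops it
        self.wireless_send(seg_no, payload)

    def on_ack_from_receiver(self, ack):
        if ack > self.last_ack:              # new cumulative ACK: clean cache, pass it on
            for s in [s for s in self.cache if s < ack]:
                del self.cache[s]
            self.last_ack = ack
            self.wired_send(ack)
        elif ack in self.cache:              # duplicate ACK: likely a wireless loss;
            # retransmit locally and *suppress* the dup ACK so the sender's
            # congestion window is not punished for a non-congestion loss.
            self.wireless_send(ack, self.cache[ack])
```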

Background:

Most of the required background is provided in the paper - you just need to know what the techniques are. The snoop paper is worth a read though, if only because it also has Randy's name on it and this paper basically says snoop rocks.

Discussions & Criticisms:


The paper claims the exponential-distribution error model gives results that are suggestive for other error models. Why is this the case? I think one of Stoica's, Shenker's, or Paxson's early papers provided a good justification for why an exponential arrival/loss model is fairly general, but I forgot the reason. Anyone remember why?

Another methodology question: how do we ensure we get only the controlled, artificially inserted errors, and not "real" bit errors from the wireless link?

How much of this is just a glorified measurement and comparison of the snoop protocol against other schemes? Much of the result basically says snoop rules.

Are there follow-up results with regard to the temporal model of wireless loss?

The changing-error-rate results show that each solution has a critical error rate above which throughput decreases dramatically, marked by the inflection points in Fig. 12. It would be interesting to plot the vertical axis as a percentage of the theoretically achievable throughput given the error rate, and to extend the horizontal axis to even higher error rates. Basically, vanilla TCP Reno falls apart at very low error rates.

A fundamental question re the Nyquist limit: AT&T WaveLAN at 915 MHz means a max theoretical data rate of ~2 Gbps (treating the carrier frequency as the usable bandwidth and assuming binary signaling). 802.11 is at 2.4 GHz (5 GHz for 802.11a), hence roughly 5-10 Gbps max by the same back-of-envelope. So is future wireless tech fundamentally limited in speed? Ethernet is moving to 10 Gbps soon, with 100 Gbps Ethernet in the works. It would be weird to have a dramatic speed mismatch. A whole new cycle of panic -> research -> more panic -> more research? Or do we solve the problem with CDMA or some other ingenious trick ...
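For reference, the back-of-envelope above is just the noiseless Nyquist limit, treating the carrier frequency as if it were the usable channel bandwidth (a rough assumption):

```latex
% Nyquist rate for a noiseless channel of bandwidth B using M signal levels:
C = 2 B \log_2 M \ \text{bits/s}.
% With binary signaling (M = 2) and B \approx 915\ \text{MHz}:
C \approx 2 \times 915 \times 10^{6} \approx 1.8\ \text{Gbps},
% i.e. the "~2 Gbps" figure above. Higher-order modulation (larger M) and, in
% practice, the Shannon limit with SNR are what actually govern the ceiling,
% so the "fundamental" limit is softer than this makes it look.
```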



Tuesday, September 23, 2008

2008-09-23 Readings - Routers


Did this NOT get published on time .... ?!?!?!?

Dude, it says saved but not published >.<

SCALING INTERNET ROUTERS USING OPTICS

Isaac Keslassy, Shang-Tse Chuang, Kyoungsik Yu, David Miller, Mark Horowitz, Olav Solgaard, Nick McKeown

Summary: Talks about the architecture needed to build a 100 Tbps router from a large number of very fast (~160 Gbps) line cards. Proposes a setup where data passes through initial queues, virtual output queues, an optical switching backplane, and output queues. The main contribution is an argument that the traditional crossbar architecture just won't scale. The "speed independent" arrayed waveguide grating thing is really amazing - the switching speed limit is no longer the RC delay of buffers/lines, but the Nyquist limit for modulating a light wave!
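My mental model of the load-balancing idea behind the architecture (a toy sketch with made-up structure, not the paper's design): stage one spreads arriving packets evenly over intermediate line cards regardless of destination, stage two delivers them from the VOQs to the real outputs, and both stages can then be fixed, dumb meshes.

```python
# Toy model of the two-stage load-balanced switch idea (illustrative only).
N = 4  # number of linecards

# voq[m][j]: packets parked at intermediate linecard m, destined for output j
voq = [[[] for _ in range(N)] for _ in range(N)]

def stage1_spread(time_slot, arrivals):
    """Stage 1: a fixed, periodic mesh connects input i to intermediate linecard
    (i + time_slot) % N, regardless of packet destination, so traffic is spread
    uniformly with no scheduling decisions at all."""
    for i, pkt in arrivals.items():          # arrivals: input port -> one packet
        voq[(i + time_slot) % N][pkt["dst"]].append(pkt)

def stage2_deliver(time_slot):
    """Stage 2: an identical fixed mesh connects intermediate linecard m to
    output (m + time_slot) % N and serves that VOQ -- again, no scheduler."""
    out = []
    for m in range(N):
        j = (m + time_slot) % N
        if voq[m][j]:
            out.append((j, voq[m][j].pop(0)))
    return out
```

The catch, which as I recall the paper spends real effort on, is that spreading packets over different intermediate line cards can reorder them within a flow, so the middle-stage buffers have to be managed carefully.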

Background:
Need basic router design knowledge. At least passing familiarity with cross-bars, head-of-line blocking, etc. Also some ideas about transistor circuits and optical stuff (like WDM) would be helpful.

Discussion & Criticism:
Wouldn't it be awesome if the entire backbone were optical? Optical end-to-end, with optical slow-light buffers, optical switches, optical CPUs (?!?!?), and electrical-optical transducers only at the end hosts.

The paper is a bit ... dense ... I had to trust that they did their due diligence with regard to the proofs etc. Their "intuitive" explanations are not all that intuitive. Do router designers really need to know all this stuff about transistors and optics? Really low level!!!!

Would be nice to have a photo of these things, with the line cards, crossbars, etc. identified. Would be even more awesome to have a labeled photo showing a line card with all its subcomponents, like the buffers.

This paper should be re-read after reading the 2nd paper.


A FAST SWITCHED BACKPLANE FOR A GIGABIT ROUTER

Nick McKeown

Summary: Talks about how to build a 10 Gbps router. Marks the watershed from shared-bus routers to crossbar-based routers (?)
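The core mechanism, as I understand it, is virtual output queueing plus an iterative round-robin matching algorithm (iSLIP) for the crossbar scheduler. A single-iteration sketch with simplified pointer handling (my own toy, not the article's code):

```python
# Single-iteration iSLIP-style matching over virtual output queues (a toy sketch).
def islip_round(voq, grant_ptr, accept_ptr):
    """voq[i][j] = number of cells queued at input i for output j.
    grant_ptr[j] / accept_ptr[i] are the round-robin pointers."""
    N = len(voq)
    # Request: every input requests every output it has cells for.
    requests = {j: [i for i in range(N) if voq[i][j] > 0] for j in range(N)}
    # Grant: each output grants the requesting input nearest its pointer.
    grants = {}
    for j, reqs in requests.items():
        if reqs:
            i = min(reqs, key=lambda x: (x - grant_ptr[j]) % N)
            grants.setdefault(i, []).append(j)
    # Accept: each input accepts the granting output nearest its pointer.
    matches = []
    for i, outs in grants.items():
        j = min(outs, key=lambda x: (x - accept_ptr[i]) % N)
        matches.append((i, j))
        # Pointers advance only past *accepted* grants, which is what
        # desynchronizes the round-robin schedules over time.
        grant_ptr[j] = (i + 1) % N
        accept_ptr[i] = (j + 1) % N
    return matches
```

Run slot after slot, this is what lets a VOQ crossbar avoid the head-of-line blocking that caps a FIFO input-queued switch at roughly 58% throughput.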

Background:
Need basic router design knowledge: at least passing familiarity with crossbars, virtual output queues, head-of-line blocking, etc.



Sunday, September 21, 2008

2008-09-18 Readings - Quality of Service


Written post-facto due to network prelim ...

FUNDAMENTAL DESIGN ISSUES FOR THE FUTURE INTERNET

Scott Shenker

Summary: A really helpful paper that frames many issues concerning the whole QoS business. Offers a game-theoretic, utility-maximization view of the goal of network design. Argues that global utility would be increased by introducing multiple classes of service in the Internet, while acknowledging that the increase in global utility may come at the cost of decreases in individual utility. Suggests that applications could receive services implicitly assigned by the network, or explicitly request services from the network; pros and cons exist for each, with no clear resolution, and pricing mechanisms are needed to align incentives. Uses application utility/bandwidth profiles to argue for admission control. Over-provisioning is seen as a less cost-effective alternative.
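The framing, in my own notation (not the paper's), is roughly this optimization:

```latex
% Network design as utility maximization over service allocations (my paraphrase):
\max_{\{b_i\}} \; \sum_i U_i(b_i)
\quad \text{subject to} \quad \sum_i b_i \le C,
% where U_i is application i's utility as a function of the bandwidth/service b_i
% it receives and C is capacity. The case for service differentiation is that when
% the U_i have different shapes (elastic vs. hard real-time), a single best-effort
% class cannot maximize the sum; the case for admission control is that for
% non-concave (S-shaped) U_i, total utility can actually drop as more flows are
% admitted, so refusing some flows raises the sum.
```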

Background: None really, just need to appreciate the network basics.

Discussion & Criticism: The paper already identifies some of the potential issues. The application utility-bandwidth profiles would be hard to characterize in detail, but the general classes outlined in the paper are sufficient. However, the game-theoretic argument may fall apart due to incentive misalignments. The biggest concern is that the global increase in utility could come at the cost of decreased utility for some users. Without any mechanism for utility transfer, there would be no incentive for the disadvantaged users to cooperate and switch to the global optimum. It's hard to imagine what utility transfer mechanisms could usefully be put in place. In business/economics, utility transfer is generally facilitated by a transfer of money; a mirror mechanism does not exist in the Internet.

Perhaps a more fundamental solution is a new architecture with better accounting abilities. Recall from the original DARPA design paper that accounting was a low-priority goal.



SUPPORTING REAL-TIME APPLICATIONS IN AN INTEGRATED SERVICES PACKET NETWORK: ARCHITECTURE AND MECHANISM

David D. Clark, Scott Shenker, Lixia Zhang

Summary: Describes the architecture and mechanisms of what became the IntServ standards. Has a nice taxonomy of Internet applications vis-a-vis their delay and adaptation requirements. Outlines the token (leaky) bucket mechanism used to ensure provable delay bounds and guaranteed service - it's actually token bucket + weighted fair queueing that does the trick. Scheduling employs ideas like spreading and transferring jitter. Hints at the need for pricing mechanisms to support all this. Outlines general issues associated with the need for admission control.
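Since the paper's explanation of the token bucket is (as complained below) pretty roundabout, here is the whole mechanism in a few lines (a generic sketch, not the paper's pseudocode): a flow characterized by rate r and depth b may send only when enough tokens have accumulated, which bounds its output to r*t + b bytes over any interval of length t; in the idealized fluid model, a WFQ scheduler that guarantees the flow rate r then yields a queueing delay bound of roughly b/r.

```python
import time

class TokenBucket:
    """Generic (r, b) token bucket: tokens accrue at rate r up to depth b."""
    def __init__(self, rate_bytes_per_s, bucket_bytes):
        self.r = rate_bytes_per_s
        self.b = bucket_bytes
        self.tokens = bucket_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Accrue tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes   # conforming packet: spend tokens
            return True
        return False                      # non-conforming: delay (or mark/drop)
```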

Background: Network basics.

Discussion & Criticism: The paper uses a totally over-elaborate and roundabout way to explain the token bucket mechanism. I guess a second-pass explanation will always be more succinct, but having read a succinct explanation before, I was really quite struck by the complexity of their treatment. The mathematics is quite elaborate but not a problem - it's presumably checked by Parekh and Gallager. The handshake for establishing the level of needed service was deferred to later work. Scott said QoS has been a waste of time for him ... well, what can we say ... Business + practicality > technology? Or maybe it's another warning sign that technology developed without an eye to implementation practicality will inevitably fail? Scott is awesome for butchering his own work, though.



Tuesday, September 16, 2008

2008-09-16 Readings - Router Congestion Control


RANDOM EARLY DETECTION GATEWAYS FOR CONGESTION AVOIDANCE

Sally Floyd & Van Jacobson

Summary:
Outlines Random Early Detection (RED), an active queue management scheme that works with TCP to ensure better congestion avoidance. The basic idea is to keep queue sizes low while still accommodating bursts of traffic. It does so by probabilistically marking packets when the average queue length exceeds a threshold; presumably TCP senders/receivers respond to marked packets by adjusting their congestion windows. Contains detailed math regarding implementation and how to adaptively estimate various parameters. Has several desirable properties - flows get notified about congestion before packets are dropped, flows with a larger share of the bandwidth get more marked packets, global synchronization (where all flows reduce their windows at the same time upon congestion) is avoided, there is no bias against bursty traffic, etc.
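The marking logic itself is compact; a sketch of the algorithm as I read it (parameter values here are illustrative, and the idle-queue case is omitted):

```python
import random

# RED parameters (illustrative values; the paper discusses how to set them).
W_Q, MIN_TH, MAX_TH, MAX_P = 0.002, 5, 15, 0.02

avg = 0.0     # EWMA of the queue length
count = 0     # packets since the last mark

def on_packet_arrival(queue_len):
    """Return True if this arriving packet should be marked (or dropped)."""
    global avg, count
    avg = (1 - W_Q) * avg + W_Q * queue_len          # low-pass filter the queue size
    if avg < MIN_TH:
        count = 0
        return False
    if avg >= MAX_TH:
        count = 0
        return True                                   # always mark above max threshold
    # Between the thresholds: mark with a probability that rises with avg and
    # with count, so marks are spread roughly uniformly rather than clustered.
    count += 1
    p_b = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
    p_a = p_b / max(1e-9, 1 - count * p_b)
    if random.random() < p_a:
        count = 0
        return True
    return False
```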

Background required: Need to know basic TCP. The paper itself provides an overview of the other needed material, like ERD, DECbit, etc.

Discussion & Criticism: Did I miss this in the paper, or did they not specify the details of how the TCP end hosts respond to marked packets? Presumably the marked packets continue forward to the destination, requiring the destination to send marked ACKs back to the sender, so that the sender may reduce its window. Is this how it works, or do they have explicit "marked" packets going back from the router to the sender directly? Or do they mark packets simply by dropping them?!?!?

Also, much of their data proves only that RED actually works and does what it's supposed to do. The comparative data against drop-tail gateways are not overwhelmingly convincing. Figure 5, comparing average queue size, shows RED is definitely better - but what's the significance? I.e., how much slower is a queue length of 70 than a queue length of 20? Also, does the data from Fig. 5 conflict with the data in Figs. 12-14 with regard to queue length and utilization? It seems the simulation results are very topology dependent. Also, it seems RED has the same efficiency as the drop-tail and random-drop schemes, per Figs. 12-14. The fairness-for-bursty-traffic argument is dubious - one could say bursty traffic should be given an incentive at the routing layer to smooth out its behavior. Also not sure Fig. 16 tells what the authors claim it tells ...

Overall it's not clear how much better RED is compared with drop tail or other schemes. The other paper shows that RED does badly under certain conditions.


CONGESTION CONTROL FOR HIGH BANDWIDTH-DELAY PRODUCT NETWORKS

Dina Katabi, Mark Handley, Charlie Rohrs


Summary: Outlines XCP, a new transport-level protocol that works well both in "conventional" environments and in environments with high bandwidth-delay product links. XCP relies on XCP-capable routers in the network to give explicit feedback about the state of congestion. The main mechanism is a congestion header in which the sender indicates its congestion window and RTT, and one or more routers may modify a feedback field. The feedback is a fine-grained indication of congestion (unlike the binary lost-packet/no-lost-packet indication), and is mirrored by the receiver back to the sender. Contains detailed math from control theory to prove stability. The comparative data against other queue management schemes are reasonably convincing.
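The efficiency controller's aggregate feedback, computed once per control interval, is (as I recall the paper) of the form:

```latex
% XCP aggregate feedback computed once per average RTT d (as I recall the paper):
\phi = \alpha \cdot d \cdot S \;-\; \beta \cdot Q,
% where S is the spare bandwidth (link capacity minus input traffic rate), Q is
% the persistent queue, and \alpha, \beta are constants chosen for stability (the
% paper's control-theoretic analysis gives roughly \alpha = 0.4, \beta = 0.226).
% The fairness controller then divides \phi among packets AIMD-style: equal
% per-flow additive increase when \phi > 0, multiplicative decrease when \phi < 0.
```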

Background: Basic TCP. The paper contains an overview of the other required material, like RED, AVQ, CSFQ, etc. (CSFQ is a previous reading). Did they confuse RED with REM?

Discussion & Criticism:
One should find it highly unconventional to have a transport protocol that totally relies on the routing infrastructure. This is a fundamental shift in design philosophy. It's not clear that the benefits of XCP (which are relatively clearly demonstrated) are worth giving up the ability to change transport protocols without modifying the underlying routing infrastructure.

Their claim regarding the security of the protocol sounds like total rubbish. The protocol protects against the particular kind of misbehaving flow that they identified - i.e., a fire-hose flow blasting away without regard to congestion control. But they're missing the case where someone spoofs a router and sends packets with spurious congestion headers, telling everyone else to decrease their windows and the attacker's flow to increase its window. The congestion info is sent in clear text, and there's no way to verify that routers, and not some hacker, modified the markings. If we're to have active networks, then we've got to make sure people can't abuse them.




Thursday, September 11, 2008

2008-09-11 Readings - Fair Queueing


ANALYSIS AND SIMULATION OF A FAIR QUEUEING ALGORITHM

Alan Demers, Srinivasan Keshav, Scott Shenker


CORE-STATELESS FAIR QUEUEING: ACHIEVING APPROXIMATELY FAIR BANDWIDTH ALLOCATION IN HIGH SPEED NETWORKS


Ion Stoica, Scott Shenker, Hui Zhang


I'll talk about both papers together. I'm not sure what to say about them: I had, at the same time, a very strong reaction and no reaction at all. This confusion is due to several reasons.

First, I'm not sure about the importance of the topic. Is it the case that FIFO is still by far the predominant queueing mechanism, and it has more or less worked so far?

Second, I'm not sure the need for fair queueing has been sufficiently motivated. The unresponsive media applications argument seems to drift towards vague QoS territory ... The uncooperative flows motivation seems to be the only convincing one ... I'll look at readings from Vern Paxson's web security class to see how they deal with that.

Third, both papers have so many different results from so many different angles that the only bottom line I could extract is: fair queueing is better than FIFO, but how much better? It depends a great deal on topology/assumptions/traffic. And all the other fair queueing mechanisms in their comparative studies have their own trade-offs as well. So it's not clear how much fair queueing would buy you, and at what cost.

Fourth, CSFQ and presumably other flow-based FQ mechanisms rely on a way to label flows, which relies on a way to define flows. Both papers state the difficulty of a good definition of flows, and the lack of a way to identify and label them. So shouldn't flow definition/identification/labeling be a prerequisite study?

Lastly, I can think of a whole host of non-technical reasons that ISPs would refuse to adopt fair queueing. So is the whole topic a field of research that did not get used, was developed with insufficient motivation, has unclear outcomes, depends on non-existent technology, and wouldn't get adopted in real life anyway even if it works? I'm hesitant to think Ion and Scott would do research with very little to gain ...
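For concreteness during discussion, the CSFQ core behavior boils down to a one-line drop rule (my own condensation, assuming I'm reading the mechanism right): edge routers label packets with their flow's estimated rate, and core routers drop probabilistically against an estimated fair share, with no per-flow state:

```python
import random

def core_forward(packet_label_rate, fair_share_rate):
    """Core-stateless FQ drop decision: a flow sending at rate r when the fair
    share is f keeps roughly min(1, f/r) of its packets, so every flow's
    accepted rate is capped near f without any per-flow state in the core."""
    drop_prob = max(0.0, 1.0 - fair_share_rate / packet_label_rate)
    return random.random() >= drop_prob   # True = forward, False = drop
```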

Hence, I very much look forward to class discussion to clear my thoughts.


Monday, September 8, 2008

2008-09-09 Reading - End to End Congestion Control


ANALYSIS OF THE INCREASE AND DECREASE ALGORITHMS FOR CONGESTION AVOIDANCE IN COMPUTER NETWORKS

Dah-Ming Chiu and Raj Jain

Summary: Analyzes the various increase/decrease algorithms possible for congestion avoidance in computer networks. Proves mathematically that additive increase with multiplicative decrease achieves both efficiency and fairness. Also considers the other combinations of additive/multiplicative increases/decreases. Proofs are offered both algebraically and graphically; the graphical proof is very elegant and convincing (Figs. 4, 5, 6), and extends to many users - the graph just has more dimensions. Outlines convergence constraints for both efficiency and fairness. Proves that additive increase/multiplicative decrease gives the optimal convergence behavior. Outlines why nonlinear controls are less attractive.
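A tiny toy simulation (mine, not the paper's) makes the graphical argument tangible: two flows sharing a unit-capacity link, probing additively and backing off multiplicatively, drift toward the fair allocation.

```python
# Toy AIMD dynamics for two flows sharing a link of capacity 1.0 (illustrative).
ALPHA, BETA, CAPACITY = 0.01, 0.5, 1.0

x1, x2 = 0.8, 0.1          # deliberately unfair starting allocations
for step in range(2000):
    if x1 + x2 > CAPACITY:
        # Multiplicative decrease: move back toward the origin along the same
        # ray (the ratio x1/x2 is unchanged, but the absolute gap shrinks).
        x1, x2 = BETA * x1, BETA * x2
    else:
        # Additive increase: move parallel to the 45-degree line, which nudges
        # the ratio x1/x2 toward 1 -- this is where fairness improves.
        x1, x2 = x1 + ALPHA, x2 + ALPHA

print(round(x1, 3), round(x2, 3))   # the two allocations end up nearly equal
```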

Background required: Basic math. Some basic system/network theory?

Discussion points: I wonder how this kind of knowledge gets disseminated. These authors seem to come from a different community than the authors of the other paper - one very much theory oriented, the other implementation oriented. Did they discover the additive increase/multiplicative decrease idea separately? Perhaps we should lend a patient ear to the latest theory to come out of Cory Hall? Maybe become conversant in the "network theory" language in the first place?


CONGESTION AVOIDANCE AND CONTROL

Van Jacobson and Michael J. Karels

Summary: Offers justification and experimental verification of several congestion control mechanisms that have since become standard. These include slow start - self-clocking and much better than additive increase during congestion avoidance; RTT estimation - basically a low-pass filter with a filter gain constant, with no justification offered for why a value of 0.9 is chosen (I recall from other papers that the parameter needs tuning); and exponential backoff on retransmissions - reaching stability faster (is this not the standard implementation these days?). Outlines additive increase/multiplicative decrease in a different way. Includes pretty convincing comparative data, and the appendices give details of the algorithm implementations.
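For reference, the RTT estimator in question is just an exponentially weighted moving average plus a timeout derived from it (standard formulas; the 0.9 is the gain mentioned above, and the mean-deviation refinement is the one Jacobson argues for):

```latex
% EWMA RTT estimate with gain \alpha (the suggested \alpha = 0.9):
SRTT \leftarrow \alpha \cdot SRTT + (1 - \alpha)\, RTT_{\text{sample}}, \qquad
RTO = \beta \cdot SRTT \;\; (\beta \approx 2 \text{ in the old scheme}).
% Jacobson's refinement also tracks the mean deviation so the timeout adapts
% to RTT variance:
RTTVAR \leftarrow (1 - g)\, RTTVAR + g\,\lvert RTT_{\text{sample}} - SRTT\rvert, \qquad
RTO = SRTT + 4 \cdot RTTVAR.
```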

Background required: Basic TCP.

Discussion points: As noted for the previous paper. A potential criticism is the lack of description of their data-gathering setup; if we wanted to replicate their experiment we would have to contact them for the details. Come to think of it ... given that their mechanisms have mostly become the standard, how WOULD we replicate their experiment today?

We could possibly include as optional reading the paper "Simulation-based Comparisons of Tahoe, Reno, and SACK TCP" by Kevin Fall and Sally Floyd. That paper basically outlines the next chapter in the congestion story - Reno, New Reno, SACK.


Thursday, September 4, 2008

2008-09-04 Readings - Interdomain Routing


ON INFERRING AUTONOMOUS SYSTEM RELATIONSHIPS IN THE INTERNET,

Lixin Gao


Summary: A study to infer the AS-level topology of the Internet using BGP routing tables. Obtains BGP routing tables from the publicly available Route Views router in Oregon (a limited number of such vantage points exist). Classifies AS relationships into provider-customer (I route for you), sibling (I route for you and you route for me), and peer (we don't provide transit for each other). Uses notions such as uphill and downhill paths; a transit relationship exists going uphill and going downhill. Identifies the top provider in a path by AS size, given by AS degree (level of connectivity to other ASes). A sibling relationship exists if there is mutual transit; a peering relationship exists if two ASes are at the top of the AS path and their degrees differ by no more than a factor R. Results are verified as close to 100% accurate against AT&T internal AS info; WHOIS lookups further confirm sibling relationships, although sibling inference is the weakest.
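My mental model of the basic inference, as a rough sketch (this simplifies the paper's actual algorithm, and the data structures are mine): split each AS path at its highest-degree AS, treat the left side as uphill (customer-to-provider edges) and the right side as downhill, and call mutual transit a sibling candidate.

```python
from collections import defaultdict

# transit[(u, v)] = True means we inferred that v provides transit for u.
transit = defaultdict(bool)

def annotate_path(as_path, degree):
    """Coarse version of the inference: split the path at its highest-degree AS."""
    top = max(range(len(as_path)), key=lambda i: degree[as_path[i]])
    for i in range(len(as_path) - 1):
        u, v = as_path[i], as_path[i + 1]
        if i < top:
            transit[(u, v)] = True    # uphill: v looks like a provider of u
        else:
            transit[(v, u)] = True    # downhill: u looks like a provider of v

def classify(u, v):
    """Pairwise relationship after all paths have been annotated."""
    if transit[(u, v)] and transit[(v, u)]:
        return "sibling"              # mutual transit observed
    if transit[(u, v)]:
        return "customer->provider"   # v is u's provider
    if transit[(v, u)]:
        return "provider->customer"
    return "peer or unknown"          # the degree-ratio test R would refine this
```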

Background required: BGP, which is presented in the other reading.

Criticism and discussion points: The R parameter in the peering inference is really ugly and somewhat haphazard, as the author acknowledges. Still, it would be good to have somewhat sound logic behind it. The paper does not explain why sibling inference is so inaccurate even though the logic seems air-tight. The results are pretty convincing, but if the assumption that each AS path contains at most one peering link is wrong, then the peering inference would be totally messed up. The usual inference evaluation would have a training set for tuning R and a separate test set; their dataset and the nature of their experiment prevent this.

I wonder if ISPs have done similar work to infer AS relationships using their own routing tables and data.

Just had a mischievous insight ... Suppose an ISP wants to keep its peering/sibling/customer info secret ... and WHOIS is public ... and the method is published ... and the method is scientifically sound ... then the black hats can do a lot of damage with this info. I wonder what Vern Paxson would think of this ...


INTERDOMAIN INTERNET ROUTING,

Hari Balakrishnan, Nick Feamster


Summary: Explains BGP in most of its gory detail ... Includes notes on ASes, route export/import, design goals, AS relationships, eBGP/iBGP, attributes, multi-homing issues, convergence problems, etc. Descriptive textbook reading.
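The one piece of the notes worth committing to memory is how business relationships drive route export; a paraphrase as a tiny predicate (my own wording, not the notes' code):

```python
def should_export(route_learned_from, neighbor):
    """Typical relationship-driven BGP export policy: routes learned from
    customers are exported to everyone (they pay us to be reachable), while
    routes learned from peers or providers are exported only to customers
    (we won't provide free transit between our peers/providers)."""
    if route_learned_from == "customer":
        return True
    return neighbor == "customer"
```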

Background required: Need to know Intra-AS routing etc.

Criticisms and discussion points: Textbook material ... the technology is what it is ... not much to say here ... Talk about multi-homing issues, perhaps? They're real enough ...




Tuesday, September 2, 2008

2008-09-02 Readings - Internetworking Principles


END-TO-END ARGUMENTS IN SYSTEM DESIGN,


J.H. Saltzer, D.P. Reed and D.D. Clark


Summary: Articulates the end-to-end principle/argument, i.e., place functionality at the edges or end points of the network instead of in the middle. Provides examples to justify the argument. The basic logic is that the functionality is needed at the application level, and so should be provided at the application level to really guarantee correctness; redundant provision of the same functionality at lower levels or in the network need not be completely "correct" anyway. Examples include reliable communication, delivery guarantees, secure transmission, duplicate message suppression, FIFO delivery, transaction management, etc.

Background required: None.

Criticism & Discussion Points: A pretty long-winded way to put forward a simple argument. Perhaps a cleaner approach would be a needs-definitions-premises-reasoning-conclusions-examples format. The examples in the paper are good. Would be good to talk about the best way to gauge the trade-offs of putting functionality in lower layers, and about deciding which functionality cannot be put end-to-end. Implications for future research - how much stuff to put end-to-end? What are the goals of the network, and how does end-to-end help accomplish them? Resistance to failure?



THE DESIGN PHILOSOPHY OF THE DARPA INTERNET PROTOCOLS,

David D. Clark


Summary: Outlines the original goals of the DARPA Internet and discusses at length their implications for the Internet architecture. The fundamental goal was an "effective technique for multiplexed utilization of existing interconnected networks". Second-level goals include survivability in the face of failure, accommodating different types of service, and interconnecting a variety of networks. Other goals, including distributed management, cost effectiveness, low-effort host attachment, and accountability, are less effectively realized. The original Internet was a product of military thinking - a minimalist design - while its current use is civilian. Somewhat mismatched priorities.

Background required: None.

Criticism & Discussion Points: We need more historical papers like this - tracing knowledge back to its beginnings. The mismatch between the original design goals and current uses is definitely worth discussion. Perhaps of even more interest is design methodology. Perhaps military application projects tend to get at the core fundamentals of a flexible, survivable technology, unhindered by economic, business, or other non-technical constraints? What's the best way to invent a new technology - in corporations, in research, or in the military?