
The Design Philosophy of DARPA Internet Protocols



Presentation Transcript


  1. The Design Philosophy of DARPA Internet Protocols David D. Clark.

  2. History • DARPA’s initiative to build a suite of protocols began 30 years ago. • Intended for packet switched networks. • Widely used in military and commercial systems. • Author joined project in mid-70’s, took over architectural responsibility in 1981.

  3. Fundamental Goals • Interconnectivity : provide some larger service by integrating existing, autonomous networks. • Multiplexing technique : packet switching. • Packet switching suited the intended applications (e.g., remote login / rlogin). • The existing networks to be interconnected were themselves packet-switched. • Interconnecting switches would employ the “Store and Forward” technique. [Gateways]

  4. 2nd Level Goals • Survivability. • Support multiple types of service. • Accommodate a wide variety of networks. • Permit distributed management of resources. • Be cost-effective. • Permit host attachment with a low level of effort. • Permit resource accountability.

  5. 2nd Level Goals [Discussion] • Goals are given in order of importance. • Author stresses : “Interconnection first, Survivability next, the rest come later”. • (Hence) The original architects paid less attention to accountability and security. • Changing priorities can change the architecture. • Nowadays accountability is more important : QoS.

  6. Survivability • Communicating entities should be able to continue their conversation (in the face of failure) without re-establishing or resetting high-level state. • How? By masking transient failures : state recovery/resynchronization is hidden from the client (application). • Where to store the conversation state? • In the switches… requires replication… distributed… complex. • At the end-points [Fate-Sharing]. [End-To-End Argument]

  7. Survivability [contd.] • Chose Fate-Sharing. • Protects against any number of intermediate failures. • Easier to engineer. • Enter “Soft-State” : refresh-based mechanisms. • Fate-Sharing consequences for survivability : • gateways become stateless. • more trust placed in the host machine.
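The refresh-based soft-state idea can be made concrete with a small sketch. This is a hypothetical illustration (the class and names are invented, not from the paper): entries in a table expire unless their owner keeps refreshing them, so a restarted gateway simply rebuilds its state from the incoming refresh stream rather than from any saved hard state.

```python
import time

class SoftStateTable:
    """Toy soft-state store: entries vanish unless refreshed within TTL."""

    def __init__(self, ttl_seconds=30.0):
        self.ttl = ttl_seconds
        self.entries = {}  # key -> (value, last_refresh_time)

    def refresh(self, key, value):
        # Installation and refresh are the same operation: the owner
        # periodically re-announces its state.
        self.entries[key] = (value, time.monotonic())

    def lookup(self, key):
        item = self.entries.get(key)
        if item is None:
            return None
        value, stamp = item
        if time.monotonic() - stamp > self.ttl:
            # Stale: the owner stopped refreshing (or we crashed and
            # lost the entry); either way it is safe to forget it.
            del self.entries[key]
            return None
        return value
```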

  8. Types of Service • Support a variety of service types. • E.g. differing requirements in throughput (ftp), latency (rlogin) and reliability. • Real-time delivery of digitized speech tolerates occasional loss but not delay. • Clearly more than one transport protocol is required; a single one would be too complex and “fat” for most applications [mismatch between application needs and supplied primitives]. • Did not wish to bake in assumptions, since new service types could be added in the future… use the datagram as the basic building block. • “Best Effort delivery”.
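As a concrete touchpoint, UDP is essentially this best-effort datagram service exposed directly to applications. A minimal sketch (the loopback address and port are arbitrary): the send may be lost, duplicated, or reordered, and nothing in the network will retry it.

```python
import socket

# SOCK_DGRAM gives the raw best-effort service: one self-contained
# datagram, no connection state, no delivery guarantee.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"hello, best effort", ("127.0.0.1", 9999))
sock.close()
```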

  9. Varieties of Networks • The success of the “Internet” lay in its ability to incorporate and utilize a wide variety of network technologies. • Achieves flexibility by making a minimum set of assumptions about the networks : • the network can transport a packet/datagram. • packets are delivered with good but not perfect reliability. • the network has some form of addressing. • Other services can be built on top of these at the endpoints.
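One hypothetical way to state this “minimum set of assumptions” is as an interface: any technology that can carry one addressed packet with reasonable (not perfect) reliability can join the Internet; everything richer is built above it at the endpoints.

```python
from abc import ABC, abstractmethod

class MinimalNetwork(ABC):
    """The only service the architecture assumes of a constituent network."""

    @abstractmethod
    def send(self, dest_addr: bytes, packet: bytes) -> None:
        """Attempt to deliver one packet; may occasionally fail silently."""

# Reliability, ordering and flow control are deliberately NOT part of
# this interface -- they are layered on top at the endpoints.
```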

  10. Distributed Management • Several “Autonomous Systems (AS)” connected by gateways that are independently managed. • A 2-level routing hierarchy permits gateways from different organizations to exchange routing tables. • Organizations which manage gateways are not necessarily the same organizations that manage the networks to which the gateways are attached. • Private routing algorithms within the gateways of a single organization.
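A toy sketch of the two-level hierarchy (all names hypothetical): routing inside an organization's AS is a private matter; only reachability between ASes is what the organizations' gateways exchange.

```python
# Toy two-level routing lookup. Intra-AS routes are private to each
# organization; inter-AS reachability is exchanged between gateways.

INTRA_AS_ROUTES = {          # destination host -> next hop inside our AS
    "host-a": "router-1",
    "host-b": "router-2",
}
INTER_AS_ROUTES = {          # destination AS -> our border gateway toward it
    "AS-200": "gateway-east",
    "AS-300": "gateway-west",
}

def next_hop(dest_host, dest_as, local_as="AS-100"):
    if dest_as == local_as:
        return INTRA_AS_ROUTES[dest_host]   # private intra-AS decision
    return INTER_AS_ROUTES[dest_as]         # hand off to a border gateway
```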

  11. Cost Effectiveness & Accountability • Small data packets have large overheads (in terms of headers etc.). • Reliable communication system example from the end-to-end argument paper. • Accountability was only beginning to be studied at the time. (The author returned to it later : Differentiated Services.)
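The overhead point is easy to quantify: the minimum IPv4 plus TCP headers total 40 bytes, so a one-byte payload (a single remote-login keystroke) is almost all header, while a full-size payload amortizes the cost well.

```python
def header_overhead(payload_bytes, header_bytes=40):
    """Fraction of the packet consumed by headers (IPv4 20 B + TCP 20 B)."""
    total = payload_bytes + header_bytes
    return header_bytes / total

print(f"{header_overhead(1):.1%}")     # 1-byte keystroke: ~97.6% overhead
print(f"{header_overhead(1460):.1%}")  # full Ethernet payload: ~2.7%
```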

  12. Attaching a host to the Internet • The cost of attaching a host to the Internet is somewhat higher than in other architectures, because the service-providing mechanisms (e.g., reliable delivery via TCP) must be implemented in the host rather than inside the network. • Poor implementation of host-resident mechanisms can hurt the network as well as the host. • Used to be a limited, localized problem… has now grown big. • Fate-sharing is not good if the host misbehaves. [monitors/watch-dogs… security papers… later]

  13. Architecture & Realization • Architecture - the common design rules and protocols every participant must follow. • Realization - a particular set of networks, gateways and hosts which have been connected in the context of the Internet Architecture. • Wide variability among realizations in performance and characteristics. • The various autonomous networks constituting the Internet could have different topologies; redundancy could be added depending on need. • The Internet Architecture tolerates this variety by design.

  14. Implementation • Protocol verifiers confirm logical correctness but omit performance issues. • Even after demonstration of logical correctness, design changes (when faults are discovered) can cause performance degradation by an order of magnitude. • These difficulties arose because of tensions between architectural goals (not to constrain performance, permit variability) and because no formal tools for describing performance existed.

  15. Datagrams • Gateways do not need to maintain connection state. • Basic building block out of which a variety of services can be built. • Represent the minimum network service assumption. • The author repeatedly clarifies that the datagram is not intended to be a service in itself. [gives several examples]
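One classic example of a service built from datagrams is reliable delivery achieved at the endpoints. A minimal stop-and-wait sketch (the loss simulation and helper names are invented for illustration, not from the paper):

```python
import random

def unreliable_send(packet, loss_rate=0.3):
    """Stand-in for a best-effort network: drops packets sometimes."""
    return random.random() >= loss_rate   # True if "delivered"

def reliable_send(packet, max_tries=10):
    """Stop-and-wait: retransmit until the (simulated) ack arrives."""
    for attempt in range(max_tries):
        if unreliable_send(packet):       # an ack would come back here
            return attempt + 1            # number of tries needed
    raise TimeoutError("peer unreachable")

print(reliable_send(b"payload"), "transmission(s) needed")
```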

  16. TCP • Originally flow control was based on both bytes and packets. • Discarded… too complex. • TCP => byte stream. • Permit TCP to fragment a packet so it can pass through networks with smaller MTUs; this was later moved to the IP layer. • Permit small packets to be gathered into a bigger packet. • Allowed insertion of control information into the byte sequence space… so control could also be acknowledged… dropped. • In retrospect, regulation over both bytes and packets is needed : bytes to manage buffer space, packets to bound per-packet processing overhead.
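The byte-stream decision is what makes re-packetization legal: since sequence numbers count bytes rather than packets, any contiguous byte range is a valid segment, so several small packets can be coalesced into one on retransmission. A small sketch (function name hypothetical):

```python
def segments(stream: bytes, start_seq: int, mss: int):
    """Cut a byte stream into (seq, payload) segments of at most mss bytes.

    Because sequence numbers count bytes, the same data can be re-cut
    into different packet boundaries on retransmission, e.g. several
    small writes coalesced into one larger segment.
    """
    return [(start_seq + off, stream[off:off + mss])
            for off in range(0, len(stream), mss)]

# Same 10 bytes, first sent as five 2-byte packets, resent as one:
data = b"0123456789"
print(segments(data, start_seq=1000, mss=2))    # five small segments
print(segments(data, start_seq=1000, mss=536))  # one combined segment
```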

  17. TCP contd. • EOL Flag : • Original idea – break the byte stream into records. • Different records => different packets… goes against combining packets, especially on retransmission. • Hence the semantics were changed – “data up to this point is a record”. • Now 1 record => 1 or more packets, so combining packets became possible. • Various applications now had to invent their own way of delimiting records. • To handle a rare corner case, EOL was defined to use up all the sequence space up to the next value.
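Length-prefix framing is one common way applications ended up delimiting their own records over the undifferentiated byte stream; a minimal sketch (not from the paper):

```python
import struct

def frame(record: bytes) -> bytes:
    """Prefix each record with a 4-byte big-endian length."""
    return struct.pack("!I", len(record)) + record

def unframe(buffer: bytes):
    """Split a received byte stream back into records plus leftover bytes."""
    records = []
    while len(buffer) >= 4:
        (length,) = struct.unpack("!I", buffer[:4])
        if len(buffer) < 4 + length:
            break                          # record still arriving
        records.append(buffer[4:4 + length])
        buffer = buffer[4 + length:]
    return records, buffer

stream = frame(b"record one") + frame(b"record two")
print(unframe(stream))   # ([b'record one', b'record two'], b'')
```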

  18. Conclusions • The Internet architecture has been very successful. • Datagrams have solved high priority problems, but made it difficult to solve low priority ones. E.g. accountability. • This is due to the stateless nature of switches and the “elemental” nature of datagrams as a building block. • Author identifies a better building block : “Flow” • Accountability/Service differentiation : DiffServ.
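In today's terms, grouping packets into a “flow” usually means classifying by the 5-tuple; a gateway holding per-flow counters keeps exactly the kind of intermediate state the pure datagram model omitted. A hypothetical sketch:

```python
from collections import Counter

def flow_key(pkt):
    """Classify a packet into a flow by the conventional 5-tuple."""
    return (pkt["src_ip"], pkt["dst_ip"], pkt["proto"],
            pkt["src_port"], pkt["dst_port"])

# Per-flow state a datagram gateway would otherwise never keep:
counters = Counter()
counters[flow_key({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
                   "proto": "tcp", "src_port": 5000,
                   "dst_port": 80})] += 1
```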
