
The TCP/IP Model



Presentation Transcript


  1. The TCP/IP Model

  2. An exchange using the TCP/IP model (figure: application data, network-layer datagram, data-link frame)

  3. Types of Addresses in TCP/IP

  4. Relationship of layers and addresses in TCP/IP • Port number: well-known port nos. 0 – 1023; dynamic port nos. 1024 – 65535 • IP address: 192.168.200.4 • MAC address: 31:5C:BB:63:2A:D1

  5. Some well-known port numbers

  6. IP Packet (L3) and Ethernet Frame (L2)

  7. Port numbers (L4), IP Packet (L3) and Ethernet Frame (L2)

  8. Transport Layer Process-to-Process Delivery: UDP & TCP

  9. Transport layer (TL) • Transport layer: process-to-process delivery. (A process is an application program running on a host.) • Network layer: source-to-destination delivery of individual packets, each treated independently, with no relationship between those packets. • The transport layer ensures that packets belonging to an application arrive intact and in order, overseeing both error control and flow control. • A transport layer protocol can be either: • Connectionless: treats each segment as an independent packet and delivers it to the TL at the destination host (UDP). • Connection-oriented: the TL makes a connection with the destination host prior to packet delivery (TCP, SCTP – not covered here). • A message is usually divided into several segments. UDP treats each segment separately (unrelated), while TCP creates a relationship between the segments using sequence numbers. • Flow and error control in the TL are performed end to end.

  10. Process-to-process Communication

  11. Position of UDP, TCP, and SCTP in TCP/IP suite

  12. Transport Layer Addressing using Port Numbers • Client/server paradigm: a process on the client (local host) needs a service from a process running on the server (remote host). • Since the local or remote host can run several processes, we need to identify both the local host/local process and the remote host/remote process. • TL addressing uses port numbers (16 bits: 0 to 65535) to choose among multiple processes running on the destination host. • The destination port number is used for delivery; the source port number is used for the reply. • The client port number can be randomly assigned (e.g. ephemeral port number = 52000), but the server port number must be fixed for a server process (e.g. well-known port number = 13).
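To make the client/server addressing concrete, here is a minimal Python sketch (an illustration added here, not part of the original slides): a TCP client contacts a server process on the well-known daytime port 13, and the operating system picks an ephemeral source port for the client. The server IP address is taken from the earlier slide and is purely illustrative.

```python
import socket

# Minimal sketch: a client process contacting a server process on the
# well-known daytime port 13. The server IP address is illustrative only.
SERVER = ("192.168.200.4", 13)   # fixed, well-known destination port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(5)
    s.connect(SERVER)
    # The OS assigns an ephemeral source port (e.g. 52000) automatically;
    # the server uses it as the destination port of its reply.
    src_ip, src_port = s.getsockname()
    print("ephemeral source port chosen by the OS:", src_port)
    print(s.recv(1024).decode(errors="replace"))
```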

  13. Multiplexing / Demultiplexing • At the sender site, there may be several processes that need to send packets, but there is only one transport protocol at any time (many-to-one). • Multiplexing at the TL accepts packets from different processes, differentiating them by port numbers. After adding the header, the TL passes the packet to the NL. • At the receiver site, the relationship is the opposite (one-to-many), which requires a demultiplexing process. • After error checking and dropping of the header, the TL delivers each message to the appropriate process based on its port number.
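As an illustration of demultiplexing (a simplified sketch, not the actual kernel mechanism), the snippet below delivers incoming segments to a process based on the destination port number; the Segment type and the handler functions are hypothetical.

```python
from dataclasses import dataclass

# Illustrative transport-layer demultiplexing by destination port number.
@dataclass
class Segment:
    dst_port: int
    payload: bytes

def handle_dns(data):  print("DNS process got", data)
def handle_http(data): print("HTTP process got", data)

handlers = {53: handle_dns, 80: handle_http}   # port -> receiving process

def demultiplex(segment: Segment):
    handler = handlers.get(segment.dst_port)
    if handler is None:
        print("no process listening on port", segment.dst_port, "- segment dropped")
    else:
        handler(segment.payload)

demultiplex(Segment(80, b"GET /"))
demultiplex(Segment(53, b"query example.com"))
```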

  14. Reliable service using Error control • Reliability at the data link layer is between two nodes (the pink links in the figure). • Reliability at the transport layer ensures end-to-end reliability. • The network layer is unreliable (best-effort delivery) because it is only concerned with routing packets to the destination address.

  15. User Datagram Protocol (UDP)

  16. User Datagram Protocol (UDP) • The User Datagram Protocol (UDP) is called a connectionless, unreliable transport protocol. It does not add anything to the services of IP except process-to-process communication instead of host-to-host communication, with very limited error checking. • Why is UDP needed if it is so unreliable? It is a simple protocol with minimum overhead, and hence fast delivery, e.g. for a process that wants to send a small message and does not care about its reliability. • UDP packets have a fixed-size header of 8 bytes. UDP length = IP length – IP header length.
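Because the UDP header is a fixed 8 bytes (source port, destination port, length, checksum, each 16 bits), it can be illustrated directly. The sketch below packs such a header in Python; the port numbers are illustrative and the checksum computation is omitted.

```python
import struct

# Sketch: packing the fixed 8-byte UDP header (source port, destination port,
# length, checksum), each field 16 bits. Values are illustrative only.
def udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
    length = 8 + len(payload)   # UDP length = header (8 bytes) + data
    checksum = 0                # checksum computation omitted in this sketch
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = udp_header(52000, 53, b"hello")
print(len(hdr), "byte header, UDP length field =", struct.unpack("!HHHH", hdr)[2])
```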

  17. Encapsulation and decapsulation

  18. Multiplexing and demultiplexing

  19. Transmission Control Protocol (TCP)

  20. Transmission Control Protocol (TCP) • The Transmission Control Protocol (TCP) is called a connection-oriented, reliable transport protocol. • Like UDP, it is a process-to-process protocol that uses port numbers. • Unlike UDP, TCP creates a virtual connection between two TCPs to send data. In addition, it also uses flow and error control (reliable). • TCP is also a stream-oriented protocol: it allows sending and receiving streams of bytes that are related, as if through a 'virtual tube'. • The segments of a message are related using sequence numbers.

  21. TCP • TCP offers full-duplex service, in which data can flow in both directions at the same time. • Each TCP endpoint then has a sending and a receiving buffer, and segments move in both directions (mostly for flow and congestion control). • When site A wants to send data to and receive data from another site B: • The two TCPs establish a connection between them • Data are exchanged in both directions • The connection is terminated when finished • The connection is virtual, not physical. • TCP keeps track of the segments by sequence and ACK numbers. • Some TCP segments can carry a combination of data and control information (piggybacking), using a sequence number and an ACK. • These segments are used for connection establishment, termination, or abortion.
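A minimal sketch of the three phases using Python sockets follows: connection establishment (connect/accept), data transfer in both directions, and termination when the sockets are closed. The host and port are illustrative assumptions, not values from the slides.

```python
import socket, threading

# Sketch of the three phases: establishment (connect/accept), data transfer
# in both directions (full duplex), and termination (closing the sockets).
HOST, PORT = "127.0.0.1", 54321
ready = threading.Event()

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                        # server is ready to accept
        conn, _ = srv.accept()             # connection establishment completes
        with conn:
            data = conn.recv(1024)         # data transfer: receive
            conn.sendall(b"ack: " + data)  # data transfer: send back
        # connection terminated as the sockets are closed

threading.Thread(target=server, daemon=True).start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))              # three-way handshake happens here
    cli.sendall(b"hello")
    print(cli.recv(1024))                  # b'ack: hello'
```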

  22. Sequence Number The bytes of data being transferred in each connection are numbered by TCP to establish the relationship between the data bytes/segments being sent. The numbering starts with a randomly generated number between 0 and 2^32 − 1 (not necessarily from zero). A segment carries a range of bytes; the value in the sequence number field of a segment defines the number of the first data byte contained in that segment. The byte numbering is also used for flow and error control.

  23. Example Suppose a TCP connection is transferring a file of 5,000 bytes. The first byte is numbered 10,001. What are the sequence numbers for each segment if the data are sent in 5 equal segments? Solution Each segment carries 1,000 bytes, so the sequence numbers are: Segment 1 → 10,001 (10,001 to 11,000) Segment 2 → 11,001 (11,001 to 12,000) Segment 3 → 12,001 (12,001 to 13,000) Segment 4 → 13,001 (13,001 to 14,000) Segment 5 → 14,001 (14,001 to 15,000)

  24. Example Imagine a TCP connection is transferring a file of 6,000 bytes. The first byte is numbered 10,010. What are the sequence numbers for each segment if the data are sent in five segments, with the first four segments carrying 1,000 bytes each and the last segment carrying 2,000 bytes? Solution The following shows the sequence number for each segment: Segment 1 → 10,010 (10,010 to 11,009) Segment 2 → 11,010 (11,010 to 12,009) Segment 3 → 12,010 (12,010 to 13,009) Segment 4 → 13,010 (13,010 to 14,009) Segment 5 → 14,010 (14,010 to 16,009)
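The calculation can be reproduced with a few lines of Python; the helper below simply numbers each segment with the byte number of its first data byte, matching the worked example above.

```python
# Each segment's sequence number is the number of its first data byte.
def sequence_numbers(first_byte: int, segment_sizes):
    seq = first_byte
    for i, size in enumerate(segment_sizes, start=1):
        print(f"Segment {i} -> {seq:,} ({seq:,} to {seq + size - 1:,})")
        seq += size

sequence_numbers(10_010, [1000, 1000, 1000, 1000, 2000])
# Segment 1 -> 10,010 (10,010 to 11,009) ... Segment 5 -> 14,010 (14,010 to 16,009)
```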

  25. ACK Number • TCP is full-duplex: the two parties/sites can send and receive data at the same time. • Each party/site numbers the bytes (usually with a different starting byte/sequence number). • The sequence number in each direction shows the number of the first byte carried by that segment. • Each party also uses an acknowledgment number to confirm the bytes it has received. • The value of the acknowledgment field in a segment defines the number of the next byte a party expects to receive. • The acknowledgment number is cumulative: the party takes the number of the last byte that it has received safely, adds 1 to it, and announces this sum as the ACK number.
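The cumulative behaviour can be sketched as follows: the ACK number is the next byte the receiver expects in order, so a segment received beyond a gap does not advance it. The helper below is illustrative only.

```python
# Cumulative ACK sketch: acknowledge the last in-order byte received plus one.
# An out-of-order segment beyond a gap does not advance the ACK number.
def next_ack(received_ranges, first_byte):
    expected = first_byte
    for start, end in sorted(received_ranges):
        if start > expected:              # gap found: stop here
            break
        expected = max(expected, end + 1)
    return expected

# Bytes 10,010-11,009 and 12,010-13,009 arrived, but 11,010-12,009 is missing:
print(next_ack([(10_010, 11_009), (12_010, 13_009)], 10_010))   # 11010
```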

  26. TCP header format (figure). Header length: 20 bytes (no options) up to 60 bytes; HLEN field value between 5 and 15 (multiplied by 4 bytes); 6 different control flag bits; urgent pointer field for urgent data; window size of the receiving side.

  27. Control field: description of the flags in the control field

  28. TCP Connection • TCP is a connection-oriented transport protocol that establishes a virtual path between source and destination. Using a single virtual pathway for the entire message facilitates the ACK process and the retransmission of segments that are damaged, lost, or arrive out of order. • Although TCP uses the service of IP (connectionless) to deliver individual segments, it has full control of the connection. IP is unaware of any loss or retransmission of packets, but TCP is: TCP can hold segments until the missing segments arrive. • In TCP, connection-oriented transmission requires 3 phases: • Connection establishment • Data transfer • Connection termination • Each phase usually involves request and acknowledgment procedures.

  29. TCP Flow Control • Unlike UDP, TCP provides a flow control mechanism. • The receiver controls the amount of data to be sent by the sender, to prevent overflow at the destination (by announcing the value of the window size in the window size field of the TCP header). • Similar to the data link layer, TCP uses a sliding window (SW) and a numbering system to handle flow control, making transmission more efficient as well as controlling the flow of data. • However, this is done on an end-to-end basis. The SW protocol used is something between Go-Back-N and Selective Repeat. • Two differences from the data link layer: • The SW in TCP is byte-oriented, while the SW at the DLL is frame-oriented • The TCP SW is of variable size, while the DLL SW is of fixed size.

  30. Sliding window (figure: opening = moving the right wall to the right; closing = moving the left wall to the right; shrinking = moving the right wall to the left) The three activities (opening, closing, and shrinking) are under the control of the receiver (depending on congestion in the network), not the sender. The sender must obey the commands of the receiver in this matter. Opening allows more new bytes in the buffer to become eligible for sending. Closing means some bytes have been acknowledged and the sender need not worry about them anymore. Shrinking means shortening the window for congestion control purposes.
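A simplified, byte-oriented sender window can be sketched as below; it only models closing (ACKed bytes move the left wall right) and opening or shrinking (the advertised window rwnd moves the right wall), and is not a full TCP implementation.

```python
# Byte-oriented sliding-window sketch (illustrative, not a full TCP window).
class SendWindow:
    def __init__(self, first_unacked: int, rwnd: int):
        self.left = first_unacked            # first byte not yet acknowledged
        self.rwnd = rwnd                     # receiver-advertised window size

    @property
    def right(self):
        return self.left + self.rwnd         # first byte outside the window

    def on_ack(self, ack_no: int, new_rwnd: int):
        self.left = max(self.left, ack_no)   # closing: left wall moves right
        self.rwnd = new_rwnd                 # opening or shrinking per receiver

w = SendWindow(first_unacked=10_010, rwnd=3000)
print(w.left, w.right)          # 10010 13010
w.on_ack(11_010, new_rwnd=4000)
print(w.left, w.right)          # 11010 15010
```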

  31. TCP header format The window size field defines the size of the window, in bytes, that the other party must maintain. The 16-bit field allows up to 65,535 bytes. This is normally referred to as the receiving window (rwnd) and is determined by the receiver. The sender must obey the dictates of the receiver in this matter.

  32. TCP Error Control • Unlike UDP, TCP ensures reliable delivery using error control. • This means that an application program that delivers a stream of data to TCP relies on TCP to deliver the entire stream to the other side in order, without error, loss, or duplication. • Error control includes mechanisms for detecting corrupted segments, lost segments, out-of-order segments, and duplicated segments. • It also includes mechanisms for correcting errors after they are detected, achieved through the use of: • Checksum: to detect a corrupted segment and discard it. • Acknowledgment (ACK): to confirm the receipt of segments. • Time-out (TO): a timer set for retransmission of segments. • The heart of the error control mechanism is retransmission. • Retransmission happens when the retransmission timer expires, i.e. no ACK segment has been received from the other party. • There is no retransmission of, and no timer set for, an ACK segment.
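The retransmission idea can be sketched as a loop around a timer: if no ACK arrives before the timeout expires, the segment is sent again. The send and wait_for_ack callables below are hypothetical stand-ins, not a real TCP API.

```python
# Illustrative retransmission sketch: resend the segment each time the
# retransmission timeout (RTO) expires without an acknowledgment.
# 'send' and 'wait_for_ack' are hypothetical callables supplied by the caller.
def send_with_retransmission(segment, send, wait_for_ack, rto=1.0, max_tries=5):
    for attempt in range(1, max_tries + 1):
        send(segment)
        if wait_for_ack(timeout=rto):   # ACK received before the timer expired
            return True
        print(f"timeout on attempt {attempt}, retransmitting")
    return False                        # give up after repeated timeouts
```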

  33. TCP header format Error control fields: checksum, acknowledgment number. Lost and corrupted segments are treated the same way by the receiver: both are treated as lost! (Lost segments are discarded somewhere in the network by some router, and corrupted segments are discarded by the receiver itself.)

  34. Congestion Control & Quality of Service

  35. Congestion Control & QoS • Congestion control and QoS are two issues that are bound together closely; improving one improves the other, and ignoring one ignores the other. • They are not related to just the transport layer; all 3 layers are involved: • Data link layer • Network layer • Transport layer • We need to think of them as a cooperative property of these layers. • Both deal directly with managing data traffic.

  36. DATA TRAFFIC • The main focus of congestion control and quality of service is data traffic. • In congestion control we try to avoid traffic congestion. • In quality of service, we try to create an appropriate environment for the traffic. • So, before talking about congestion control and quality of service, we discuss the data traffic itself: traffic descriptors and traffic profiles.

  37. Traffic descriptors

  38. Three traffic profiles

  39. CONGESTION Congestion in a network may occur if the load on the network—the number of packets sent to the network—is greater than the capacity of the network—the number of packets a network can handle. Congestion control refers to the mechanisms and techniques to control the congestion and keep the load below the capacity. Congestion happens in any system that involves waiting or queuing.

  40. Queues in a router • Two issues: • If the rate of packet arrival is higher than the packet processing rate, the input queues become longer and longer. • If the packet departure rate is less than the packet processing rate, the output queues become longer and longer.
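The first issue can be shown with a tiny back-of-the-envelope simulation (the rates are illustrative only): whenever the arrival rate exceeds the processing rate, the queue length keeps growing.

```python
# Tiny simulation: when the packet arrival rate exceeds the processing rate,
# the input queue grows without bound. Rates are arbitrary illustrative values.
arrival_rate, processing_rate = 120, 100     # packets per time unit
queue = 0
for t in range(1, 6):
    queue = max(0, queue + arrival_rate - processing_rate)
    print(f"t={t}: queue length = {queue}")
# the queue grows by 20 packets every time unit -> congestion
```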

  41. Packet performance: packet delay and throughput as functions of load • Congestion control involves two factors that measure the performance of a network: • Delay: queuing delay, processing delay, and propagation delay • Throughput: the number of packets passing through the network in a unit of time

  42. TCP assumes that a lost segment is caused by congestion in the network. If the cause of the lost segment is congestion, retransmission of the segment not only fails to remove the cause, it aggravates it.

  43. CONGESTION CONTROL • Congestion control refers to techniques and mechanisms that can either prevent congestion, before it happens, or remove congestion, after it has happened. • In general, we can divide congestion control mechanisms into two broad categories: • Open-loop congestion control (prevention) and • Closed-loop congestion control (removal).

  44. Congestion control categories

  45. Congestion Control in TCP • So far, we have assumed that only the receiver can dictate the size of the sender's window, and the network entity has been ignored. If the network cannot deliver the data as fast as they are created by the sender, it must tell the sender to slow down. • In addition to the receiver, the network is a second entity that determines the size of the sender's window. Hence, the sender's window size is determined not only by the receiver but also by congestion in the network: Window size = min(rwnd, cwnd). • TCP's general policy for handling congestion is based on 3 phases: • Slow start: exponential increase • Congestion avoidance: additive increase • Congestion detection: multiplicative decrease • The sender starts at a slow rate but then increases the rate rapidly until a threshold; it then reduces to a linear rate of increase to avoid congestion. Finally, if congestion is detected, the sender goes back to the slow-start phase.

  46. 1. Slow start, exponential increase In the slow-start algorithm, the size of the congestion window increases exponentially until it reaches a threshold.

  47. 2. Congestion avoidance, additive increase In the congestion avoidance algorithm, the size of the congestion window increases additively until congestion is detected.
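The three phases can be sketched with a per-round-trip simulation of the congestion window (cwnd). This is a simplification (real TCP updates cwnd per ACK and distinguishes different loss signals), and the numbers are illustrative; recall from the earlier slide that the usable window is min(rwnd, cwnd).

```python
# Sketch of how cwnd evolves per round trip: slow start doubles cwnd until it
# reaches ssthresh, congestion avoidance then adds one MSS per RTT, and on
# detected congestion ssthresh is halved and the sender returns to slow start.
def simulate(rtts=12, mss=1, ssthresh=16, loss_at_rtt=8):
    cwnd = mss
    for rtt in range(1, rtts + 1):
        phase = "slow start" if cwnd < ssthresh else "congestion avoidance"
        print(f"RTT {rtt:2d}: cwnd = {cwnd:2d} MSS ({phase})")
        if rtt == loss_at_rtt:                  # congestion detected (e.g. timeout)
            ssthresh = max(cwnd // 2, 2 * mss)  # multiplicative decrease
            cwnd = mss                          # back to the slow-start phase
            continue
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + mss

simulate()
```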

  48. Congestion example

  49. QUALITY OF SERVICE (QoS) • The main focus of congestion control & QoS is traffic. • In congestion control we try to avoid traffic congestion. • In quality of service, we try to create an appropriate environment for the traffic. • QoS is an internetworking issue that has been discussed more than defined. We can informally define quality of service as something a flow seeks to attain; we look at flow characteristics and flow classes.

  50. Flow characteristics • Reliability: a vital characteristic that a flow needs. Losing reliability means losing a packet or an ACK, which entails retransmission. Some applications need reliability more than others; e.g. email, file transfer, and Internet access require reliable transmission more than audio conferencing does. • Delay: the degree of tolerance for late packets. Audio conferencing needs minimum delay, but delay in file transfer or email is less crucial. • Jitter: the variation in delay for packets belonging to the same flow. High jitter means the variation in delay between packets is large; low jitter means the variation is small. • Bandwidth: different applications need different bandwidths. Video transmission needs millions of bits per second to refresh the screen, while email may need only minimal bandwidth.
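Jitter can be illustrated with a small helper that treats it as the spread of per-packet delays (one common informal measure, not a formal standard definition); the delay values below are made up.

```python
from statistics import pstdev

# Jitter sketch: variation of delay among packets of the same flow, taken here
# as the standard deviation of per-packet delays in milliseconds.
def jitter(delays_ms):
    return pstdev(delays_ms)

print(jitter([20, 21, 20, 22, 21]))   # low jitter: delays are nearly equal
print(jitter([20, 80, 25, 150, 30]))  # high jitter: large variation in delay
```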
