
Implementation



Presentation Transcript


  1. Implementation

  2. Objectives • Understand protocol architecture • Identify the elements that have a major impact on performance • Be able to calculate limits on bandwidth for different architectures

  3. Seven Layer Model • Application: email, FTP, WWW • Presentation: integer size, big endian • Session: synchronization, name space • Transport: reliability, congestion control • Network: routing, addressing • Data Link: framing, errors • Physical: electrical signals • (Figure: the full seven-layer stack runs on the two end hosts, while the intermediate nodes between them run only the Network, Data Link and Physical layers.)

  4. Internet Architecture • Internet Engineering Task Force (IETF) • Application vs. application protocol (FTP, HTTP) • (Figure: applications such as FTP, HTTP, TFTP and NV sit on top of TCP or UDP, both of which run over IP; IP in turn runs over many underlying networks NET 1 … NET n, e.g. Ethernet or a modem link.)

  5. Process per protocol • Context switch at each protocol boundary • No semaphores needed • Each protocol has an incoming and an outgoing queue • Slow. The steps for the TCP input queue: 1) TCP blocks because its input queue is empty (context switch) 2) The upper layer puts a message in the TCP input queue and blocks on its own input queue (context switch) 3) TCP is awakened and scans its input queue 4) TCP processes the message and writes it to the input queue of the next layer, IP (context switch). A minimal sketch follows below.
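
A minimal sketch of the process-per-protocol structure described above, assuming POSIX threads: each protocol runs in its own thread and blocks on its input queue, and handing a message to the next layer means enqueueing it there, which costs a context switch per boundary. All names (queue_t, tcp_thread, and so on) are invented for illustration and not taken from any real kernel.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct msg { struct msg *next; char data[64]; int len; } msg_t;

    typedef struct {                        /* blocking FIFO: one per protocol      */
        msg_t *head, *tail;
        pthread_mutex_t lock;
        pthread_cond_t  nonempty;
    } queue_t;

    static void queue_put(queue_t *q, msg_t *m) {
        pthread_mutex_lock(&q->lock);
        m->next = NULL;
        if (q->tail) q->tail->next = m; else q->head = m;
        q->tail = m;
        pthread_cond_signal(&q->nonempty);  /* wake the protocol blocked on q       */
        pthread_mutex_unlock(&q->lock);
    }

    static msg_t *queue_get(queue_t *q) {
        pthread_mutex_lock(&q->lock);
        while (q->head == NULL)             /* 1) block while the queue is empty    */
            pthread_cond_wait(&q->nonempty, &q->lock);
        msg_t *m = q->head;
        q->head = m->next;
        if (q->head == NULL) q->tail = NULL;
        pthread_mutex_unlock(&q->lock);
        return m;
    }

    static queue_t tcp_in = { .lock = PTHREAD_MUTEX_INITIALIZER, .nonempty = PTHREAD_COND_INITIALIZER };
    static queue_t ip_in  = { .lock = PTHREAD_MUTEX_INITIALIZER, .nonempty = PTHREAD_COND_INITIALIZER };

    static void *tcp_thread(void *arg) {    /* one "process" per protocol           */
        (void)arg;
        for (;;) {
            msg_t *m = queue_get(&tcp_in);  /* 3) awakened when a message arrives   */
            /* 4) TCP processing (header, checksum, ...) would happen here, then    */
            /*    the message goes to the next layer's input queue                  */
            queue_put(&ip_in, m);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, tcp_thread, NULL);
        msg_t *m = calloc(1, sizeof *m);    /* 2) the upper layer enqueues a message */
        m->len = 5;
        queue_put(&tcp_in, m);
        msg_t *out = queue_get(&ip_in);     /* observe it arriving at the IP queue   */
        printf("IP queue received %d bytes\n", out->len);
        return 0;
    }

Build with -pthread. The sketch deliberately shows only one queue direction; a full implementation would also have the outgoing queues mentioned on the slide.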

  6. Process per message • Procedure call at each protocol boundary • Semaphores needed • Faster • Error prone because multiple threads run inside a single protocol. Each layer prepends its own header and then calls the layer below directly:
     Tcp.push(message) {
         Ip.push(TCPheader + message);
     }
     Ip.push(message) {
         Ethernet.push(IPheader + message);
     }

  7. Adding the header • IP.push copies the message into a new buffer, newbuffer, with room for the IP header in front:
     #include <stdlib.h>   /* malloc */
     #include <string.h>   /* memcpy */
     void IP_push(char message[], int msglen)
     {
         /* IP header = 10 bytes (the slide's simplification) */
         char *newbuffer = malloc(msglen + 10);
         memcpy(newbuffer, ipheader, 10);            /* copy the header first     */
         memcpy(newbuffer + 10, message, msglen);    /* then the message payload  */
     }

  8. Messages • (Figure) The message as sent on the wire: Ethernet Header | IP Header | TCP Header | Data. Each layer prepends its own header in front of what it received from the layer above.
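
A small sketch of that layout as a C struct, using the standard minimum header sizes (14-byte Ethernet, 20-byte IPv4, 20-byte TCP) and ignoring options and trailers; it only illustrates the nesting drawn on the slide and is not a definition from any networking stack.

    #include <stdint.h>

    /* Byte layout of a frame as drawn on the slide: outermost header first. */
    struct frame_layout {
        uint8_t ethernet_header[14];   /* added last, by the lowest layer    */
        uint8_t ip_header[20];         /* minimum IPv4 header                */
        uint8_t tcp_header[20];        /* minimum TCP header                 */
        uint8_t data[];                /* application payload, innermost     */
    };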

  9. Avoiding Copies • Assume a data rate of 600 Mbps ≈ 73 MBps • Assume a 16 MHz memory bus that is 16 bits wide; that gives 32 MBps of memory bandwidth • For a 1 MB message, one copy takes 1/32 sec, so the maximum data rate is 32 MBps • Two copies take 1/32 sec + 1/32 sec = 1/16 sec, so the maximum data rate drops to 16 MBps • Copies between user space and system space are unavoidable unless there is a special interface

  10. Another Case Study • Itanium processor with 6.4 GB/s memory bandwidth, about 1 GB/s available per processor • Each copy between layers takes a read and a write, i.e. two bytes of memory traffic per payload byte • One copy of a 1 KB packet therefore takes 2 usec, limiting throughput to 500 MB/sec = 4 Gbps • With 2 copies, 1 KB takes 4 usec = 250 MB/sec • With 4 copies, 1 KB takes 8 usec = 125 MB/sec = 1 Gbps
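
Both case studies reduce to the same formula: the achievable data rate is the memory bandwidth divided by the number of passes the data makes over the memory bus. A short sketch of that calculation, plugging in the figures from the two slides above (the function name is made up for illustration):

    #include <stdio.h>

    /* copies          = number of times each payload byte is copied
       passes_per_copy = bus passes per copy: slide 9 counts a copy as one pass,
                         slide 10 counts a read plus a write, i.e. two passes   */
    static double max_rate(double mem_bw, int copies, int passes_per_copy) {
        return mem_bw / (copies * passes_per_copy);
    }

    int main(void) {
        printf("Slide 9,  1 copy  : %.0f MB/s\n", max_rate(32e6, 1, 1) / 1e6);  /* 32  */
        printf("Slide 9,  2 copies: %.0f MB/s\n", max_rate(32e6, 2, 1) / 1e6);  /* 16  */
        printf("Slide 10, 1 copy  : %.0f MB/s\n", max_rate(1e9, 1, 2) / 1e6);   /* 500 */
        printf("Slide 10, 4 copies: %.0f MB/s\n", max_rate(1e9, 4, 2) / 1e6);   /* 125 */
        return 0;
    }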

  11. Technologies

  12. UNIX I/O Kernel Structure

  13. STREAMS • A STREAM is a full-duplex communication channel between a user-level process and a device • A STREAM consists of a STREAM head that interfaces with the user process, a driver end that interfaces with the device, and zero or more STREAM modules between them • Each module contains a read queue and a write queue • Message passing is used to communicate between queues
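
A minimal sketch of the structure just described, with invented type names rather than the actual SVR4 <sys/stream.h> definitions: every module (including the STREAM head and the driver end) owns a read queue and a write queue, and a message moves along the stream by being handed to the neighbouring queue's put routine.

    struct message {                   /* unit passed between queues               */
        struct message *next;
        char *buf;
        int   len;
    };

    struct queue {                     /* one direction: read (up) or write (down) */
        struct message *head, *tail;   /* messages waiting to be serviced          */
        struct queue   *next_q;        /* same-direction queue of the neighbour    */
        void (*put)(struct queue *, struct message *);   /* module's put routine   */
    };

    struct stream_module {             /* STREAM head, a pushed module, or driver  */
        struct queue rq;               /* read queue:  driver end -> user process  */
        struct queue wq;               /* write queue: user process -> driver end  */
    };

    /* Message passing between queues: sending downstream means calling the put
       routine of the next module's write queue.                                  */
    static void pass_downstream(struct stream_module *m, struct message *msg) {
        m->wq.next_q->put(m->wq.next_q, msg);
    }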

  14. The STREAMS Structure

  15. I/O Hardware • Incredible variety of I/O devices • Common concepts: port, bus (daisy chain or shared direct access), controller (host adapter) • I/O instructions control devices • Devices have addresses, used either by direct I/O instructions or by memory-mapped I/O (see the sketch below)
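
As an illustration of the memory-mapped alternative, a minimal sketch that polls a status register and then writes a data register of a hypothetical device; the addresses, register layout, and DEV_* names are all made up, since every real device defines its own.

    #include <stdint.h>

    #define DEV_STATUS ((volatile uint32_t *)0x10000000u)  /* hypothetical address */
    #define DEV_DATA   ((volatile uint32_t *)0x10000004u)  /* hypothetical address */
    #define DEV_BUSY   0x1u

    /* Ordinary loads and stores reach the controller because its registers are
       mapped into the address space; no special I/O instructions are needed.   */
    static void dev_write_word(uint32_t word) {
        while (*DEV_STATUS & DEV_BUSY)     /* poll until the controller is ready */
            ;
        *DEV_DATA = word;                  /* this store goes to the device      */
    }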

  16. A Typical PC Bus Structure

  17. Device I/O Port Locations on PCs (partial)

  18. Interrupts • The CPU interrupt-request line is triggered by an I/O device • An interrupt handler receives the interrupt • Maskable interrupts can be ignored or delayed; some interrupts are nonmaskable • An interrupt vector dispatches each interrupt to the correct handler, based on priority • The interrupt mechanism is also used for exceptions (a dispatch sketch follows below)
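
A minimal sketch of vector-based dispatch, with invented names (interrupt_vector, dispatch_interrupt); on real hardware the table format is fixed by the CPU (e.g. the Pentium event-vector table shown a couple of slides below) and the OS fills it with handler addresses.

    #include <stddef.h>

    typedef void (*isr_t)(int vector);

    #define NVECTORS 256
    static isr_t interrupt_vector[NVECTORS];     /* one handler slot per vector */

    static void register_handler(int vector, isr_t handler) {
        interrupt_vector[vector] = handler;
    }

    /* Conceptually called when the CPU takes an interrupt: the hardware supplies
       the vector number and the kernel indexes the table with it.               */
    static void dispatch_interrupt(int vector) {
        if (vector >= 0 && vector < NVECTORS && interrupt_vector[vector] != NULL)
            interrupt_vector[vector](vector);
        /* else: spurious interrupt; a real kernel would count and ignore it     */
    }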

  19. Interrupt-Driven I/O Cycle

  20. Intel Pentium Processor Event-Vector Table

  21. Direct Memory Access • Used to avoid programmed I/O for large data movements • Requires a DMA controller • Bypasses the CPU to transfer data directly between the I/O device and memory (a sketch follows below)
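
A minimal sketch of driving such a transfer, with made-up register names and addresses for a hypothetical DMA controller; real controllers differ, but the pattern is the same: the driver programs source, destination and byte count, starts the transfer, and is told by an interrupt when the controller has finished moving the data.

    #include <stdint.h>

    #define DMA_SRC    ((volatile uint32_t *)0x10010000u)  /* hypothetical registers */
    #define DMA_DST    ((volatile uint32_t *)0x10010004u)
    #define DMA_COUNT  ((volatile uint32_t *)0x10010008u)
    #define DMA_CTRL   ((volatile uint32_t *)0x1001000Cu)
    #define DMA_START  0x1u

    static volatile int dma_done;                /* set by the completion interrupt */

    static void dma_copy_from_device(uint32_t dev_addr, uint32_t buf_addr, uint32_t nbytes) {
        *DMA_SRC   = dev_addr;                   /* device-side source address      */
        *DMA_DST   = buf_addr;                   /* physical address of the buffer  */
        *DMA_COUNT = nbytes;                     /* bytes to move                   */
        dma_done   = 0;
        *DMA_CTRL  = DMA_START;                  /* controller moves the data,      */
        while (!dma_done)                        /* bypassing the CPU; the driver   */
            ;                                    /* would normally sleep here       */
    }

    /* Interrupt handler run when the controller signals completion. */
    static void dma_complete_isr(void) { dma_done = 1; }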

  22. Six Step Process to Perform DMA Transfer
