ECE/CS 372 – Introduction to Computer Networks, Lecture 5
Announcements: Lab 1 is due today. Lab 2 is posted and is due next Tuesday.
Acknowledgement: slides drawn heavily from Kurose & Ross
Chapter 1: recap
By now, you should know:
• the Internet and its components
• circuit-switching networks vs. packet-switching networks
• different network access technologies
• the three ISP tiers (Tier 1, 2, and 3)
• the layered architecture of networks
• types of delays and throughput analysis
Chapter 2: Application Layer
Our goals:
• conceptual aspects of network application protocols
  - transport-layer service models
  - client-server paradigm
  - peer-to-peer paradigm
• learn about protocols by examining a popular application-level protocol: HTTP
Some network apps
• e-mail
• web
• instant messaging
• remote login
• P2P file sharing
• multi-user network games
• streaming stored video clips
• voice over IP
• real-time video conferencing
• grid computing
Creating a network app
• write programs that
  - run on (different) end systems
  - communicate over the network
  - e.g., web server software communicates with browser software
• little software is written for devices in the network core
  - network core devices do not run user applications
[figure: the protocol stack (application, transport, network, data link, physical) shown on the end systems only, not in the network core]
Chapter 2: Application layer
• Principles of network applications
  - app architectures
  - app requirements
• Web and HTTP
• P2P file sharing
Application architectures
There are 3 types of architectures:
• Client-server
• Peer-to-peer (P2P)
• Hybrid of client-server and P2P
Client-server architecture
server:
• always-on host
• fixed/known IP address
• serves many clients at the same time
clients:
• communicate with the server only
• may be intermittently connected
• may have dynamic IP addresses
• do not communicate directly with each other
Examples of client-server applications: Google, Amazon, MySpace, YouTube
Pure P2P architecture
• no always-on server
• arbitrary end systems communicate directly
• peers are intermittently connected and change IP addresses
• example: BitTorrent
Pros and cons:
• scalable and distributed
• difficult to manage
• not secure
Hybrid of client-server and P2P
Skype
• voice-over-IP P2P application
• centralized server: finds the address of the remote party
• client-client connection: direct (not through the server)
Instant messaging
• chatting between two users is P2P
• centralized service: client presence detection/location
  - user registers its IP address with a central server when it comes online
  - user contacts the central server to find the IP addresses of buddies
Processes communicating
Process: a program running within a host.
• processes within the same host communicate using inter-process communication (managed by the OS)
• processes on different hosts communicate by exchanging messages
Client process: process that initiates communication
Server process: process that waits to be contacted
• Note: applications with P2P architectures have both client processes and server processes
Sockets
• a process sends/receives messages to/from its socket
• a socket is analogous to a door
  - the sending process shoves a message out the door
  - the sending process relies on the transport infrastructure on the other side of the door, which brings the message to the socket at the receiving process
• Application Programming Interface (API): (1) choice of transport protocol; (2) ability to fix a few parameters
[figure: two hosts, each running a process with its socket; the process and socket are controlled by the app developer, while the TCP layer with its buffers and variables below the socket is controlled by the OS]
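To make the socket abstraction concrete, here is a minimal sketch using Python's standard socket module (not from the slides; the port number 12000 and the message are made-up values). The application only writes to and reads from its socket; the OS-controlled transport layer carries the bytes to the socket on the other side.

```python
# Minimal sketch of the socket API: a server process that waits to be contacted
# and a client process that initiates communication. Port 12000 is hypothetical.
import socket

def run_server(port=12000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", port))           # attach the socket to a port on this host
        srv.listen(1)                  # wait for a client to connect
        conn, addr = srv.accept()      # a new socket ("door") for this client
        with conn:
            msg = conn.recv(1024)      # read the message pushed through the door
            conn.sendall(msg.upper())  # send a reply back out the door

def run_client(host="localhost", port=12000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, port))      # transport layer delivers bytes to the server socket
        cli.sendall(b"Hello")
        print(cli.recv(1024))          # b'HELLO'
```

Run run_server() in one process and run_client() in another; neither side manipulates anything below its socket.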
Addressing processes
• to receive messages, a process must have an identifier
• a host has a unique 32-bit IP address
• Q: does the IP address of the host on which the process runs suffice to identify the process?
• A: No, many processes can be running on the same host
• the identifier consists of:
  - IP address (host)
  - port number (process)
• example port numbers:
  - HTTP server: 80
  - Mail server: 25
• to send an HTTP message to the gaia.cs.umass.edu web server:
  - IP address: 128.119.245.12, port number: 80
• more on this shortly…
App-layer protocol defines
• types of messages exchanged, e.g., request, response
• message syntax: what fields are in messages & how fields are delineated
• message semantics: meaning of the information in the fields
• rules for when and how processes send & respond to messages
Public-domain protocols:
• defined in RFCs
• allow for interoperability
• e.g., HTTP, SMTP
Proprietary protocols:
• e.g., Skype
Question: why do we need an “app-layer protocol”?
What transport service does an app need?
Data loss/reliability
• some apps (e.g., audio) can tolerate some loss
• other apps (e.g., file transfer, telnet) require 100% reliable data transfer
Timing
• some apps (e.g., Internet telephony, interactive games) require low delay to be “effective”
Bandwidth
• some apps (e.g., multimedia) require a minimum amount of bandwidth to be “effective”
• other apps (“elastic apps”) make use of whatever bandwidth they get
Security
• what about it?!
Transport service requirements of common apps

Application             Data loss       Bandwidth                  Time sensitive
file transfer           no loss         elastic                    no
e-mail                  no loss         elastic                    no
Web documents           no loss         elastic                    no
real-time audio/video   loss-tolerant   audio: 5 kbps–1 Mbps;      yes, 100's of msec
                                        video: 10 kbps–5 Mbps
stored audio/video      loss-tolerant   same as above              yes, a few secs
interactive games       loss-tolerant   a few kbps and up          yes, 100's of msec
instant messaging       no loss         elastic                    yes and no
What services do Internet transport protocols provide?
TCP service:
• connection-oriented: setup required between client and server processes
• reliable transport between sending and receiving processes
• flow control: sender won’t overwhelm receiver
• congestion control: throttle sender when the network is overloaded
• does not provide: timing or minimum bandwidth guarantees
UDP service:
• unreliable data transfer between sending and receiving processes
• does not provide: connection setup, reliability, flow control, congestion control, timing, or bandwidth guarantee
Q: why bother? Why is there a UDP?
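The difference between the two services shows up directly in the socket API. The sketch below (not from the slides) contrasts them from the application's viewpoint; the host name and port numbers are placeholders, so treat it as illustrative rather than something to run against a real server.

```python
# TCP vs. UDP as seen by the application (placeholder host/ports).
import socket

# TCP: connection setup first, then a reliable, ordered byte stream.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("server.example.com", 80))     # handshake: connection-oriented
tcp.sendall(b"data arrives in order, or the sender finds out")
tcp.close()

# UDP: no setup; each datagram is independent and may be lost or reordered.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"best effort only", ("server.example.com", 5000))
udp.close()
```

The lack of setup and control machinery is exactly why some apps (e.g., simple query/response or loss-tolerant media) prefer UDP.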
Internet apps: application, transport protocols

Application               Application-layer protocol           Underlying transport protocol
e-mail                    SMTP [RFC 2821]                      TCP
remote terminal access    Telnet [RFC 854]                     TCP
Web                       HTTP [RFC 2616]                      TCP
file transfer             FTP [RFC 959]                        TCP
streaming multimedia      proprietary (e.g., RealNetworks)     TCP or UDP
Internet telephony        proprietary (e.g., Vonage, Skype)    typically UDP
Chapter 2: Application layer
• Principles of network applications
  - app architectures
  - app requirements
• Web and HTTP
• P2P file sharing
Web and HTTP
First, some terminology:
• a Web page consists of objects
• an object can be an HTML file, a JPEG image, a Java applet, an audio file, …
• a Web page consists of a base HTML file which includes several referenced objects
• each object is addressable by a URL (Uniform Resource Locator)
• example URL: www.someschool.edu/someDept/pic.gif, where www.someschool.edu is the host name and /someDept/pic.gif is the path name
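As a quick illustration (not from the slides), Python's standard library can split a URL into exactly these two parts; the http:// scheme is added so the parser recognizes the host.

```python
# Splitting the example URL into host name and path name.
from urllib.parse import urlparse

url = urlparse("http://www.someschool.edu/someDept/pic.gif")
print(url.hostname)  # www.someschool.edu  -> used to reach the server
print(url.path)      # /someDept/pic.gif   -> names the object on that server
```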
HTTP overview
HTTP: hypertext transfer protocol
• the Web’s application-layer protocol
• client/server model
  - client: browser that requests, receives, and “displays” Web objects
  - server: Web server sends objects in response to requests
• HTTP 1.0: RFC 1945
• HTTP 1.1: RFC 2068
[figure: a PC running Explorer and a Mac running Navigator each exchange HTTP requests and responses with a server running the Apache Web server]
HTTP overview (continued)
Uses TCP:
• client initiates a TCP connection (creates a socket) to the server, port 80
• server accepts the TCP connection from the client
• HTTP messages are exchanged between the browser (HTTP client) and the Web server (HTTP server)
• TCP connection closed
HTTP is “stateless”
• the server maintains no information about past client requests
aside: protocols that maintain “state” are complex!
• past history (state) must be maintained
• if the server/client crashes, their views of the “state” may be inconsistent and must be reconciled
HTTP connections
Nonpersistent HTTP
• at most one object is sent over a TCP connection
• HTTP/1.0 uses nonpersistent HTTP
Persistent HTTP
• multiple objects can be sent over a single TCP connection between client and server
• HTTP/1.1 uses persistent connections in its default mode
Nonpersistent HTTP
Suppose the user enters the URL www.someSchool.edu/someDepartment/home.index (contains text and references to 10 JPEG images)
1a. HTTP client initiates a TCP connection to the HTTP server (process) at www.someSchool.edu on port 80
1b. HTTP server at host www.someSchool.edu, waiting for a TCP connection on port 80, “accepts” the connection and notifies the client
2. HTTP client sends an HTTP request message (containing the URL) into the TCP connection socket; the message indicates that the client wants the object someDepartment/home.index
3. HTTP server receives the request message, forms a response message containing the requested object, and sends the message into its socket
Nonpersistent HTTP (cont.)
4. HTTP server closes the TCP connection
5. HTTP client receives the response message containing the HTML file and displays the HTML; parsing the HTML file, it finds the 10 referenced JPEG objects
6. Steps 1-5 are repeated for each of the 10 JPEG objects
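As an illustration of steps 1-5 (not from the slides), the sketch below issues the same request with a raw TCP socket. The host www.someSchool.edu is the slide's example name, not a real server, so treat this as a template rather than something to run as-is; HTTP/1.0 gives the nonpersistent behaviour, with the server closing the connection after one object.

```python
# One nonpersistent HTTP transaction over a raw TCP socket (example host).
import socket

host = "www.someSchool.edu"
request = ("GET /someDepartment/home.index HTTP/1.0\r\n"
           f"Host: {host}\r\n\r\n")

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((host, 80))            # step 1: TCP connection to port 80
    s.sendall(request.encode())      # step 2: HTTP request into the socket
    response = b""
    while True:                      # steps 3-5: read until the server closes
        chunk = s.recv(4096)
        if not chunk:
            break
        response += chunk
print(response[:200])                # status line and headers, then the object
```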
Non-persistent HTTP: response time
Definition of RTT (round-trip time): the time for a small packet to travel from client to server and back.
Response time:
• one RTT to initiate the TCP connection
• one RTT for the HTTP request and the first few bytes of the HTTP response to return
• file transmission time
Total response time = 2·RTT + file transmission time
[timing diagram: initiate TCP connection (one RTT), request file (one RTT plus time to transmit the file), file received]
Non-persistent HTTP: issues
Nonpersistent HTTP: name some issues?
• requires 2 RTTs per object
  - e.g., a 10-object page needs ~20 RTTs
• the TCP connection is opened/closed for each object => OS overhead
• Any ideas for improvement?
Persistent HTTP
• server leaves the connection open after sending a response
• subsequent HTTP messages between the same client/server are sent over the open connection
• reduces response time
Persistent without pipelining:
• client issues a new request only when the previous response has been received
• one RTT for each referenced object
Persistent with pipelining:
• default in HTTP/1.1
• client sends requests as soon as it encounters a referenced object
• as little as one RTT for all the referenced objects
Web and HTTP: review question
A requested Web page consists of:
• 1 base HTML object
• 2 referenced JPEG objects
• each object is of size 10^6 bits
• RTT = 1 second
• transmission rate = 1 Mbps
• consider the transmission delay of the objects only
Question: how long does it take to receive the entire page with
a) a non-persistent connection
b) a persistent connection without pipelining
c) a persistent connection with pipelining
Web and HTTP: review question
Answer (transmit time per object = 1 sec):
a) 3 + 3 + 3 = 9 sec — (initiate + request + transmit) for each of the 3 objects
b) 1 + 2 + 2 + 2 = 7 sec — initiate + (request + transmit) for each of the 3 objects
c) 1 + 2 + 3 = 6 sec — initiate + (request + transmit for the base file) + (one request for the 2 JPEGs + two transmit times, one for each object)
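A quick sanity check of the arithmetic (a sketch using exactly the numbers given above):

```python
# Response times for the three HTTP connection modes in the review question.
RTT = 1.0                # seconds
size = 1e6               # bits per object
rate = 1e6               # bits per second
transmit = size / rate   # 1 second per object
n_objects = 3            # 1 base HTML file + 2 JPEGs

non_persistent     = n_objects * (2 * RTT + transmit)        # 2 RTTs + transmit per object
persistent_no_pipe = RTT + n_objects * (RTT + transmit)      # one setup, then 1 RTT per object
# pipelining: setup, then base file, then one RTT for both pipelined requests
# followed by two back-to-back transmissions
persistent_pipe    = RTT + (RTT + transmit) + (RTT + 2 * transmit)

print(non_persistent, persistent_no_pipe, persistent_pipe)   # 9.0 7.0 6.0
```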
ECE/CS 372 – Introduction to Computer Networks, Lecture 6
Announcements: Lab 2 is due next Tuesday.
Acknowledgement: slides drawn heavily from Kurose & Ross
Web caches (proxy servers)
Goal: satisfy the client request without involving the origin server
• if a page is needed, the browser requests it from the Web cache
• Q: what if the object is not in the cache?
  - the cache requests the object from the origin server, then returns the object to the client
[figure: clients send HTTP requests to the proxy server; on a cache miss, the proxy forwards the request to the origin server and relays the response]
More about Web caching
• the cache acts as both client and server
• typically the cache is installed by an ISP (university, company, residential ISP)
Why Web caching?
• reduce response time for client requests
• reduce traffic on an institution’s access link
Caching example
Assumptions
• average object size = 0.1×10^6 bits
• average request rate from the institution to origin servers = 10/sec
• Internet delay = 2 sec
• institutional network: 10 Mbps LAN; access link to the public Internet: 1 Mbps; institutional cache available
Consequences
• utilization on the LAN = 10%
• utilization on the access link = 100%
• total delay = Internet delay + access delay + LAN delay = 2 sec + a very large access delay (the access link is saturated) + milliseconds
=> unacceptable delay!
Caching example (cont.)
Possible solution: increase the bandwidth of the access link to, say, 10 Mbps
Consequences
• utilization on the LAN = 10%
• utilization on the access link = 10%
• total delay = Internet delay + access delay + LAN delay = 2 sec + msecs + msecs
• often a costly upgrade
• total delay is still dominated by the Internet delay
Caching example (cont.)
Second possible solution: install a Web cache
• suppose the hit rate is 0.4 (typically between 0.3 and 0.7)
Consequences
• 40% of requests are satisfied almost immediately; 60% of requests are satisfied by the origin server
• utilization of the access link is reduced by 40%, giving a small access delay (on the order of tens of milliseconds)
• total delay = 0.4×(0.1) (LAN) + 0.6×(0.1 + 0.1 + 2) (LAN + access + Internet) ≈ 1.3–1.4 sec
• total average delay is reduced by about 40%
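A small back-of-the-envelope script tying the three scenarios together (values taken from the slides; the LAN and access delays are the rounded figures used above, not measurements):

```python
# Access-link utilization and average delay for the caching example.
obj_size = 0.1e6       # bits, average object size
req_rate = 10          # requests per second from the institution

util_access_1mbps  = req_rate * obj_size / 1e6    # scenario 1: 1.0 -> saturated link
util_access_10mbps = req_rate * obj_size / 10e6   # scenario 2: 0.1 -> costly upgrade

# Scenario 3: keep the 1 Mbps access link, add a cache with hit rate 0.4.
hit_rate       = 0.4
lan_delay      = 0.1   # sec, value used on the slide
access_delay   = 0.1   # sec, once the access link is only ~60% utilized
internet_delay = 2.0   # sec
avg_delay = (hit_rate * lan_delay
             + (1 - hit_rate) * (lan_delay + access_delay + internet_delay))

print(util_access_1mbps, util_access_10mbps, round(avg_delay, 2))  # 1.0 0.1 1.36
```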
Web cache (cont.)
Advantages are obvious:
• reduce response time
• reduce Internet traffic
Any problems with caches?
• local cached copies of Web pages may not be up-to-date
What do we do then? Solution:
• upon receiving a Web request, the cache must consult the origin server to check whether the requested page is up-to-date
• conditional GET method
  - what: sent by the cache to the origin server to check the page's status
  - when: for each new client request, the cache checks with the origin server
Conditional GET
Goal: don’t send the object if the cache has an up-to-date version
How: the cache specifies the date of its cached copy in the HTTP request:
  If-modified-since: <date>
Server: the response contains no object if the cached copy is up-to-date:
  HTTP/1.0 304 Not Modified
Message exchange:
• object not modified: cache sends an HTTP request with If-modified-since: <date>; server responds HTTP/1.0 304 Not Modified
• object modified: cache sends an HTTP request with If-modified-since: <date>; server responds HTTP/1.0 200 OK followed by <data>
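A sketch of how a cache might issue a conditional GET using Python's standard http.client (not from the slides; the host, path, and date are placeholder values):

```python
# Conditional GET: ask the origin server whether the cached copy is still fresh.
import http.client

conn = http.client.HTTPConnection("www.someschool.edu", 80)
conn.request("GET", "/someDept/pic.gif",
             headers={"If-Modified-Since": "Wed, 09 Sep 2015 09:23:24 GMT"})
resp = conn.getresponse()

if resp.status == 304:        # Not Modified: the cached copy is still good, no body sent
    print("serve the cached copy")
elif resp.status == 200:      # OK: the server returns the fresh object in the body
    data = resp.read()        # store this in the cache and serve it to the client
conn.close()
```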
Chapter 2: Application layer
• Principles of network applications
  - app architectures
  - app requirements
• Web and HTTP
• P2P file sharing
File sharing approaches
There are 2 approaches:
• Centralized: client-server architecture
• Distributed: P2P architecture (e.g., BitTorrent)
[figure: a server and peers such as Alice and Bob; peers obtain a list of peers from the server, then trade file chunks directly with one another]
File sharing: P2P vs. client-server architectures

                          Client-server               P2P
Robustness to failure     single point of failure     fault-tolerant
Scalability               not scalable                scalable
Security                  more secure                 less secure
Performance               bottleneck at the server    better
Comparing client-server and P2P architectures
Question: what is the file distribution time from one server to N hosts?
Notation:
• F: file size
• us: server upload bandwidth
• ui: client/peer i upload bandwidth
• di: client/peer i download bandwidth
• the network core is assumed to have abundant bandwidth
Client-server: file distribution time
• the server sequentially sends N copies: NF/us time
• client i takes F/di time to download
Time to distribute F to N clients using the client-server approach:
  dcs ≥ max { NF/us , F/min_i(di) }
• increases linearly in N (for large N)
Client-server: file distribution time
• we now show that the distribution time is actually equal to this bound:
  dcs = max { NF/us , F/min_i(di) }
• see board notes
P2P: file distribution time
• the server must send one copy: F/us time
• client i takes F/di time to download
• NF bits must be downloaded in total
• NF bits must be uploaded in total; the fastest possible aggregate upload rate (assuming all nodes send file chunks to the same peer) is us + Σ_{i=1..N} ui
Time to distribute F to N peers using the P2P approach:
  dP2P ≥ max { F/us , F/min_i(di) , NF/(us + Σ_{i=1..N} ui) }
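To see how the two bounds behave as N grows, here is a short sketch (the file size and bandwidth values are made-up assumptions, not from the slides):

```python
# Lower bounds on distribution time for client-server vs. P2P.
def d_cs(F, us, d_min, N):
    """Client-server: server uploads N copies; slowest client limits downloads."""
    return max(N * F / us, F / d_min)

def d_p2p(F, us, d_min, N, u_sum):
    """P2P: one server copy, slowest download, total upload capacity us + sum(ui)."""
    return max(F / us, F / d_min, N * F / (us + u_sum))

F = 1e9                           # 1 Gbit file (assumed)
us, d_min, ui = 10e6, 2e6, 1e6    # assumed bandwidths in bits/sec
for N in (10, 100, 1000):
    print(N, d_cs(F, us, d_min, N), d_p2p(F, us, d_min, N, N * ui))
```

Even for modest N, the client-server time grows linearly with N, while the P2P bound stays roughly flat because every new peer also contributes upload capacity.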
Comparing client-server and P2P architectures
[figure comparing the minimum distribution times of the two architectures as the number of peers N grows]
Chapter 2: Summary
We covered general concepts, such as:
• application architectures: client-server, P2P
• application service requirements: reliability, bandwidth, delay
• Web and HTTP: non-persistent and persistent connections, Web caches
• distribution time: client-server vs. P2P