
Web Cache



Presentation Transcript


  1. Characterizing Roles of Front-end Servers in End-to-End Performance of Dynamic Content Distribution Web Cache • 46842197 Li ZHANG • 78884704 Dakuo WANG • 30165502 Xuejie SUN • 37324635 Yang LIU

  2. Introduction of Web Cache Related Paper Overview 2.1 Summary Cache: A Scalable Wide-Area Web Cache Sharing Protocol 2.2 Going Viral: Flash Crowds in an Open CDN 3. Characterizing Roles of Front-end Servers in End-to-End Performance of Dynamic Content Distribution 3.1 Problem Definition 3.2 Motivation 3.3 Model 3.4 Result 3.5 Conclusion 3.6 Pros & Cons 4. Q & A

  3. CONCEPT Web cache is a mechanism for the temporary caching of web documents to reduce bandwidth usage, server load, and perceived lag. TYPES OF PROXY SERVER Forward proxies, Open proxies, Reverse proxies, Performance Enhancing Proxies USES OF PROXY SERVER To speed up access to resources. To control access to internal resources. To filter content. To hide the real IP. To circumvent Internet filtering and access content otherwise blocked by governments. To bypass access restrictions placed on one's own IP. Introduction to Web Caching (Proxy Server)

  4. Summary Cache: A Scalable Wide-Area Web Cache Sharing Protocol Li Fan, Member, IEEE, Pei Cao, Jussara Almeida, and Andrei Z. Broder

  5. Summary Cache: A Scalable Wide-Area Web Cache Sharing Protocol Li Fan, Member, IEEE, Pei Cao, Jussara Almeida, and Andrei Z. Broder Internet Cache Protocol (ICP) Simple cache sharing: fetch and store locally No load balancing Overhead: UDP messages (factor of 73 to 90), network traffic (8%-13%), client HTTP request latency (8%-12%) Summary Cache Each proxy stores a summary of its directory of cached documents in every other proxy On a cache miss, a proxy checks these summaries to see whether the document exists at another proxy Summary-Cache Enhanced ICP (SC-ICP) Adds a new opcode to ICP version 2 Introduces an additional header that follows the regular ICP header Modifies the Squid 1.1.4 software to implement the protocol
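
The summary-based lookup described above can be sketched in a few lines. The Summary Cache paper represents each summary as a Bloom filter; the class, parameter choices, and callback names below are illustrative placeholders, not the Squid/SC-ICP implementation.

```python
import hashlib

class BloomSummary:
    """Compact, lossy summary of a peer's cache directory (false positives possible)."""
    def __init__(self, size_bits=8192, num_hashes=4):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, url):
        # k hash positions derived from SHA-1 of a salted key
        for i in range(self.k):
            digest = hashlib.sha1(f"{i}:{url}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, url):
        for p in self._positions(url):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, url):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(url))


def handle_miss(url, peer_summaries, fetch_from_peer, fetch_from_origin):
    """On a local cache miss, consult the peers' summaries before contacting the origin."""
    for peer, summary in peer_summaries.items():
        if summary.might_contain(url):    # probable remote hit (may be a false positive)
            doc = fetch_from_peer(peer, url)
            if doc is not None:
                return doc                # genuine remote hit
            # otherwise it was a false positive; try the next peer
    return fetch_from_origin(url)         # no peer had the document
```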

  6. Going Viral: Flash Crowds in an Open CDN Patrick Wendell, Michael J. Freedman Flash Crowds on CoralCDN CoralCDN: an open Content Distribution Network (CDN) running at several hundred POPs Flash crowd: a period over which the request rate for a particular fully-qualified domain name increases exponentially, measured as the average per-minute request rate over a period ti Dataset: 4 years of CDN traffic, 33 billion HTTP requests Analysis conclusions: the potential benefits of cooperative vs. independent caching by CDN nodes; the efficacy of elastic redirection and resource provisioning; the ecosystem of portals, aggregators, and social networks
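
As a rough illustration of the flash-crowd definition above, the sketch below buckets request arrivals for one domain into per-minute rates and flags a flash crowd when the rate keeps growing by a constant factor over consecutive minutes. The growth factor and window count are invented thresholds, not values from the paper.

```python
from collections import Counter

def per_minute_rates(request_timestamps):
    """Bucket request arrival times (seconds) for one domain into per-minute counts."""
    counts = Counter(int(ts // 60) for ts in request_timestamps)
    start, end = min(counts), max(counts)
    return [counts.get(minute, 0) for minute in range(start, end + 1)]

def is_flash_crowd(rates, growth_factor=2.0, min_windows=3):
    """Heuristic: the per-minute rate grows by at least `growth_factor`
    for `min_windows` consecutive minutes."""
    streak = 0
    for prev, cur in zip(rates, rates[1:]):
        if prev > 0 and cur >= growth_factor * prev:
            streak += 1
            if streak >= min_windows:
                return True
        else:
            streak = 0
    return False
```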

  7. Going Viral: Flash Crowds in an Open CDN Patrick Wendell, Michael J. Freedman Flash Crowd Cacheability The degree of coordination among caches when fetching origin content CoralCDN uses a distributed hash table for global content discovery Commercial CDNs such as Akamai use non-cooperative caching, where each remote proxy independently fetches content from the origin site Cooperative caching sends fewer requests to the origin site, at the cost of higher complexity and additional overhead
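
A minimal sketch of the two coordination styles just described. The cooperative path hashes each URL to one responsible node, which is a simplification of CoralCDN's actual (sloppy) distributed hash table; the callback names are placeholders.

```python
import hashlib

def responsible_node(url, nodes):
    """Cooperative caching: a deterministic hash maps each URL to one node,
    so every proxy looks for the content in the same place before the origin."""
    digest = int(hashlib.sha1(url.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

def cooperative_fetch(url, nodes, local_cache, fetch_from_node, fetch_from_origin):
    if url in local_cache:
        return local_cache[url]
    owner = responsible_node(url, nodes)
    # only the owner's miss results in a request to the origin site
    doc = fetch_from_node(owner, url) or fetch_from_origin(url)
    local_cache[url] = doc
    return doc

def independent_fetch(url, local_cache, fetch_from_origin):
    """Non-cooperative caching: every proxy that misses goes straight to the origin."""
    if url not in local_cache:
        local_cache[url] = fetch_from_origin(url)
    return local_cache[url]
```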

  8. Going Viral: Flash Crowds in an Open CDN Patrick Wendell, Michael J. Freedman Flash Crowd Cacheability

  9. The Motivations Most content on the Internet is stored at data centers in the cloud, and much of it is generated dynamically in response to user requests. The scale and cost of building and operating large, powerful data centers keep increasing. One way to improve the overall response time is to deploy “proxy” servers closer to users. FE servers can be exploited to improve user-perceived performance because: 1) a portion of the dynamic content may actually be static, and thus can be cached and delivered immediately from the FE servers; 2) via split TCP connections, an FE server can establish a persistent TCP connection with the data center, which not only eliminates the effect of TCP slow-start between the FE and the BE, but also reduces the RTT between the user and the server the user's connection actually terminates at (the nearby FE).
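
The split-connection idea in point 2 can be sketched as a tiny front-end relay that terminates the user's TCP connection locally and reuses one persistent HTTP connection to the back-end, so the long FE-BE path pays the handshake and slow-start cost only once. This is a minimal sketch under assumed names: the back-end host is hypothetical and there is no error handling or concurrency.

```python
import http.client
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKEND_HOST = "backend.example.com"   # hypothetical BE data-center address

# One persistent (keep-alive) connection from the FE to the BE: the TCP handshake
# and slow-start on the long FE-BE path are paid once, not once per user request.
backend = http.client.HTTPConnection(BACKEND_HOST, 80)

class FrontEndRelay(BaseHTTPRequestHandler):
    def do_GET(self):
        # The user's (short-RTT) connection terminates here; the request is relayed
        # over the reused FE-BE connection and the dynamic response is sent back.
        backend.request("GET", self.path, headers={"Host": BACKEND_HOST})
        resp = backend.getresponse()
        body = resp.read()
        self.send_response(resp.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), FrontEndRelay).serve_forever()
```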

  10. The Problem The authors conduct an active, measurement-based comparative study of the Google and Microsoft Bing web search services. They use PlanetLab nodes to perform extensive measurements of the Google and Bing search services with a variety of search keywords, collecting the dynamically generated content and application-layer measurement data. They then use the collected data to analyze the role of the FE servers.

  11. How to solve it They develop an in-house user search query emulator, which performs exactly the same function as the web-based search box. They conduct extensive measurements by submitting the same search queries to both the Bing and Google search engines, and collect detailed TCP dumps with full application-layer payloads. They perform two sets of experiments: 1) In the first set, search queries are launched from all measurement nodes to their three default FE servers every 10 seconds. 2) In the second set, they fix one FE server (of Bing or Google, respectively) at a time, and launch queries from all measurement nodes to this server.
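
A much-simplified sketch of the first experiment: issue the same keyword query to each front-end every 10 seconds and record the total response time. The endpoints, keyword, and output format are placeholders, and the real emulator additionally captures full packet traces rather than just wall-clock timings.

```python
import time
import urllib.parse
import urllib.request

# Hypothetical endpoints; the study queried the FE servers that each
# PlanetLab node's DNS resolution of Bing and Google returned.
FRONT_ENDS = {
    "bing":   "https://www.bing.com/search?q={}",
    "google": "https://www.google.com/search?q={}",
}

def measure(keyword, interval_s=10, rounds=5):
    """Send the same keyword query to every FE each round and log the response time."""
    for _ in range(rounds):
        for name, template in FRONT_ENDS.items():
            url = template.format(urllib.parse.quote(keyword))
            start = time.monotonic()
            with urllib.request.urlopen(url, timeout=30) as resp:
                resp.read()                                   # pull the full dynamic page
            print(f"{name}\t{keyword}\t{time.monotonic() - start:.3f}s")
        time.sleep(interval_s)

if __name__ == "__main__":
    measure("planetlab")   # arbitrary example keyword
```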

  12. Content distribution • Content includes a static and a dynamic portion (i.e., the search results) • Static portion: HTTP header, HTML header, CSS style files, and the static menu bar • Dynamic portion: keyword-dependent menu bar, search results, and ads • The static portion is cached and delivered directly by the FE servers; the dynamic portion is generated by the BE data centers and then passed on to the FE servers for delivery • The experiments show that Tdynamic varies significantly with the type of search keyword used, whereas Tstatic is largely insensitive to it

  13. Several parameters: Tb: start of the TCP three-way handshake; T1: HTTP GET request sent; T2: packets received from the server; T3/T4: first/last packet containing the static portion received; T5/T6: first/last packet containing the dynamic portion received
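
One plausible way to turn these timestamps into the metrics used on the following slides, assuming Tstatic and Tdynamic are measured from the HTTP GET (T1) to the last static (T4) and last dynamic (T6) packet, and Tdelta is the gap between the end of the static portion and the start of the dynamic one. These exact formulas are an assumption for illustration, not quoted from the paper.

```python
from dataclasses import dataclass

@dataclass
class QueryTimestamps:
    """Per-query packet timestamps in seconds, named as on the slide."""
    Tb: float  # start of the TCP three-way handshake
    T1: float  # HTTP GET request sent
    T2: float  # first packets received from the server
    T3: float  # first packet of the static portion
    T4: float  # last packet of the static portion
    T5: float  # first packet of the dynamic portion
    T6: float  # last packet of the dynamic portion

def derived_metrics(t: QueryTimestamps):
    return {
        "T_static":  t.T4 - t.T1,            # time to receive the full static portion
        "T_dynamic": t.T6 - t.T1,            # time to receive the full dynamic portion
        "T_delta":   max(0.0, t.T5 - t.T4),  # idle gap between static end and dynamic start
        "RTT":       t.T1 - t.Tb,            # handshake duration approximates client-FE RTT
    }
```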

  14. Observation: • Tstatic depends mostly on the time to generate and deliver the static content portion at the FE server. • When RTT is small, Tdynamic is roughly a constant while Tdelta decreases as a function of RTT. • When RTT increases beyond a certain threshold, the dynamic content portion is received by the FE server before the static content portion has been entirely delivered to the client. Hence Tdynamic increases as a function of RTT, while Tdelta becomes zero.
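
The threshold behavior above can be reproduced with a toy model: the dynamic portion reaches the FE after a roughly RTT-independent fetch time, while delivering the static portion to the client takes several client-FE round trips. All constants below are invented for illustration only.

```python
def toy_model(rtt, t_fetch=0.20, static_round_trips=3):
    """Toy model: the dynamic portion is ready at the FE after t_fetch (independent of
    the client RTT), while the static portion needs several client-FE round trips."""
    static_done   = static_round_trips * rtt            # static portion fully at the client
    dynamic_ready = t_fetch                              # dynamic portion arrives at the FE
    t_delta   = max(0.0, dynamic_ready - static_done)    # gap observed by the client
    t_dynamic = max(dynamic_ready, static_done) + rtt    # last dynamic packet reaches the client
    return t_delta, t_dynamic

# Below the threshold (static_round_trips * rtt < t_fetch) Tdelta shrinks as RTT grows and
# Tdynamic grows only slowly; beyond it, Tdelta is zero and Tdynamic grows directly with RTT.
for rtt in (0.01, 0.05, 0.10, 0.20):
    print(rtt, toy_model(rtt))
```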

  15. Performance • The first cluster represents the three-way TCP handshake between the client and the FE server; the second and third clusters represent the delivery of the static and dynamic contents. • As the RTT increases, the gap between the end of the second cluster and the beginning of the third decreases, and eventually the two are lumped together.

  16. Google's FE servers are slightly farther from the clients, yet Google has significantly lower Tstatic and Tdynamic. • These results illustrate that placing FE servers closer to clients does not necessarily reduce Tstatic and Tdynamic. • The x-axis represents the PlanetLab nodes, and the y-axis represents the box-plot of the distribution over different samples. • The results show that, compared with Google, users of the Bing search service tend to experience slightly longer and more variable overall response times.

  17. Comparing Bing & Google performance, and discussion • The fetch time between Google FE servers and BE data centers tends to be smaller and more stable. In contrast, the fetch time between Akamai FE servers and Bing data centers tends to be larger and shows higher variability. • Although Bing places FE servers closer to clients, it has significantly higher Tstatic and Tdynamic compared to Google. A likely reason is the higher and more variable load at the Akamai FE servers, which Bing shares with other services. • The end-to-end performance is then determined solely by the FE-BE fetch time; Tfetch consists of two key components: Tproc (query processing time at the BE) and RTTbe (the FE-BE round-trip time).
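
The decomposition in the last bullet can be written out directly: Tfetch = Tproc + RTTbe, so a measured fetch time and an RTT estimate let one back out the BE processing time. The numbers below are purely illustrative.

```python
def fetch_time(t_proc, rtt_be):
    """Tfetch = Tproc + RTTbe: BE query-processing time plus the FE-BE round trip."""
    return t_proc + rtt_be

def estimate_t_proc(t_fetch, rtt_be):
    """Back out the BE processing time from a measured fetch time and an RTT estimate."""
    return max(0.0, t_fetch - rtt_be)

# Illustrative numbers only (seconds)
print(round(fetch_time(t_proc=0.080, rtt_be=0.030), 3))        # 0.11
print(round(estimate_t_proc(t_fetch=0.110, rtt_be=0.030), 3))  # 0.08
```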

  18. Several Results of This Paper • FE servers do not cache any dynamically generated search results; they cache only static information, such as the HTTP header and HTML header. • Placing FE servers closer to users can improve user-perceived performance. • There is a trade-off between the placement of FE servers and the FE-BE fetch time. • There is a threshold beyond which moving FE servers still closer to users is no longer helpful. • While placing FE servers closer to users can help reduce latency, other key factors, such as processing times, loads at the FE servers and BE data centers, and the quality of the connections between them, also play a critical role in determining the overall user-perceived performance. • Improving and optimizing these factors is important for overall user-perceived performance in dynamic content distribution, such as the dynamic generation of search results in response to user requests.

  19. Strong points of the paper • The paper investigated the role of FE servers in improving user-perceived performance of dynamic content distribution, which is emerging as the next big business for CDNs. • It developed a good, simple model-based inference framework to measure and quantify the front-end-to-back-end fetch time, which comprises the query processing time at the BE and the delivery time between the BE and the FE. • The authors used the Bing and Google search services and performed extensive network measurement and analysis based on several sets of experiments. • The paper also took into account the differences between Bing's and Google's FE deployments.

  20. Weaknesses of the Paper • The paper focuses on the standard search functions of the search engines. More recently, however, some search engines have introduced more advanced features, such as an interactive feature: after each letter the user types, a separate query is sent to the FE server, and subsequent queries are highly correlated. • Most of the nodes used for testing may introduce some unfairness between Bing and Google, because they are located closer to Bing's FE servers. • There was no significant packet loss during the measurements; in a high-loss-rate environment, placing FE servers closer to users might significantly improve the user-perceived end-to-end performance.

  21. Thanks Any Questions?
