
Implementation of a parallel web proxy server with caching


Presentation Transcript


  1. Implementation of a parallel web proxy server with caching Presented by: Kaushik Choudhary

  2. Outline • Introduction • Design Challenges • Design Challenges - Solutions • Implementation • Experimental Setup • Results

  3. Introduction • What is an HTTP/1.0 web-proxy? • Already covered by others. • How does it work? • Already covered by others. • What is a parallel web proxy with caching? • Already covered by others.

  4. Outline • Introduction • Design Challenges • Design Challenges - Solutions • Implementation • Experimental Setup • Results

  5. Design Challenges • Dozens of open-source implementations available online to take “inspiration” from. • System calls • Parallelism • Efficient Cache

  6. Design Challenges – System calls

  7. Design Challenges – Parallelism • The pseudo-code in the project description says “Spawn a worker thread to handle the connection” • What if there were 100 connection requests? • If there is a request queue, how do threads atomically access this queue?

  8. Design Challenges – Caching • Size of cache? • Eviction policy. • Atomic and efficient access to Cache.

  9. Outline • Introduction • Design Challenges • Design Challenges - Solutions • Implementation • Experimental Setup • Results

  10. Design Challenges – Parallelism solutions • If there are 100 requests, create a pre-determined number of threads and assign each request to a thread as a task. • Use locks to access the task queue. • Use locks to make cache access atomic and efficient.
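The thread-pool solution above can be sketched with pthreads: a fixed set of workers blocks on a mutex-protected task queue and a condition variable. This is a minimal illustration, not the deck's actual code; the `Task` struct, the commented-out `handle_connection`, and the `handled` counter are hypothetical placeholders.

```cpp
#include <pthread.h>
#include <queue>

// Hypothetical task: in the real proxy this would carry the client socket fd.
struct Task { int client_fd; };

static std::queue<Task> task_queue;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t queue_not_empty = PTHREAD_COND_INITIALIZER;
static bool shutting_down = false;
static int handled = 0;  // stand-in for real per-request work

// Worker loop: sleep until a task is available, then process it.
void* worker(void*) {
    for (;;) {
        pthread_mutex_lock(&queue_lock);
        while (task_queue.empty() && !shutting_down)
            pthread_cond_wait(&queue_not_empty, &queue_lock);
        if (task_queue.empty() && shutting_down) {
            pthread_mutex_unlock(&queue_lock);
            return nullptr;
        }
        Task t = task_queue.front();
        task_queue.pop();
        // handle_connection(t.client_fd);  // hypothetical handler
        (void)t;
        ++handled;
        pthread_mutex_unlock(&queue_lock);
    }
}

// Producer side: the accept loop pushes a task and wakes one worker.
void enqueue(Task t) {
    pthread_mutex_lock(&queue_lock);
    task_queue.push(t);
    pthread_cond_signal(&queue_not_empty);
    pthread_mutex_unlock(&queue_lock);
}
```

A fixed pool plus a locked queue bounds thread count even under 100 simultaneous requests, and the condition variable answers the earlier question of how threads access the queue atomically.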

  11. Design Challenges – Caching solutions • Currently stores at most 20 responses (web pages) [TODO – limit size] • Eviction policy used – LRU. • Store the pages and timestamps in a doubly linked list, move a node to the head when it is accessed, and use a hashmap to index the elements of the list.
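The doubly-linked-list-plus-hashmap scheme on this slide maps directly onto `std::list` and `std::unordered_map`. A minimal sketch, with hypothetical names, that caps the entry count (as the slide's 20-response limit does) rather than total byte size, and omits the timestamps and locking the real proxy would need:

```cpp
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

// Minimal LRU cache sketch: URL -> response body.
class LruCache {
public:
    explicit LruCache(size_t capacity) : capacity_(capacity) {}

    // On a hit, splice the node to the front (most recently used).
    bool get(const std::string& url, std::string& out) {
        auto it = index_.find(url);
        if (it == index_.end()) return false;
        entries_.splice(entries_.begin(), entries_, it->second);
        out = it->second->second;
        return true;
    }

    // Insert or refresh a response; evict the tail (least recent) when full.
    void put(const std::string& url, const std::string& body) {
        auto it = index_.find(url);
        if (it != index_.end()) {
            it->second->second = body;
            entries_.splice(entries_.begin(), entries_, it->second);
            return;
        }
        if (entries_.size() == capacity_) {
            index_.erase(entries_.back().first);
            entries_.pop_back();
        }
        entries_.emplace_front(url, body);
        index_[url] = entries_.begin();
    }

private:
    size_t capacity_;
    std::list<std::pair<std::string, std::string>> entries_;  // front = MRU
    std::unordered_map<std::string,
        std::list<std::pair<std::string, std::string>>::iterator> index_;
};
```

The hashmap gives O(1) lookup and the list gives O(1) move-to-head and tail eviction, which is what makes this the standard LRU structure.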

  12. Outline • Introduction • Design Challenges • Design Challenges - Solutions • Implementation • Experimental Setup • Results

  13. Implementation • Used the open-source object-oriented design from “http://www.home.no/adiv/index.htm” • Implemented pthreads and OpenMP versions for parallelism. • Implemented the cache as described above.
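A rough sketch of what the OpenMP variant might look like: one thread walks the pending connections and spawns an OpenMP task per request. The `handle_connection` comment and the `served` counter are placeholders, not the deck's actual code; the pragmas degrade to a serial loop if OpenMP is disabled.

```cpp
#include <vector>

static int served = 0;  // stand-in for real per-request work

// Spawn one OpenMP task per pending connection; tasks are picked up by the
// team of threads created by the parallel region.
void serve_all(const std::vector<int>& pending_fds) {
    #pragma omp parallel
    #pragma omp single
    for (int fd : pending_fds) {
        #pragma omp task firstprivate(fd)
        {
            // handle_connection(fd);  // hypothetical per-request handler
            (void)fd;
            #pragma omp atomic update
            ++served;
        }
    }
}
```

Compared with the hand-rolled pthreads pool, OpenMP tasks give roughly the same "fixed threads, many tasks" structure with far less queue and lock bookkeeping, which makes the two versions a fair performance comparison.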

  14. Outline • Introduction • Design Challenges • Design Challenges - Solutions • Implementation • Experimental Setup • Results

  15. Experimental Setup • Created command-line scripts to open 30 distinct non-HTTPS websites (google-chrome command line). • Distinct websites avoid interference from the browser cache. • Measured the total time to serve all requests in the threadless version, different pthreads versions, and the OpenMP version of the code. • Conducted experiments on two machines: a desktop (Core i5 2.4 GHz, 4 GB RAM) and a laptop (Core i5 2.67 GHz, 8 GB RAM).

  16. Outline • Introduction • Design Challenges • Design Challenges - Solutions • Implementation • Experimental Setup • Results

  17. Results – Access times (no-cache)

  18. Results – Speedup (desktop)

  19. Thank you!
