
Evaluation of Data and Request Distribution Policies in Clustered Servers

Presentation Transcript


  1. Evaluation of Data and Request Distribution Policies in Clustered Servers Adnan Khaleel and A. L. Narasimha Reddy Texas A&M University adnan,reddy@ee.tamu.edu

  2. Introduction • Internet use has skyrocketed • Roughly 74 MB/month of traffic in 1992 versus several gigabytes/hour today • This trend can be expected to continue in the coming years • The increasing load has placed burdens on hardware and software beyond their original designs

  3. Introduction (cont’d) • Clustered servers are a viable solution to this growing load

  4. Issues in Clustered Servers • Need to present a single server image • DNS aliasing, magic routers, etc. • Multiplicity of back-end servers: • How should data be organized on the back-ends? • How should incoming requests be distributed among the back-end servers?

  5. Issues in Clustered Servers (cont’d) • Data Organization • Disk Mirroring • Identical data is maintained on all back-end servers • Every machine can service any request without having to access files on other machines • Several redundant machines are present, giving good system reliability • Disadvantages • Inefficient use of disk space • The same data is cached on several nodes simultaneously

  6. Issues in Clustered Servers (cont’d) • Data Organization (cont’d) • Disk Striping • Borrowed from network file servers • The entire data space is divided over all the back-end servers, so a portion of a file may reside on several machines (see sketch below) • Reliability can be improved through parity protection • For large file accesses, load is distributed automatically • Better access times
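
A minimal sketch of how striping might map a file's blocks onto back-end nodes. The stripe unit size, the number of servers, and the per-file rotation rule are illustrative assumptions, not parameters from the paper.

```python
# Illustrative block-to-server mapping under disk striping (assumed values).
STRIPE_UNIT = 64 * 1024   # bytes per stripe unit (assumption)
NUM_SERVERS = 4           # back-end servers in the cluster (assumption)

def server_for_offset(file_id: int, offset: int) -> int:
    """Return the back-end holding the stripe unit at `offset` of `file_id`."""
    stripe_index = offset // STRIPE_UNIT
    # Rotate the starting server per file so large files spread load evenly.
    return (file_id + stripe_index) % NUM_SERVERS

# A 1 MB read touches every back-end, giving automatic load distribution:
servers_touched = {server_for_offset(42, off) for off in range(0, 1 << 20, STRIPE_UNIT)}
print(servers_touched)   # {0, 1, 2, 3}
```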

  7. Issues in Clustered Servers (cont’d) • Locality • Taking advantage of files already cached in a back-end server’s memory • For a clustered server system, this means requests accessing the same data should be sent to the same set of servers

  8. Issues in Clustered Servers (cont’d) • Distribution vs. Locality? • For a load-balanced system, distribute requests evenly among the back-end servers • To improve hit rate and response time, maximize locality • Current studies focus on only one aspect and ignore the other

  9. Request Distribution Schemes • Round Robin Request Distribution

  10. Request Distribution Schemes (cont’d) • Round Robin Request Distribution (cont’d) • Requests are distributed in a sequential manner (see sketch below) • Results in an ideal, even distribution • Does not take server loading into account • Variants: Weighted Round Robin, Two-Tier Round Robin • Cache hits are purely coincidental
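
A minimal sketch of plain and weighted round-robin dispatch, assuming hypothetical back-end names and weights; the weighted and two-tier variants mentioned above are only gestured at here.

```python
from itertools import cycle

# Hypothetical back-end names and weights (assumptions).
servers = ["be0", "be1", "be2", "be3"]
_rr = cycle(servers)

def round_robin() -> str:
    """Assign each incoming request to the next server in sequence."""
    return next(_rr)

# Weighted round robin: faster back-ends appear more often in the cycle.
weights = {"be0": 2, "be1": 1, "be2": 1, "be3": 1}
_wrr = cycle([s for s, w in weights.items() for _ in range(w)])

def weighted_round_robin() -> str:
    return next(_wrr)

print([round_robin() for _ in range(6)])           # be0 be1 be2 be3 be0 be1
print([weighted_round_robin() for _ in range(5)])  # be0 be0 be1 be2 be3
```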

  11. Request Distribution Schemes (cont’d) • Round Robin Request Distribution (cont’d) • Every back-end server ends up caching the entire content of the server • Unnecessary duplication of files in the caches • Inefficient use of cache space • Back-ends may see different queueing times due to uneven hit rates

  12. Request Distribution Schemes (cont’d) • File Based Request Distribution

  13. Request Distribution Schemes (cont’d) • File Based Request Distribution (cont’d) • Locality-based distribution • Partition the file space and assign a partition to each back-end server (see sketch below) • Advantages • Does not suffer from duplicated data in the caches • Depending on access patterns, can yield high hit rates
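
A minimal sketch of file-based distribution under the assumption that the file space is partitioned by hashing the requested path; the scheme only requires some static partitioning of the file space, so the hash is an illustrative choice, not the paper's method.

```python
import hashlib

NUM_SERVERS = 4   # assumed cluster size

def backend_for_file(path: str) -> int:
    """The requested file alone determines the back-end, so repeat requests hit the same cache."""
    digest = hashlib.md5(path.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SERVERS

# Every request for the same file goes to the same back-end:
assert backend_for_file("/img/logo.gif") == backend_for_file("/img/logo.gif")
```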

  14. Request Distribution Schemes (cont’d) • File Based Request Distribution (cont’d) • Disadvantages • How should the file space be partitioned? • Difficult to partition so that requests load the back-ends evenly • Dependent on client access patterns; no single partitioning scheme can satisfy all cases • Some files will always be requested more than others • Locality is the primary concern; distribution is ignored • The hope is that the partitioning itself achieves the distribution

  15. Request Distribution Schemes (cont’d) • Client Based Request Distribution

  16. Request Distribution Schemes (cont’d) • Client Based Request Distribution (cont’d) • Also locality based • Partition the client space and assign a partition to each back-end server (see sketch below) • Advantages and disadvantages similar to the file-based scheme • Difficult to find an ideal partitioning scheme • Ignores distribution
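
A minimal sketch of static client-based distribution, assuming clients are partitioned by their /24 address prefix; the actual partitioning rule is an assumption for illustration.

```python
import ipaddress

NUM_SERVERS = 4   # assumed cluster size

def backend_for_client(client_ip: str) -> int:
    """The client's address, not the requested file, selects the back-end."""
    prefix = int(ipaddress.ip_address(client_ip)) >> 8   # drop the host byte -> /24 prefix
    return prefix % NUM_SERVERS

# All clients on the same /24 network map to the same back-end:
assert backend_for_client("128.194.10.5") == backend_for_client("128.194.10.200")
```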

  17. Request Distribution Schemes (cont’d) • Client Based Request Distribution (cont’d) • A slight modification of the DNS mechanism used in the Internet • Allows flexibility in the client-server mapping • A TTL is set during the first name resolution • On expiration, the client is expected to re-resolve the name • Different TTLs could be used for different workload characteristics • However, clients ignore the TTL • Hence this is effectively a STATIC scheme

  18. Request Distribution Schemes (cont’d) • Locality Aware Request Distribution (LARD) [5] • Broadly based on the file-based scheme • Also addresses the issue of load balancing • Each file is assigned a dynamic set of servers instead of just one server

  19. Request Distribution Schemes (cont’d) • LARD (cont’d) • Technique (see sketch below) • On the first request for a file, assign the least loaded back-end • On subsequent requests for the same file • Determine the max/min loaded servers in the assigned set • If (max loaded server > high threshold OR a server exists in the cluster with load < low threshold) then add the cluster's least loaded server to the set and assign it to service the request • Else assign the min loaded server in the set to service the request • If any server in the set has been inactive for longer than time T, remove it from the set
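
A minimal sketch of the assignment rule described above. The load metric (outstanding requests), the threshold values, and the data structures are simplified assumptions; see [5] for the original LARD formulation.

```python
import time

# Illustrative thresholds: high/low load (outstanding requests) and idle time T (seconds).
T_HIGH, T_LOW, T_IDLE = 20, 5, 60.0

NUM_SERVERS = 4
load = {be: 0 for be in range(NUM_SERVERS)}     # back-end -> outstanding requests
server_set: dict[str, dict[int, float]] = {}    # file -> {back-end: last time it served the file}

def lard_assign(path: str) -> int:
    now = time.time()
    assigned = server_set.setdefault(path, {})

    # Drop servers that have been inactive for this file longer than T_IDLE.
    for be, last in list(assigned.items()):
        if now - last > T_IDLE:
            del assigned[be]

    if not assigned:
        # First request (or an expired set): pick the least loaded back-end in the cluster.
        target = min(load, key=load.get)
    else:
        max_be = max(assigned, key=lambda b: load[b])
        min_be = min(assigned, key=lambda b: load[b])
        cluster_min = min(load, key=load.get)
        if load[max_be] > T_HIGH or load[cluster_min] < T_LOW:
            # Grow the set with the cluster's least loaded server and use it.
            target = cluster_min
        else:
            target = min_be

    assigned[target] = now
    load[target] += 1   # the caller decrements load[target] when the request completes
    return target
```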

  20. Request Distribution Schemes (cont’d) • LARD (cont’d) • File-space partitioning is done on the fly • Disadvantages • A large amount of processing needs to be performed by the front-end • A large amount of memory is needed to maintain information on each individual file • A possible bottleneck as the system is scaled

  21. Request Distribution Schemes (cont’d) • Dynamic Client Based Request Distribution • Based on the premise that file reuse among clients is high • The static scheme is completely ignorant of server loads • We propose a modification to the static client-based distribution that actively modifies the distribution based on back-end loads

  22. Request Distribution Schemes (cont’d) • Dynamic Client Based (cont’d) • Uses a time-to-live (TTL) for server mappings within the cluster; the TTL is continuously variable • In heavily loaded systems • An RR-type distribution is preferable since queueing times predominate • TTL values should be small • In lightly loaded systems • TTL values should be large in order to maximize the benefits of locality

  23. Request Distribution Schemes (cont’d) • Dynamic Client Based (cont’d) • On TTL expiration, assign the client partition to the least loaded back-end server in the cluster • If more than one server has the same low load, choose randomly from that set • Allows a server using an IPRP [4] type protocol to redirect a client to another server if it aids load balancing • Unlike DNS, clients cannot bypass this mechanism • Hence - Dynamic

  24. Request Distribution Schemes (cont’d) • Dynamic Client Based (cont’d) • The trend in server load is essential to determine whether the TTL should be increased or decreased • Requests need to be averaged to smooth out transient activity • Moving-window averaging scheme (see sketch below) • Only requests that arrive within the window period contribute to the load calculation
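
A minimal sketch of the moving-window load average and a halving/doubling TTL adjustment; the window length, TTL bounds, and step rule are illustrative assumptions.

```python
from collections import deque

WINDOW = 10.0                  # seconds of history counted toward the load (assumed)
TTL_MIN, TTL_MAX = 1.0, 60.0   # bounds on the client-mapping TTL (assumed)

arrivals: deque = deque()      # arrival times of recent requests

def record_request(now: float) -> None:
    arrivals.append(now)

def windowed_load(now: float) -> float:
    """Requests per second over the last WINDOW seconds; older requests are dropped."""
    while arrivals and now - arrivals[0] > WINDOW:
        arrivals.popleft()
    return len(arrivals) / WINDOW

def adjust_ttl(ttl: float, load: float, prev_load: float) -> float:
    # Rising load -> shrink the TTL (toward round-robin behaviour);
    # falling load -> grow the TTL (to exploit locality).
    ttl = ttl / 2 if load > prev_load else ttl * 2
    return min(max(ttl, TTL_MIN), TTL_MAX)
```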

  25. Simulation Model • Trace-driven simulation model based on CSIM [8] • Modelled an IBM OS/2 system for various hardware parameters • Several parameters could be modified (see the sketch below): number of servers, memory size, CPU capacity in MIPS (50), disk access times, network communication time per packet, and data organization (disk mirror or stripe)
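
An illustrative grouping of the listed simulation parameters as a config object; apart from the 50 MIPS CPU capacity, the field names and default values are assumptions, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class SimulationConfig:
    num_servers: int = 4                 # number of back-end servers (assumed default)
    memory_mb: int = 128                 # per back-end memory/cache size (assumed default)
    cpu_mips: int = 50                   # CPU capacity in MIPS (from the slide)
    disk_access_ms: float = 10.0         # assumed average disk access time
    net_time_per_packet_ms: float = 0.5  # assumed network communication time per packet
    data_organization: str = "stripe"    # "stripe" or "mirror"
```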

  26. Simulation Model (cont’d) • Under both disk mirroring and disk striping, data is cached at the request-servicing nodes • Under disk striping, data is also cached at the disk-end nodes

  27. Simulation Model (cont’d) • Traces • Representative of two arenas where clustered servers are currently used • World Wide Web (WWW) servers • Network File System (NFS) servers

  28. Simulation Model (cont’d) • WEB Trace • ClarkNet WWW server, an ISP for the metro Baltimore - Washington DC area • Collected over a period of two weeks • The original trace had 3 million records • Non-HTTP-related records (e.g. CGI, ftp) were weeded out • The resulting trace had 1.4 million records • Over 90,000 clients • Over 24,000 files with a total occupancy of slightly under 100 MBytes

  29. Simulation Model (cont’d) • WEB Trace (cont’d) • Records had timestamps with 1-second resolution • This did not accurately represent the real manner of request arrivals • Requests that arrived in the same second were augmented with a randomly generated microsecond extension (see sketch below)
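
A minimal sketch of the timestamp augmentation described above; the uniform random microsecond offset is an assumption about how the extension was generated.

```python
import random

def augment_timestamps(seconds: list[int]) -> list[float]:
    """Add a random microsecond-resolution offset to each 1-second timestamp."""
    return sorted(s + random.randrange(1_000_000) / 1e6 for s in seconds)

# Three requests logged in the same second become distinct arrival times in [100, 101):
print(augment_timestamps([100, 100, 100]))
```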

  30. Simulation Model (cont’d) • NFS Trace • Obtained from an Auspex [9] file server at UC Berkeley • Consists of post-client-cache misses • Collected over a period of one week • Had 231 clients and over 68,000 files with a total occupancy of 1,292 MBytes

  31. Simulation Model (cont’d) • NFS Trace (cont’d) • The original trace had a large amount of backup activity at night and over weekends, so only daytime records were used in the simulation • Records had timestamps with microsecond resolution • The cache was allowed to warm up prior to any measurements being made

  32. Results - Effects of Memory Size • NFS Trace, Disk Stripe • Increasing memory increases the cache space • Figure: Response time for 4 back-end servers.

  33. Results - Effects of Memory Size • NFS Trace, Disk Stripe • FB is better at extracting locality • RR hits are purely probabilistic • Figure: Cache-hit ratio for 4 back-end servers.

  34. Results - Effects of Memory Size • WEB Trace, Disk Stripe • The WEB trace has a smaller working set • Increasing memory has less of an effect • Figure: Response time for 4 back-end servers.

  35. Results - Effects of Memory Size • WEB Trace, Disk Stripe • Extremely high hit rates, even at 32 MBytes • FB is able to extract maximum locality • The distribution scheme has less of an effect on response time • Load distribution was acceptable for all schemes, best for RR, worst for FB • Figure: Cache hit rates for a 4 back-end system.

  36. Results - Effects of Memory Size • WEB Trace, Disk Mirror • Very similar to disk striping • With smaller memory, hit rates are slightly lower as there is no disk-end caching • Figure: Disk stripe vs. disk mirror.

  37. Results - Scalability Performance • NFS Trace, Disk Stripe • RR shows the least benefit • Due to probabilistic cache hits • Figure: Effect of the number of servers on response time (128 MB memory).

  38. Results - Scalability Performance • NFS Trace, Disk Stripe • ROUND ROBIN • Drop in hit rates with more servers • Less “probabilistic” locality • Figure: Cache hit rate vs. memory size and number of back-end servers.

  39. Results - Scalability Performance • NFS Trace, Disk Mirror • RR performance worsens with more servers • All other schemes perform similarly to disk striping • Figure: Effect of the number of servers on response time (128 MB).

  40. Results - Scalability Performance • NFS Trace, Disk Mirror • For RR, lower hit rates with more servers - lower response time • For RR, disk-end caching offers better hit rates under disk striping than under disk mirroring • Figure: Cache hit rates for RR under disk striping vs. mirroring (128 MB).

  41. Results - Effects of Memory Size • NFS Trace, Disk Mirror • Similar effect of more memory • Stagnation of hit rates in FB; DM does better than DS due to caching of data at the disk end • RR exhibits better hit rates with DS than with DM, due to a greater variety of files in the cache • Figure: Cache hit rates with disk mirror and disk striping.

  42. Results - Disk Stripe vs. Disk Mirror • The implicit distribution of load in disk striping produces low disk queues • Figure: Queueing time under disk striping and disk mirroring (NFS trace, 4 back-end system).

  43. Conclusion & Future Work • RR gives ideal distribution but poor response times due to the probabilistic nature of its cache hits • File-based was the best at extracting locality, but its complete disregard of server loads gives poor load distribution • LARD is similar to FB but achieves better load distribution • For the WEB trace, cache hit rates were so high that the distribution scheme did not play a role in determining response time

  44. Conclusion & Future Work • Dynamic CB addressed the static CB scheme's ignorance of server loads: better distribution on the NFS trace, better hit rates on the WEB trace • Disk striping distributed requests over several servers, relieving disk queues but increasing server queues • We are in the process of evaluating a flexible caching approach with round-robin distribution that can exploit the file-based caching methodology • Throughput comparisons of the various policies • Impact of faster processors • Impact of dynamically generated web page content
