
Evaluation of Algorithms for the List Update Problem


Presentation Transcript


  1. Evaluation of Algorithms for the List Update Problem Suporn Pongnumkul, R. Ravi, Kedar Dhamdhere

  2. Overview • List Update Problem • Competitive Analysis • Average Case Analysis • Our Hybrid Model • Setup of our experiment • Results from our experiment • Conjecture

  3. L: y w z x v u List Update Problem • Self-organizing sequential search • Unsorted list • Serve a sequence of requests • Cost of accessing the ith element of the list is i.
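
To make the cost model concrete, here is a minimal Python sketch (the names are illustrative, not from the slides): accessing the item at position i of the unsorted list costs i.

```python
# Cost model sketch: serving a request for item x in list L costs
# the 1-based position of x in L.
def access_cost(L, x):
    return L.index(x) + 1   # positions are counted 1, 2, ..., n

L = ['y', 'w', 'z', 'x', 'v', 'u']
print(access_cost(L, 'v'))  # 5, since v is the fifth element
```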

  4. L: y w z x v u List Update Example

  5. L: y w z x v u List Update Example

  6. L: y w z x v u List Update Example

  7. L: y w z x v u List Update Example

  8. L: y w z x v u List Update Example

  9. L: y w z x v u List Update Example

  10. L: y v w z x u List Update Example

  11. L: y v w z x u List Update Example

  12. L: y v w z x u List Update Example

  13. L: y v w z x u List Update Example

  14. L: y v w z x u List Update Example

  15. L: y v z w x u List Update Example

  16. L: y v w z x u List Update Example

  17. Competitive Analysis • Definition: An analysis in which the performance of an online algorithm is compared to the best that could have been achieved if all the inputs had been known in advance.

  18. Competitive Ratio • A: our online algorithm, with cost CA(σ) • OPT: the optimal offline algorithm, with cost COPT(σ) • A is c-competitive if there is a constant a such that CA(σ) ≤ c·COPT(σ) + a for all request sequences σ

  19. Move-to-Front (MTF) • When an element is accessed, move it to the front of the list. • Theorem: [Sleator, Tarjan, 1985] MTF has competitive ratio 2 against the optimal offline algorithm.
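
A minimal sketch of MTF (illustrative Python, not the authors' code):

```python
def serve_mtf(L, requests):
    """Serve requests with Move-to-Front; return the total access cost."""
    total = 0
    for x in requests:
        i = L.index(x)          # 0-based position of the requested item
        total += i + 1          # accessing the (i+1)-th element costs i+1
        L.insert(0, L.pop(i))   # move the accessed item to the front
    return total

L = ['y', 'w', 'z', 'x', 'v', 'u']
print(serve_mtf(L, ['v', 'v', 'v']))   # 5 + 1 + 1 = 7
```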

  20. Average Case Analysis • Assume each request comes from a fixed probability distribution, independent of previous requests. Suppose the ith item has probability pi. Design algorithms to minimize the expected cost. • Optimal strategy is to keep the list sorted in non-increasing order of pi.

  21. STAT = Static List • List is sorted in non-increasing order of the probabilities • Never moves anything • Works well when we have a good estimate of the probability distribution.
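
A sketch of STAT together with the expected cost of a single request under a fixed distribution, E[cost] = Σ p_x · pos(x). The probabilities are the ones used in the MFBE example later in the transcript; the function names are illustrative:

```python
def stat_order(items, prob):
    """STAT: sort once in non-increasing order of probability, never move again."""
    return sorted(items, key=lambda x: -prob[x])

def expected_cost(L, prob):
    """Expected cost of one request: sum over x of prob[x] times the position of x."""
    return sum(prob[x] * (i + 1) for i, x in enumerate(L))

prob = {'y': 0.5, 'w': 0.2, 'z': 0.15, 'x': 0.1, 'v': 0.03, 'u': 0.02}
L = stat_order(list(prob), prob)   # ['y', 'w', 'z', 'x', 'v', 'u']
print(expected_cost(L, prob))      # 0.5*1 + 0.2*2 + 0.15*3 + 0.1*4 + 0.03*5 + 0.02*6
```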

  22. Need a new model? • Most real-world settings behave neither like a fixed discrete distribution nor like a worst-case sequence. • Can we design an algorithm that performs well in both the typical case and the worst case? • How could we analyze such algorithms?

  23. Hybrid Model • Assume a fixed probability distribution • For each request, with probability ε, let the adversary change the request • ε = 0: Average Case Analysis • ε = 1: Competitive Analysis • 0 < ε < 1: known probability distribution with uncertainty
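
A sketch of how one request could be drawn in this hybrid model; the adversary interface (a function that sees the current list and returns a request) is an assumption made here for illustration:

```python
import random

def hybrid_request(prob, eps, adversary, current_list):
    """With probability eps the adversary chooses the request; otherwise the
    request is sampled from the fixed distribution prob (a dict item -> probability)."""
    if random.random() < eps:
        return adversary(current_list)            # adversarial request
    items, weights = zip(*prob.items())
    return random.choices(items, weights)[0]      # distributional request
```

Setting eps = 0 recovers the average-case model and eps = 1 the purely adversarial one.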

  24. Hybrid Algorithm? • Parameterized by ε • Matches the best average-case performance when ε is low, and matches the best competitive ratio when ε is high.

  25. Move-From-Back-Epsilon • List initially sorted in non-increasing order of probabilities. • When an element x is accessed, promote it past all preceding elements whose probability is at most px + ε.
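
A sketch of MFBE as described above (an illustrative rendering, not the authors' code); on the example list below it reproduces the moves shown in the following slides:

```python
def serve_mfbe(L, prob, eps, requests):
    """Move-From-Back-Epsilon: promote the accessed item x past every
    predecessor whose probability is at most prob[x] + eps."""
    total = 0
    for x in requests:
        i = L.index(x)
        total += i + 1                          # access cost
        j = i
        while j > 0 and prob[L[j - 1]] <= prob[x] + eps:
            j -= 1                              # x may be promoted past L[j-1]
        L.insert(j, L.pop(i))                   # move x forward to position j
    return total

prob = {'y': 0.5, 'w': 0.2, 'z': 0.15, 'x': 0.1, 'v': 0.03, 'u': 0.02}
L = ['y', 'w', 'z', 'x', 'v', 'u']
serve_mfbe(L, prob, 0.2, ['x', 'v'])
print(L)   # ['y', 'v', 'x', 'w', 'z', 'u'], as in the example slides
```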

  26. L: y w z x v u MFBE Example (ε = 0.2) Prob: py = 0.5, pw = 0.2, pz = 0.15, px = 0.1, pv = 0.03, pu = 0.02 σ: v y z y

  27. L: y w z x v u MFBE Example (ε = 0.2) Prob: py = 0.5, pw = 0.2, pz = 0.15, px = 0.1, pv = 0.03, pu = 0.02 σ: x v y v

  28. L: y w z x v u MFBE Example (ε = 0.2) Prob: py = 0.5, pw = 0.2, pz = 0.15, px = 0.1, pv = 0.03, pu = 0.02 σ: x v y v

  29. L: y x w z v u MFBE Example (ε = 0.2) Prob: py = 0.5, px = 0.1, pw = 0.2, pz = 0.15, pv = 0.03, pu = 0.02 σ: x v y v

  30. L: y x w z v u MFBE Example (ε = 0.2) Prob: py = 0.5, px = 0.1, pw = 0.2, pz = 0.15, pv = 0.03, pu = 0.02 σ: x v y v

  31. L: y x w z v u MFBE Example (ε = 0.2) Prob: py = 0.5, px = 0.1, pw = 0.2, pz = 0.15, pv = 0.03, pu = 0.02 σ: x v y v

  32. L: y v x w z u MFBE Example (ε = 0.2) Prob: py = 0.5, pv = 0.03, px = 0.1, pw = 0.2, pz = 0.15, pu = 0.02 σ: x v y v

  33. L: y v x w z u MFBE Example (ε = 0.2) Prob: py = 0.5, pv = 0.03, px = 0.1, pw = 0.2, pz = 0.15, pu = 0.02 σ: x v y v

  34. L: y v x w z u MFBE Example (ε = 0.2) Prob: py = 0.5, pv = 0.03, px = 0.1, pw = 0.2, pz = 0.15, pu = 0.02 σ: x v y v

  35. L: y v x w z u MFBE Example (ε = 0.2) Prob: py = 0.5, pv = 0.03, px = 0.1, pw = 0.2, pz = 0.15, pu = 0.02 σ: x v y v

  36. L: y v x w z u MFBE Example (ε = 0.2) Prob: py = 0.5, pv = 0.03, px = 0.1, pw = 0.2, pz = 0.15, pu = 0.02 σ: x v y v

  37. L: y v x w z u MFBE Example (ε = 0.2) Prob: py = 0.5, pv = 0.03, px = 0.1, pw = 0.2, pz = 0.15, pu = 0.02 σ: x v y v

  38. L: y v x w z u MFBE Example (ε = 0.2) Prob: py = 0.5, pv = 0.03, px = 0.1, pw = 0.2, pz = 0.15, pu = 0.02 σ: x v y v

  39. Difficulties in Proofs • It's not easy to understand the behavior of OPT. • OPT can be computed by dynamic programming • Trivial way: O((n!)^2 · m) • Improvement: O(2^n · n! · m) [Reingold, Westbrook, 1996]
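
To give a flavor of the dynamic program, here is a small sketch that computes the offline optimum over the restricted class of strategies that use only free exchanges (moving the accessed item forward after serving it). This is a simplification made here for illustration: the full dynamic program of Reingold and Westbrook also handles paid exchanges, so the value below only upper-bounds the true OPT, but it shows the permutation state space that makes the computation expensive.

```python
def opt_free_exchanges(items, requests):
    """Offline optimum restricted to free exchanges: after each access the
    requested item may be moved forward at no cost.  States are list orders,
    so the table can grow to n! entries per request."""
    dp = {tuple(items): 0}                     # best cost-to-date per list order
    for r in requests:
        new_dp = {}
        for perm, cost in dp.items():
            i = perm.index(r)
            c = cost + i + 1                   # access cost in this order
            rest = perm[:i] + perm[i + 1:]
            for j in range(i + 1):             # move r forward to position j (or stay put)
                nxt = rest[:j] + (r,) + rest[j:]
                if c < new_dp.get(nxt, float('inf')):
                    new_dp[nxt] = c
        dp = new_dp
    return min(dp.values())

print(opt_free_exchanges(['y', 'w', 'z'], ['z', 'z', 'y']))   # small sanity check
```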

  40. Our Experiment • Motivation: To see the behavior of algorithms in our hybrid model. • Measurement: We measure the performance of an online algorithm by the average competitive ratio.

  41. Our Experiment • Variables in our experiment • Type of List Update Algorithm (MTF, STAT, MFBE) • Type of Probability Distribution • Type of Adversary • Epsilon: ε

  42. Our Experiment • We generate a request sequence of length 100, with a chosen probability distribution. • Then, for each request, with probability ε, let the adversary change it.

  43. Our Experiment • Record the cost incurred by the online algorithm = CostA(σ) • Use Dynamic Programming to find the optimum cost of that request sequence = CostOPT(σ) • Competitive Ratio = CostA(σ)/CostOPT(σ) • Repeat this 100 times to find the average competitive ratio.
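
Putting the pieces together, a sketch of the experimental loop. It reuses stat_order, hybrid_request, and the adversary sketches from earlier in this transcript, and any function with the interface opt_cost(initial_list, sequence) -> cost (for example the exact dynamic program, or the simplified opt_free_exchanges sketch) can be plugged in for OPT:

```python
def mtf_rule(L, i, prob):
    """Update rule for MTF: move the item just accessed at index i to the front."""
    L.insert(0, L.pop(i))

def run_trial(prob, eps, adversary, update_rule, length=100):
    """One hybrid-model trial.  Requests are generated online so that an adaptive
    adversary can see the current list.  Returns the sequence and the online cost."""
    L = stat_order(list(prob), prob)
    sigma, cost = [], 0
    for _ in range(length):
        x = hybrid_request(prob, eps, adversary, L)
        sigma.append(x)
        i = L.index(x)
        cost += i + 1
        update_rule(L, i, prob)
    return sigma, cost

def average_competitive_ratio(prob, eps, adversary, update_rule, opt_cost, trials=100):
    """Average of CostA(sigma) / CostOPT(sigma) over the trials."""
    total = 0.0
    for _ in range(trials):
        sigma, cost_a = run_trial(prob, eps, adversary, update_rule)
        total += cost_a / opt_cost(stat_order(list(prob), prob), sigma)
    return total / trials
```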

  44. Distribution • Geometric Distribution: P[i] ∝ 1/2^i • Uniform Distribution: P[i] = 1/n, where n = length of the list • Zipfian Distribution (Zipf(2)): P[i] ∝ 1/i^2
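
A sketch of the three distributions as weight vectors over positions i = 1..n in the STAT order (normalization can be left to random.choices):

```python
def weights(kind, n):
    """Unnormalized request probabilities for items i = 1..n."""
    if kind == 'geometric':
        return [1 / 2 ** i for i in range(1, n + 1)]    # P[i] proportional to 1/2^i
    if kind == 'uniform':
        return [1 / n] * n                              # P[i] = 1/n
    if kind == 'zipf2':
        return [1 / i ** 2 for i in range(1, n + 1)]    # P[i] proportional to 1/i^2
    raise ValueError(kind)
```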

  45. L: y w z x v u Cruel Adversary • This is an adaptive adversary • Looks at the current list and requests the last item in the list.
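
In the adversary interface assumed earlier (a function from the current list to a request), the cruel adversary is one line:

```python
def cruel_adversary(current_list):
    """Adaptive adversary: always request the last, most expensive, item."""
    return current_list[-1]
```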

  46. Cruel Adversary, Geometric Distribution, n=6

  47. Cruel Adversary, Geometric Distribution, n=6

  48. Reversed Geometric Adversary • This adversary chooses elements randomly, according to the geometric distribution on the reversed STAT order. • This adversary requests low-probability items more frequently. • Oblivious Adversary = doesn't look at the current list
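
A sketch of this adversary (illustrative; being oblivious, it depends only on the fixed STAT order, so when plugging it into the hybrid sketch it would be bound to that order, e.g. lambda current: reversed_geometric_adversary(stat)):

```python
import random

def reversed_geometric_adversary(stat_list):
    """Oblivious adversary: geometric weights over the reversed STAT order, so the
    items that STAT places last (lowest probability) are requested most often."""
    rev = list(reversed(stat_list))
    w = [1 / 2 ** i for i in range(1, len(rev) + 1)]
    return random.choices(rev, w)[0]
```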

  49. Reversed Geometric Adversary, Geometric Distribution, n=6

  50. Uniform Adversary • This adversary requests elements from the list uniformly at random. • Oblivious Adversary • With this adversary, the sorted order of elements in the combined probability distribution doesn’t change.
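
And the uniform adversary, in the same illustrative interface:

```python
import random

def uniform_adversary(current_list):
    """Oblivious adversary: request an item uniformly at random, ignoring the order."""
    return random.choice(current_list)
```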
