
Locality Optimizations in Tapestry



Presentation Transcript


  1. Locality Optimizations in Tapestry Jeremy Stribling Joint work with: Kris Hildrum Ben Y. Zhao Anthony D. Joseph John D. Kubiatowicz Sahara/OceanStore Winter Retreat January 14, 2003

  2. Object Location in Tapestry (diagram: object location and routing among nodes at Berkeley, MIT, and UW)

  3. Is This Always Optimal? (diagram: the same Berkeley, MIT, and UW nodes, showing a suboptimal route)

  4. Why Is This a Problem?
  • Example application: OceanStore web caching
    • If a nearby replica exists, we must find it quickly
  • Measure of locality: Relative Delay Penalty (RDP)
    • The ratio of the distance to an object through Tapestry to the minimum possible distance (i.e., over IP)
  • Problem: finding nearby objects incurs a high RDP
    • Two extra hops have a huge relative impact when the object is close
    • An issue for all similar systems, not just Tapestry
  • Solution: trade storage overhead for low RDP
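The RDP metric on this slide can be sketched in a few lines; the distances below are hypothetical example values, not measurements from the talk.

```python
# Sketch of the Relative Delay Penalty (RDP) metric: the ratio of the
# overlay (Tapestry) route distance to the direct IP distance.

def rdp(overlay_distance: float, ip_distance: float) -> float:
    """Relative Delay Penalty for locating one object."""
    return overlay_distance / ip_distance

# A client 5 ms from a replica over IP that needs 20 ms of overlay hops
# to find it sees an RDP of 4; the same 15 ms of extra hops added to a
# 100 ms IP distance gives an RDP of only 1.15. This is why nearby
# objects are the hard case.
print(rdp(20.0, 5.0))    # 4.0
print(rdp(115.0, 100.0)) # 1.15
```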

  5. Optimization 1: Publish to Backups
  • Redundancy: routing table entries store up to c nodes
    • The closest node is the primary neighbor; the other c–1 nodes are backups
  • A simple optimization: publish to k backups
    • Limit to the first n hops of the publish path
  • Result
    • Nodes near the object are more likely to encounter pointers while routing to the root
    • Storage overhead: k*n additional pointers per object
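The publish-to-backups idea above can be sketched as follows. The data structures are illustrative only (a publish path as a list of routing-table entries, primary first), not Tapestry's actual implementation.

```python
# Hedged sketch of Optimization 1: on each of the first n hops of the
# publish path, place a location pointer on up to k backup neighbors in
# addition to the primary, so nearby nodes are more likely to hit a
# pointer while routing toward the root.

def publish(object_id, publish_path, k, n):
    """publish_path: one routing-table entry per hop, each a list of
    node names with the primary neighbor first and backups after it.
    Returns the (node, object_id) pointers placed in the network."""
    pointers = []
    for hop, entry in enumerate(publish_path):
        primary, backups = entry[0], entry[1:]
        pointers.append((primary, object_id))  # normal publish
        if hop < n:                            # only the first n hops
            for b in backups[:k]:              # up to k of the c-1 backups
                pointers.append((b, object_id))
    return pointers

# Example: 3 hops, c = 3 entries per hop, k = 2 backups, n = 1 hop.
path = [["A1", "A2", "A3"], ["B1", "B2", "B3"], ["C1", "C2", "C3"]]
ptrs = publish("obj42", path, k=2, n=1)
# At most k*n = 2 extra pointers beyond the 3 primary ones.
print(len(ptrs))  # 5
```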

  6. Optimization 1: Publish to Backups
  • Experiments run in simulation on a PlanetLab-based topology

  7. Optimization 2: Local Misroute (diagram: misrouting within the local area among the Berkeley, MIT, and UW nodes)

  8. Optimization 2: Local Misroute
  • Solution: before taking a "long" hop, misroute to a closer node
    • Look a little harder in the local area before leaving it
    • When publishing, place a pointer on the local surrogate
  • Issue: what determines a "long" hop?
    • One metric: if the next hop is more than m times longer than the last hop, consider it "long"
    • Call m the threshold factor
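The threshold test described above is simple enough to sketch directly; the distances are hypothetical and m is the threshold factor from the slide.

```python
# Sketch of the "long hop" test for Optimization 2: compare the proposed
# next hop against the previous hop, scaled by the threshold factor m.
# When the test fires, the router would first misroute to a closer local
# node (e.g., the local surrogate holding a pointer) before leaving the
# local area.

def is_long_hop(next_hop_dist: float, last_hop_dist: float, m: float) -> bool:
    """A hop is "long" if it is more than m times the previous hop."""
    return next_hop_dist > m * last_hop_dist

# With m = 3, a 5 ms hop followed by a proposed 40 ms hop triggers a
# local misroute; a 12 ms follow-up hop does not.
print(is_long_hop(40.0, 5.0, m=3.0))  # True
print(is_long_hop(12.0, 5.0, m=3.0))  # False
```

A larger m makes misrouting rarer (less local searching, lower overhead); a smaller m searches the local area more aggressively before taking a wide-area hop.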

  9. Optimization 2: Local Misroute
  • Experiments run in simulation on a transit-stub topology
  • "Using sim knowledge" indicates direct use of the topology file

  10. Future Work
  • Analyze more locality optimizations and different parameter configurations
  • Take measurements on PlanetLab
  • Test optimizations with real workloads (e.g., web caching)
  • Complete a cost analysis of storage overhead vs. RDP benefit across all optimizations
