

  1. <1> A Peer-to-Peer Approach to Resource Discovery in Grid Environments (in HPDC’02, by U of Chicago) Gisik Kwon, Nov. 18, 2002

  2. Motivation
  • Two general resource discovery systems, Grid and P2P, will eventually share the same goal:
    • a large-scale, decentralized, and self-configuring system with complex functionality
  • So a general guideline is needed for designing such a resource discovery system
  • Proposes 4 axes (components) guiding the design of any resource discovery architecture
  • Presents an emulated environment and a preliminary performance evaluation with simulations

  3. How
  • 4 axes (components) to be considered:
  • Membership protocol
    • How new nodes join and learn about the network
  • Overlay construction function
    • Selects the set of active collaborators from the local membership list
  • Preprocessing
    • Off-line preparation for better search performance: pre-fetching rather than caching
    • e.g., dissemination of resource descriptions
  • Request processing
    • Local processing: look up the requested resource in local information, process aggregated resources, ...
    • Remote processing: request propagation rule
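A minimal sketch of the four axes as methods on a single peer node; the names (PeerNode, join, build_overlay, preprocess, handle_request) and the random-walk default in request processing are illustrative assumptions, not code from the paper.

```python
import random


class PeerNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.membership = set()      # every peer this node has heard of
        self.neighbors = set()       # active collaborators chosen from membership
        self.local_resources = {}    # resource name -> description (owned locally)
        self.remote_index = {}       # resource name -> owner id (learned via preprocessing)

    # Axis 1 -- membership protocol: how new nodes join and learn the network.
    def join(self, contact):
        self.membership.add(contact.node_id)
        self.membership |= contact.membership
        contact.membership.add(self.node_id)       # the contacted node is enriched too

    # Axis 2 -- overlay construction: pick active collaborators from the membership list.
    def build_overlay(self, max_neighbors=None):
        candidates = list(self.membership)
        if max_neighbors is not None and len(candidates) > max_neighbors:
            candidates = random.sample(candidates, max_neighbors)
        self.neighbors = set(candidates)

    # Axis 3 -- preprocessing: off-line dissemination of resource descriptions.
    def preprocess(self, peers):
        for nid in self.neighbors:
            for name in self.local_resources:
                peers[nid].remote_index[name] = self.node_id

    # Axis 4 -- request processing: answer locally, else apply a propagation rule.
    def handle_request(self, resource, peers, ttl=5):
        if resource in self.local_resources:
            return self.node_id                    # local processing succeeded
        if resource in self.remote_index:
            return self.remote_index[resource]     # known through preprocessing
        if ttl == 0 or not self.neighbors:
            return None
        next_hop = random.choice(sorted(self.neighbors))   # simplest rule: random walk
        return peers[next_hop].handle_request(resource, peers, ttl - 1)
```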

  4. Evaluation
  • Modeling the Grid environment (4 parameters):
  • Resource info. distribution and density
    • Some organizations share a large number of resources, others just a few
    • Some resources are common, others unique
  • Resource info. dynamism
    • Highly variable (e.g., CPU load) or static (e.g., CPU type)
  • Request distribution
    • Pattern of users' requests, e.g., Zipf distribution, uniform, ...
  • Peer participation
    • Varies over time far more significantly in P2P than in Grid
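A hedged sketch of how these four environment parameters could be turned into a workload generator for a simulation; the distributions and defaults below (Zipf exponent 1.5, 100 organizations, 90% participation, CPU attribute names) are illustrative assumptions, not the paper's settings.

```python
import random

import numpy as np


def make_environment(num_orgs=100, num_resource_types=50, zipf_a=1.5, seed=0):
    rng = np.random.default_rng(seed)
    random.seed(seed)

    # 1. Resource distribution/density: some organizations share many resources,
    #    others only a few; some resource types are common, others unique.
    resources_per_org = rng.integers(1, 100, size=num_orgs)
    org_resources = [
        list(rng.choice(num_resource_types, size=n))   # common types repeat across orgs
        for n in resources_per_org
    ]

    # 2. Resource dynamism: static attributes (CPU type) vs. variable ones (CPU load).
    def sample_attributes():
        return {"cpu_type": random.choice(["x86", "sparc"]),   # static
                "cpu_load": rng.random()}                      # highly variable

    # 3. Request distribution: e.g., Zipf-distributed popularity of resource types.
    def next_request():
        return (rng.zipf(zipf_a) - 1) % num_resource_types

    # 4. Peer participation: fraction of peers online at any time
    #    (varies much more in P2P than in Grid).
    def online_peers(fraction=0.9):
        k = max(1, int(fraction * num_orgs))
        return set(random.sample(range(num_orgs), k))

    return org_resources, sample_attributes, next_request, online_peers
```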

  5. Evaluation
  • Preliminary experimental setup
  • Optimistic Grid model
    • Static resource attributes, constant peer participation, no failures
  • Passive membership protocol
    • A new node learns about the network out-of-band
    • When a new node makes contact, the contacted node's membership list is enriched
  • Overlay function accepts an unlimited number of neighbors
  • No preprocessing
  • Request processing
    • Perfect matching
    • 4 request propagation strategies
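A small sketch of this optimistic setup with the passive membership step spelled out; the configuration dictionary and the Member/passive_join names are illustrative encodings of the assumptions listed above, not the simulator's actual interface.

```python
from dataclasses import dataclass, field

# Optimistic Grid model used for the preliminary experiments (illustrative encoding).
OPTIMISTIC_GRID_MODEL = {
    "resource_attributes": "static",    # no dynamism
    "peer_participation": "constant",   # no joins/leaves mid-run, no failures
    "preprocessing": None,              # no dissemination / pre-fetching
    "matching": "perfect",              # a request matches the resource exactly
    "max_neighbors": None,              # overlay accepts unlimited neighbors
}


@dataclass
class Member:
    node_id: int
    membership: set = field(default_factory=set)


def passive_join(new_node: Member, contact: Member) -> None:
    # Passive membership protocol: the new node learns about the network
    # out-of-band through one known contact, and the contacted node's
    # membership list is enriched as a side effect.
    new_node.membership |= contact.membership | {contact.node_id}
    contact.membership.add(new_node.node_id)
```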

  6. Evaluation
  • The 4 request propagation strategies:
  • Random walk
    • Choose the next neighbor at random
  • Learning-based
    • Forward to a neighbor that answered similar requests previously
  • Best-neighbor
    • Forward to the neighbor that answered the largest number of requests
  • Learning-based + best-neighbor
    • Try learning-based first; otherwise fall back to best-neighbor
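A minimal sketch of the four propagation strategies as a next-hop selection function; the per-neighbor bookkeeping (answered_similar, answers_count) is assumed, not taken from the paper.

```python
import random


def choose_next_hop(strategy, neighbors, request, answered_similar, answers_count):
    """neighbors: list of neighbor ids.
    answered_similar: dict request -> set of neighbors that answered a similar request before.
    answers_count: dict neighbor -> total number of requests it has answered for us."""
    if strategy == "random-walk":
        return random.choice(neighbors)

    if strategy == "learning-based":
        known = [n for n in neighbors if n in answered_similar.get(request, set())]
        return random.choice(known) if known else random.choice(neighbors)

    if strategy == "best-neighbor":
        return max(neighbors, key=lambda n: answers_count.get(n, 0))

    if strategy == "learning-based+best-neighbor":
        known = [n for n in neighbors if n in answered_similar.get(request, set())]
        if known:
            return random.choice(known)
        return max(neighbors, key=lambda n: answers_count.get(n, 0))

    raise ValueError(f"unknown strategy: {strategy}")
```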

  7. Evaluation • User requests : Resource distribution

  8. Quantitative estimation of resource location costs

  9. Quantitative estimation of resource location costs

  10. Grid vs P2P

  11. <2> Adaptive Replication in Peer-to-Peer Systems (in ICS’02, by UMD) Gisik Kwon, Nov. 18, 2002

  12. Motivation
  • Recent P2P systems handle uniform query demand well
  • But demand can be heavily skewed
  • So a lightweight, adaptive, and system-neutral replication mechanism is needed to handle skewed demand
  • Proposes LAR
  • Evaluated on the Chord and TerraDir systems through simulation

  13. How
  • LAR uses two kinds of soft state: caches and replicas
    • Why "soft": created and destroyed by local decision, without any coordination with the item's home
  • Caches
    • Each entry consists of <data item label, item's home, home's address (IP), a set of known replica locations>
    • Use an LRU replacement strategy
  • Replicas
    • Contain the item data itself
    • Plus more state: home address (IP), neighbors of the home, known replicas
    • Must be advertised
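A hedged sketch of LAR's two kinds of soft state; the field names and the OrderedDict-based LRU cache are illustrative choices, since the slide only fixes what each structure contains.

```python
from collections import OrderedDict
from dataclasses import dataclass, field


@dataclass
class CacheEntry:                  # pointer only, no item data
    label: str                     # data item label
    home: str                      # item's home server id
    home_addr: str                 # home's IP address
    replica_locations: set = field(default_factory=set)


@dataclass
class Replica:                     # holds the item data itself, plus extra state
    label: str
    data: bytes
    home_addr: str
    home_neighbors: set = field(default_factory=set)
    known_replicas: set = field(default_factory=set)   # must be advertised


class LRUCache:
    """Cache of CacheEntry objects with LRU replacement."""
    def __init__(self, capacity=128):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, label):
        if label in self.entries:
            self.entries.move_to_end(label)            # mark as recently used
            return self.entries[label]
        return None

    def put(self, entry):
        self.entries[entry.label] = entry
        self.entries.move_to_end(entry.label)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)           # evict least recently used
```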

  14. How
  • Load balancing by creating replicas
  • Each time a packet is routed, the current load (l_i) of server s_i is checked
    • Load is defined as the number of messages sent to a server during a time unit
  • If l_i > l_i^max (overloaded):
    • s_i creates a replica at s_j (if l_i > l_j)
  • If l_i^low <= l_i <= l_i^hi (highly loaded):
    • s_i creates a replica at s_j (only if l_j <= l_j^low)
  • After a replica is created, it is disseminated
    • 2/32 policy: 2 per message, 32 per server
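A hedged sketch of the replica-creation rule above; the per-server thresholds on the slide are simplified here to the global l_max / l_hi / l_low values used in the evaluation, and the way a target server s_j is picked is an assumption.

```python
def maybe_create_replica(si, load, l_max, l_hi, l_low, create_replica_at):
    """Called each time a packet is routed through server si.
    load: dict server_id -> messages received during the current time unit.
    create_replica_at(si, sj): push a replica of si's hot item to sj."""
    li = load[si]
    candidates = [sj for sj in load if sj != si]
    if li > l_max:                                  # overloaded
        for sj in candidates:
            if li > load[sj]:                       # any less-loaded server will do
                create_replica_at(si, sj)
                return
    elif l_low <= li <= l_hi:                       # highly loaded
        for sj in candidates:
            if load[sj] <= l_low:                   # only a lightly loaded server
                create_replica_at(si, sj)
                return
```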

  15. Evaluation
  • Based on the Chord simulator
    • Single network hop = 25 ms
  • Query distribution
    • Follows a Poisson distribution
    • Average input rate = 500 queries/sec
    • Default skewed input: 90-1 (90% of queries skewed to one item, the other 10% distributed randomly)
  • 1k servers, 32k data items
  • l_max = 10/sec, l_hi = 0.75 * l_max, l_low = 0.3 * l_max
  • Queue length = 32
  • Default load window size = 2 sec
  • Dissemination policy = 2/32
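A small sketch of the query workload described here: Poisson arrivals at 500 queries/sec with the default 90-1 skew; the generator itself (function name, seed handling, run length) is illustrative.

```python
import random


def generate_queries(duration_sec=10.0, rate=500.0, num_items=32_000,
                     hot_fraction=0.9, hot_item=0, seed=0):
    random.seed(seed)
    t, queries = 0.0, []
    while True:
        t += random.expovariate(rate)            # Poisson process: exponential gaps
        if t > duration_sec:
            break
        if random.random() < hot_fraction:
            item = hot_item                      # 90% of queries target one item
        else:
            item = random.randrange(num_items)   # remaining 10% uniform at random
        queries.append((t, item))
    return queries
```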

  16. Static vs. adaptive replication

  17. Static vs. adaptive replication

  18. Load balancing • l_hi = 0.75

  19. Parameter sensitivity

  20. Scalability

  21. Chord Lookup (example; 7-bit identifier space, ring nodes N32, N40, N52, N60, N70, N79, N80, N85, N102, N113)
  • N70's finger table: 71..71 → N79, 72..73 → N79, 74..77 → N79, 78..85 → N80, 86..101 → N102, 102..5 → N102, 6..69 → N32
  • N32's finger table: 33..33 → N40, 34..35 → N40, 36..39 → N40, 40..47 → N40, 48..63 → N52, 64..95 → N70, 96..31 → N102
  • N80's finger table: 81..81 → N85, 82..83 → N85, 84..87 → N85, 88..95 → N102, 96..111 → N102, 112..15 → N113, 16..79 → N32
  • Node 32, lookup(82): 32 → 70 → 80 → 85
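The sketch below replays the lookup drawn on this slide by encoding the three finger tables exactly as shown; the interval-based forwarding rule is a simplification of Chord's closest-preceding-finger routing, and the stop condition (a node whose table is not drawn) is specific to this example.

```python
M = 128   # 7-bit identifier space

# Finger tables as drawn on the slide: list of (interval_start, interval_end, node).
FINGERS = {
    70: [(71, 71, 79), (72, 73, 79), (74, 77, 79), (78, 85, 80),
         (86, 101, 102), (102, 5, 102), (6, 69, 32)],
    32: [(33, 33, 40), (34, 35, 40), (36, 39, 40), (40, 47, 40),
         (48, 63, 52), (64, 95, 70), (96, 31, 102)],
    80: [(81, 81, 85), (82, 83, 85), (84, 87, 85), (88, 95, 102),
         (96, 111, 102), (112, 15, 113), (16, 79, 32)],
}


def in_interval(key, lo, hi):
    """True if key lies in the circular interval [lo, hi] on a ring of size M."""
    key, lo, hi = key % M, lo % M, hi % M
    return lo <= key <= hi if lo <= hi else (key >= lo or key <= hi)


def lookup(start, key):
    """Follow the drawn finger tables; stop at a node whose table is not shown
    (here that is N85, the node responsible for the key)."""
    path, node = [start], start
    while node in FINGERS:
        nxt = next(n for lo, hi, n in FINGERS[node] if in_interval(key, lo, hi))
        path.append(nxt)
        node = nxt
    return path


print(lookup(32, 82))   # -> [32, 70, 80, 85], the hop sequence traced on the slide
```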

  22. TerraDir
