PLUM Implementation in Open Computation Exchange and Auctioning Network (OCEAN)

This presentation discusses the implementation of PLUM, a Peer List Update Manager, in OCEAN, a network for buying and selling dynamic distributed computing resources. It covers the architecture, functionality, and future work of PLUM in the OCEAN system.


Presentation Transcript


  1. Peer List Update Manager (PLUM) Implementation in Open Computation Exchange and Auctioning Network (OCEAN) Heeyong Park hpark@cise.ufl.edu http://www.cise.ufl.edu/~hpark 11/05/2001 Computer and Information Science and Engineering University of Florida

  2. Contents • Introduction • OCEAN • PLUM • Experimental Results • Future Works

  3. Introduction • Computation Market by Auctioning • Started as early as 1968 at Harvard to utilize a PDP-1 • Currently SETI@Home, EconoGrid, Distributed.net • Java & XML • Java for a Machine- and Operating-System-Independent Programming Environment • XML for a Universal Data Format

  4. OCEAN • What is OCEAN? • Similar Projects • OCEAN System Architecture • OCEAN Node Architecture • Scenario

  5. What is OCEAN ? Open Computation Exchange and Auctioning Network (OCEAN) • A functional infrastructure supporting the automated, commercial buying and selling of dynamic distributed computing resources over the internet • A Distributed Computing Market

  6. Similar Projects • Non-profit projects • SETI@Home, Distributed.net, Grid • Venture Companies • United Devices • Entropia Distributed Computing • Parabon Computation • and Porivo

  7. OCEAN System Architecture [Diagram: server nodes, auction nodes, and an application launch point on the public Internet; a private intranet with its own server node, auction node, and application launch point connected through a firewall/OCEAN proxy node]

  8. OCEAN Node Architecture [Diagram: the Node Configuration/Operation API and Job Maker API sit on top of the Trader, Auction, Negotiation, Task Spawning and Migration, PLUM, Local Accounting, and Security components; these rest on the Persistent Object Manager, Communication, and Naming components, connect to the Central Accounting Server, and run on the Java VM and API]

  9. OCEAN Auctioning Scenario [Diagram: a buyer needing computation resources registers with the Central Accounting Server (CAS) and gets an account; buyer and seller make trade proposals, collect proposals and bids, and negotiate until both sides make a deal; the transaction information goes to the CAS, which confirms the deal and saves the contract information; the job migrates from the customer application to the application launch point, where it is executed; the results are returned; the CAS transfers the payment from the buyer and deposits it into the seller's account; the transaction is closed]

  10. We are Here • Introduction • OCEAN • PLUM • Experimental Results • Future Works

  11. PLUM • What is PLUM? • Approaches • Implementation Philosophy • PLUM Architecture • PLUM Functionalities • XML Schemas for Queries

  12. What is PLUM? PLUM (Peer List Update Manager) • A component of OCEAN that provides a reliable list of peers to the auction service component • Updates the status information of those peers

  13. Approaches • Centralized Server (MP3.com) • Pure Peer-to-Peer Server (Gnutella) • Distributed Server (Napster) • OCEAN Approach

  14. Centralized Server (MP3.com) • Advantages • Easy to Control • Easy to Maintain Data Integrity • Disadvantages • Bottleneck on Server Side • High Cost of Server Maintenance

  15. Pure Peer-to-Peer Server (Gnutella) • Advantages • No Server Maintenance Cost • Easy to Deploy • Disadvantages • Inefficient Communication • Hard to Control

  16. Distributed Server (Napster) • Advantages • Distribution of Traffic • Retains Control • Disadvantages • Harder to Maintain Data Integrity • Still Incurs Server Maintenance Cost

  17. PLUM Approach • Pure Peer-to-Peer Approach & Central Accounting System (CAS) • Plus Peer Selection Algorithm

  18. Implementation Philosophy • Least User Involvement Requirement • Reliable Peer-List Provider • Easy to Integrate with Other Components • Easy to Configure the Components

  19. Least User Involvement • (Fully Automated Service) • A lot of Configuration Parameters

  20. Reliable Peer-List Provider • Persistent Object Storage • Currently just uses the local file system • Effective Peer Selection Algorithm • Monitoring Dynamic Peer Status • Powerful Protocol for Peer-List Expansion • Quantitative Formula for Peer Reliability

  21. Easy to Integrate with Other Components • Establishes XML-based Data Communication • Provides API Documentation • Object-Oriented Project Approach

  22. PLUM Architecture • PLUM in OCEAN • Interaction with Other Components • PLUM Layout Architecture

  23. PLUM in OCEAN [Diagram: the node architecture from slide 8, locating the PLUM Component among the node's other parts: the Node Configuration/Operation API, Job Maker API, Trader, Auction, Negotiation, Task Spawning and Migration, Local Accounting, Security, Persistent Object Manager, Communication, and Naming components, the Central Accounting Server, and the Java VM and API]

  24. Interaction with Other Components [Diagram: the Core PLUM interacting with the Node Operator, Auction, and Negotiation components; a Rate System and a DB Connector linking it to the Main Database Server; and the Persistent Object Manager, Communication, and Security components]

  25. PLUM Component Architecture [Diagram: inside an OCEAN node, the Core PLUM configures and manipulates the peer list (stored through the POS) and creates queries; a Query Manager keeps an XML query queue for exchanging XML documents with remote PLUMs on remote OCEAN nodes and an SQL query queue for sending SQL queries to, and receiving results from, the central OCEAN database; other local components connect to the PLUM component directly]

  26. PLUM Functionalities • Discovering Resources • Monitoring Resources • Peer Selection Algorithm

  27. Discovering Resources • Goal • To find more promising peers • How to find the peers • Issuing expand queries to the local peers • Advertising itself to the local peers

  28. Monitoring Resources • Goal • To update the peer status information • Checkpoints • Availability, via a ping-like protocol • Bandwidth, via a test query • Load (not implemented yet) • Transaction history, via the CAS
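
As a rough illustration of the availability and latency checkpoints, the sketch below counts how many ping-like probes a peer answers and times a TCP connection to it. This is only a minimal sketch against the standard java.net API; the PeerProbe class and its method names are hypothetical and not taken from the OCEAN code base.

    import java.net.InetAddress;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Hypothetical helper for the "Availability" and "Latency" checkpoints.
    public class PeerProbe {

        /** Availability: fraction of probes the peer answered. */
        public static double availability(String host, int attempts, int timeoutMs)
                throws Exception {
            InetAddress addr = InetAddress.getByName(host);
            int ok = 0;
            for (int i = 0; i < attempts; i++) {
                if (addr.isReachable(timeoutMs)) {   // ping-like reachability probe
                    ok++;
                }
            }
            return (double) ok / attempts;
        }

        /** Latency: milliseconds needed to open a TCP connection to the peer. */
        public static long latencyMs(String host, int port, int timeoutMs)
                throws Exception {
            long start = System.currentTimeMillis();
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), timeoutMs);
            }
            return System.currentTimeMillis() - start;
        }
    }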

  29. Peer Selection Algorithm • Goal: • to select the peers based on the reliability formula • Reliability Formula • Static Information • #ST: Total Number of the Successful Transactions • #SST: Total Number of the Suspended Transactions • #FT: Total Number of the Failed Transactions • Weight: Transaction Cost ($) • Dynamic Information • Availability: The percentage of successful pings • Latency: Time delay for exchanging a data packet
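
The slide names the inputs of the reliability formula but not the formula itself, so the following Java sketch combines them in one plausible way only; the weighting, the log scaling, and the ReliabilityScore name are illustrative assumptions rather than the formula PLUM actually uses.

    // Illustrative only: the real PLUM reliability formula is not given on the slide.
    public class ReliabilityScore {

        /**
         * @param st           #ST,  successful transactions
         * @param sst          #SST, suspended transactions
         * @param ft           #FT,  failed transactions
         * @param weightDollar transaction cost (Weight) in dollars
         * @param availability fraction of successful pings, 0.0 to 1.0
         * @param latencyMs    delay for exchanging a data packet, in ms
         */
        public static double score(int st, int sst, int ft, double weightDollar,
                                   double availability, double latencyMs) {
            int total = st + sst + ft;
            // Static part: transaction history scaled by how much money was at stake.
            double history = (total == 0) ? 0.5 : (st - ft) / (double) total;
            double staticPart = history * Math.log1p(weightDollar);
            // Dynamic part: favor peers that are reachable and answer quickly.
            double dynamicPart = availability / (1.0 + latencyMs / 1000.0);
            return staticPart + dynamicPart;   // higher score = more reliable peer
        }
    }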

  30. XML Schemas for Queries • Schema for the Peer List • Schema for the Requests • Schema for the Responses

  31. Schema for the Peer List
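
The peer-list schema itself is not reproduced in the transcript. As an informal stand-in, one entry of the peer list presumably carries at least the fields used by the monitoring checkpoints and the reliability formula; the PeerEntry name and its fields below are assumptions, not the actual schema.

    // Informal stand-in for one peer-list entry; the name and fields are assumed.
    public class PeerEntry {
        String host;            // network address of the peer
        int    port;            // port its PLUM component listens on
        int    successfulTx;    // #ST  - successful transactions
        int    suspendedTx;     // #SST - suspended transactions
        int    failedTx;        // #FT  - failed transactions
        double weight;          // transaction cost in dollars
        double availability;    // fraction of successful pings
        double latencyMs;       // measured packet-exchange delay
    }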

  32. Schema for the Requests • Expand • Query for Trust points • Advertise • Test Bandwidth
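
Since the peer-transfer slide later mentions JDOM, an expand request document could be assembled roughly as sketched below; the element and attribute names (expand-request, max-peers, requester) are assumed for illustration and may not match the real OCEAN request schema.

    import org.jdom.Document;
    import org.jdom.Element;
    import org.jdom.output.XMLOutputter;

    // Sketch only: element names are assumptions, not the actual PLUM schema.
    public class ExpandRequestBuilder {

        public static String build(String host, int port, int maxPeers) {
            Element root = new Element("expand-request");
            root.setAttribute("max-peers", Integer.toString(maxPeers));

            Element requester = new Element("requester");
            requester.addContent(new Element("host").setText(host));
            requester.addContent(new Element("port").setText(Integer.toString(port)));
            root.addContent(requester);

            // Serialize the tree into the XML string sent to the remote PLUM.
            return new XMLOutputter().outputString(new Document(root));
        }
    }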

  33. Schema for the Responses • Expand • Test Bandwidth • Peer-Information Query Responses

  34. Peer Transfer Simulation [Diagram: Node A checks Node B's availability with a ping packet, and Node B sends a pong back; Node A's request thread wakes up and creates an XML expand-request document, which its Query Manager (QM) sends to Node B; Node B's QM saves the request document in its waiting queue, where the response handler periodically picks it up; the JDOM XML parser analyzes the request, invokes the appropriate method, and if necessary creates an XML peer-list response document, which Node B's QM sends back; Node A's response handler periodically checks its own waiting queue, the JDOM parser analyzes the response into a vector of peer information, the QM calls the proper method in the PLUM core, and PLUM saves the vector into the local peer list]
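
The waiting queue that the response handler periodically checks can be pictured with the short sketch below; the class names and the java.util.concurrent queue are stand-in assumptions, since they are not shown in the presentation.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Illustrative sketch of the Query Manager's waiting-queue pattern.
    public class ResponseHandler implements Runnable {

        // XML documents received from remote PLUMs, waiting to be processed.
        private final BlockingQueue<String> waitingQueue = new LinkedBlockingQueue<String>();

        /** Called by the communication layer when a response document arrives. */
        public void enqueue(String xmlDoc) {
            waitingQueue.offer(xmlDoc);
        }

        /** Drains the queue and hands each document to the PLUM core. */
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    String xmlDoc = waitingQueue.take();   // blocks until a doc arrives
                    handle(xmlDoc);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }

        private void handle(String xmlDoc) {
            // Parse with JDOM and call the proper PLUM core method (omitted here).
        }
    }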

  35. Experimental Results • Thread Number in Ping • Goal: To show how the thread number affects the general performance of the ping utility • How to emulate? • A single port on the ping server and multiple ports on the clients [Diagram: ping server serving multiple client threads]
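
To make the setup concrete, a client that spreads the pings over a configurable number of threads might look like the sketch below; the fixed thread pool is a modern stand-in for whatever the original utility used, and sendPing is a hypothetical placeholder.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Sketch of the experiment: vary threadCount and time how long the pings take.
    public class PingExperiment {

        public static long runMillis(final String server, int targets, int threadCount)
                throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(threadCount);
            long start = System.currentTimeMillis();
            for (int i = 0; i < targets; i++) {
                pool.submit(new Runnable() {
                    public void run() {
                        sendPing(server);   // each ping is handled by a pool thread
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.MINUTES);
            return System.currentTimeMillis() - start;
        }

        private static void sendPing(String server) {
            // Hypothetical placeholder: exchange a ping/pong packet with the server.
        }
    }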

  36. Thread Number in Ping [Chart of results; target nodes: 1000; machine: Sun SPARC 5; memory: 256 MB]

  37. Max-Peer Transfer Number in Expand Query • Goal • To show how the maximum number of peers PLUM can transfer in response to an expand request affects the total performance of expanding the peer list • How is it simulated? • By creating a fake peer list and randomly choosing peers to transfer

  38. Max-Peer Transfer Number in Expand Query [Chart of results; original peer list size: 50; number of nodes: 3]

  39. Future Works • Naming and Directory Service • SOAP (Simple Object Access Protocol) • Lightweight Persistent Object Storage Service
