
OceanStore Status and Directions ROC/OceanStore Retreat 6/10/02


Presentation Transcript


  1. OceanStore Status and Directions (ROC/OceanStore Retreat, 6/10/02). John Kubiatowicz, University of California at Berkeley

  2. Everyone’s Data, One Utility
  • Millions of servers, billions of clients …
  • 1000-YEAR durability (excepting fall of society)
  • Maintains privacy, access control, authenticity
  • Incrementally scalable (“evolvable”)
  • Self-maintaining!
  • Not quite peer-to-peer:
    • Utilizing servers in the infrastructure
    • Some computational nodes more equal than others

  3. The Path of an OceanStore Update
  [Figure: clients send updates via second-tier caches to the inner-ring servers; results are disseminated back down multicast trees]
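
The update path on the slide can be sketched in a few lines. This is a toy model, not the OceanStore implementation: the class and method names are invented for illustration, and the inner ring's Byzantine agreement is reduced to appending to a log.

```python
class SecondTierCache:
    """Caches replicas; applies tentative updates, then commits the ring's order."""
    def __init__(self):
        self.tentative, self.committed = [], []

    def apply_tentative(self, update):
        self.tentative.append(update)        # local clients see this immediately

    def commit(self, ordered):
        self.committed = list(ordered)       # authoritative order from the ring
        self.tentative = [u for u in self.tentative if u not in ordered]

class InnerRing:
    """Stand-in for the Byzantine inner ring: serializes updates into one order."""
    def __init__(self):
        self.log = []

    def serialize(self, update):
        self.log.append(update)              # the real ring runs BFT agreement here
        return list(self.log)

def path_of_update(update, cache, ring):
    cache.apply_tentative(update)            # 1. optimistic apply at the second tier
    order = ring.serialize(update)           # 2. inner ring orders (and signs) it
    cache.commit(order)                      # 3. committed result flows back down
    return order
```

The tentative-then-commit split is what gives clients low-latency visibility while the inner ring provides the single serialized order.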

  4. Big Push: OSDI
  • We analyzed and tuned the write path
  • Many different bottlenecks and bugs found
  • Currently committing data and archiving it at about 3-5 Mb/sec

  5. Big Push: OSDI
  • Stabilized basic OceanStore code base
  • Interesting issues:
    • Cryptography in the critical path
    • Fragment generation/SHA-1 limiting archival throughput at the moment
    • Signatures are a problem for the inner ring
      • (although Sean will tell you about a cute batching trick)
    • The second tier can shield the inner ring
      • Actually shown this with a flash-crowd-like benchmark
    • Berkeley DB has a max limit of approx. 10 Mb/sec
      • The buffer-cache layer can’t meet that
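
Why hashing sits in the critical path is easy to see with a micro-benchmark: every archival fragment must be SHA-1 hashed, so hash throughput caps archival throughput. A minimal sketch, assuming an illustrative 16 KB fragment size (not a figure from the talk):

```python
import hashlib
import time

def sha1_throughput(fragment_size=16 * 1024, total_bytes=64 * 1024 * 1024):
    """Hash total_bytes of data in fragment-sized chunks; return MB/s."""
    buf = b"\x00" * fragment_size
    start = time.perf_counter()
    hashed = 0
    while hashed < total_bytes:
        hashlib.sha1(buf).digest()
        hashed += fragment_size
    elapsed = time.perf_counter() - start
    return total_bytes / elapsed / 1e6

print(f"SHA-1 throughput: {sha1_throughput():.1f} MB/s")
```

Whatever number this prints on a given machine is an upper bound on what the archival layer can sustain there, since coding and I/O costs come on top of it.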

  6. OceanStore Goes Global!
  • OceanStore components running “globally”:
    • Australia, Georgia, Washington, Texas, Boston
  • Able to run the Andrew file-system benchmark with the inner ring spread throughout the US
    • Interface: NFS on OceanStore
  • Word on the street: it was easy to do
    • The components were debugged locally
    • Easily set up remotely
  • I am currently talking with people in:
    • England, Maryland, Minnesota, …
  • The Intel P2P testbed will give us access to much more

  7. Inner Ring
  • Running a Byzantine ring based on Castro-Liskov
    • Elected “general” serializes requests
  • Proactive threshold signatures
    • Permit the generation of a single signature from the Byzantine agreement process
  • Highly tuned cryptography (in C)
  • Batching of requests yields higher throughput
  • Delayed updates to the archive
    • Batches archival ops for somewhat quiet periods
  • Currently getting approximately 5 Mb/sec
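
The batching idea is that one expensive signature can cover a whole batch of updates instead of one signature per update. A minimal sketch of that amortization, using an HMAC purely as a stand-in for the ring's threshold signature (the key and function names here are invented for illustration):

```python
import hashlib
import hmac

RING_KEY = b"stand-in for the ring's threshold-signature key"  # illustrative only

def batch_digest(updates):
    """Hash a batch of serialized updates into one digest, to be signed once."""
    h = hashlib.sha1()
    for u in updates:
        h.update(hashlib.sha1(u).digest())   # hash-of-hashes keeps order explicit
    return h.digest()

def sign_batch(updates):
    # One MAC stands in for one threshold signature covering the whole batch,
    # amortizing the expensive signing step over every update in it.
    return hmac.new(RING_KEY, batch_digest(updates), hashlib.sha1).digest()

def verify_batch(updates, sig):
    return hmac.compare_digest(sign_batch(updates), sig)
```

Verification cost per update falls roughly linearly in the batch size, which is why batching shields the inner ring's signing step from per-request load.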

  8. We have Throughput Graphs! (Sean will discuss)

  9. Self-Organizing Second Tier
  • Have simple algorithms for placing replicas on nodes in the interior
    • Intuition: locality properties of Tapestry help select positions for replicas
    • Tapestry helps associate parents and children to build the multicast tree
  • Preliminary results show that this is effective
  • We have tentative writes!
    • Allows local clients to see data quickly
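
The parent/child association can be sketched as: each replica walks its overlay route toward the tree root and attaches to the first node it meets that is already in the tree. This is a simplified model, not the OceanStore code; the routes are supplied explicitly rather than derived from real Tapestry tables.

```python
def build_dissemination_tree(root, routes):
    """Attach each replica to the first tree member on its route toward the root.

    routes maps replica -> ordered list of overlay hops ending at the root.
    In Tapestry those hops share progressively longer suffixes with the root,
    so nearby replicas tend to meet the tree early (the locality intuition).
    Replicas are processed in dict order, which here stands in for join order.
    """
    parent = {root: None}
    for replica, hops in routes.items():
        for hop in hops:
            if hop in parent:            # first member of the tree on the path
                parent[replica] = hop
                break
    return parent
```

Because later joiners attach below earlier nearby joiners rather than directly at the root, the resulting tree spreads multicast fan-out across the second tier.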

  10. Effectiveness of second tier

  11. Archival Layer
  • Initial implementation needed lots of tuning
    • Was getting 1 Mb/sec coding throughput
  • Still lots of room to go:
    • A “C” version of fragmentation could get 26 MB/s
    • SHA-1 evaluation is expensive
  • Beginnings of online analysis of servers
    • Collection facility similar to a web crawler
    • Exploring failure correlations for global web sites
    • Eventually used to help distribute fragments
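
The per-block cost structure of the archival path can be sketched as: split a block into fragments, name each fragment by its SHA-1 hash (so any server can verify a fragment it holds), and derive a block identifier from the fragment names. This sketch only slices the block; real OceanStore erasure-codes it so that any m of n fragments suffice to reconstruct.

```python
import hashlib

def fragment_block(block, n_fragments=8):
    """Slice a block into fragments, each named by its SHA-1 hash.

    Returns (block_guid, [(name, fragment), ...]). The GUID here is simply
    a hash over the fragment names, an illustrative stand-in for the real
    verification structure.
    """
    size = -(-len(block) // n_fragments)   # ceiling division
    frags = [block[i:i + size] for i in range(0, len(block), size)]
    names = [hashlib.sha1(f).digest() for f in frags]
    guid = hashlib.sha1(b"".join(names)).digest()
    return guid, list(zip(names, frags))

def verify_fragment(name, frag):
    """A fragment is self-verifying: rehash it and compare to its name."""
    return hashlib.sha1(frag).digest() == name
```

Note the n SHA-1 invocations per block: that is exactly the per-fragment hashing the slide identifies as the expensive step.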

  12. New Metric: FBLPY
  • FBLPY: Fraction of Blocks Lost Per Year
  • No more discussion of 10^34-year MTTFs
  • Easier to understand?
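
Under m-of-n erasure coding, FBLPY has a direct binomial form: a block is lost in a year only if fewer than m of its n fragments survive the year. A minimal computation, assuming independent fragment losses; the parameter values are illustrative, not figures from the talk:

```python
from math import comb

def fblpy(n=32, m=16, p_frag_loss=0.1):
    """Fraction of Blocks Lost Per Year for m-of-n erasure coding.

    A block survives the year iff at least m of its n fragments survive;
    fragments are assumed to be lost independently with p_frag_loss each.
    """
    q = 1 - p_frag_loss
    p_survive = sum(comb(n, k) * q**k * p_frag_loss**(n - k)
                    for k in range(m, n + 1))
    return 1 - p_survive

# Coding vs. simple mirroring (n=2, m=1) at the same per-fragment loss rate:
print(f"mirroring:  {fblpy(n=2,  m=1,  p_frag_loss=0.1):.2e}")
print(f"16-of-32:   {fblpy(n=32, m=16, p_frag_loss=0.1):.2e}")
```

The point of the metric shows up immediately: at the same per-fragment loss rate, wide erasure coding drives the yearly block-loss fraction many orders of magnitude below mirroring.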

  13. Basic Tapestry Mesh: Incremental suffix-based routing
  [Figure: Tapestry routing mesh of nodes labeled by hexadecimal NodeID (0x79FE, 0x23FE, 0x43FE, 0x1290, …), with edges numbered 1-4 by the routing level at which each hop matches one more suffix digit of the destination]
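
Incremental suffix-based routing means each hop must match one more trailing digit of the destination ID. A minimal sketch using the NodeIDs from the figure (without the 0x prefix); a global node list stands in for real per-node routing tables, and ties are broken by sorting rather than by network distance:

```python
def suffix_route(source, dest, nodes, digits=4):
    """Route from source toward dest, matching one more suffix digit per hop.

    At level i the next hop must agree with dest in its last i hex digits.
    nodes is the set of live NodeIDs; real Tapestry would consult the current
    node's level-i routing table and use surrogate routing on empty entries.
    """
    path = [source]
    current = source
    for i in range(1, digits + 1):
        suffix = dest[-i:]
        if current.endswith(suffix):
            continue                      # already matches at this level
        candidates = sorted(n for n in nodes if n.endswith(suffix))
        if not candidates:
            break                         # surrogate routing would handle this
        current = candidates[0]
        path.append(current)
    return path
```

For example, routing from 1290 to 43FE fixes one suffix digit per hop: each successive node ends in E, then FE, then 3FE, then 43FE, mirroring the numbered levels in the figure.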

  14. Dynamic Adaptation in Tapestry
  • New algorithms for nearest-neighbor acquisition [SPAA ’02]
  • Massive parallel inserts with objects staying continuously available [SPAA ’02]
  • Deletes (voluntary and involuntary) [SPAA ’02]
  • Hierarchical object search for mobility [MOBICOM submission]
  • Continuous adjustment of neighbor links to adapt to failure [ICNP]
  • Hierarchical routing (Brocade) [IPTPS ’01]

  15. Reality: Web Caching through OceanStore

  16. Other Apps
  • This summer: email through OceanStore
    • IMAP and POP proxies
    • Let normal mail clients access mailboxes in OceanStore
  • Palm Pilot synchronization
    • Palm database as an OceanStore DB
  • Better file-system support
    • Windows IFS (really!)

  17. Summer Work
  • Big push to get privacy aspects of OceanStore up and running
  • Big push for more apps
  • Big push for introspective computing aspects
    • Continuous adaptation of the network
    • Replica placement
    • Management/recovery
    • Continuous archival repair
  • Big push for stability
    • Getting a stable OceanStore running continuously
    • Over big distances
  • …

  18. For more info
  • OceanStore vision paper from ASPLOS 2000: “OceanStore: An Architecture for Global-Scale Persistent Storage”
  • OceanStore paper on maintenance (IEEE IC): “Maintenance-Free Global Data Storage”
  • SPAA paper on dynamic integration: “Distributed Object Location in a Dynamic Network”
  • All available on the OceanStore web site: http://oceanstore.cs.berkeley.edu/
