World Telecommunication Congress 2010 Network & Service Management Reliability September 13-14

Presentation Transcript

  1. A P2P-based Storage Platform for Storing Session Data in Internet Access Networks. Peter Danielis, M. Gotzmann, D. Timmermann, University of Rostock, Germany, Institute of Applied Microelectronics and Computer Engineering; T. Bahls, D. Duchow, Nokia Siemens Networks, Broadband Access Division, Greifswald, Germany. World Telecommunication Congress 2010, Network & Service Management Reliability, September 13-14

  2. Outline • Introduction & Motivation • Utilization of P2P Technology • Erasure Resilient Codes for High Data Availability • Realization of the P2P-based Storage Platform • Summary

  3. Introduction & Motivation • Internet Service Providers (ISPs) provide Internet access → Access nodes (ANs) = essential network elements • E.g., DSLAMs (Digital Subscriber Line Access Multiplexers)

  4. Introduction & Motivation → Access nodes (ANs) = essential network elements • ANs have to be powerful but well-priced → ANs ≠ servers! • Budget with available resources!

  5. Introduction & Motivation → Access nodes (ANs) = essential network elements • ANs need resets (or may fail) → data must not be lost! • AN configuration data needs to be saved persistently! • But there's more…

  6. Introduction & Motivation • Data - called session data - … • … comprises MAC/IP addresses and IP lease times of customers • … is required for data forwarding/traffic filtering. DHCP Request: I have MAC address 00-50-04-E1-15-A0! DHCP Response: Your IP address is 139.30.201.254 for 60 min! MAC address: 00-50-04-E1-15-A0, IP address: 139.30.201.254, Lease Time: 60 min, Active: No

  7. Introduction & Motivation • Data - called session data - … • … comprises MAC/IP addresses and IP lease times of customers • … is required for data forwarding/traffic filtering • … has to be always available → persistent storage needed • … is highly volatile due to continuous changes. DHCP Request: I have MAC address 00-50-04-E1-15-A0! DHCP Response: Your IP address is 139.30.201.254 for 60 min! MAC address: 00-50-04-E1-15-A0, IP address: 139.30.201.254, Lease Time: 60 min, Active: Yes
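For concreteness, one record of this session data could look roughly like the Python sketch below. The field names, the helper function, and the DHCP-driven update are illustrative assumptions; the slides only list the stored attributes.

# Minimal sketch of one session-data record as listed on the slide;
# all names are assumptions made for illustration.
from dataclasses import dataclass

@dataclass
class SessionRecord:
    mac_address: str      # e.g. "00-50-04-E1-15-A0"
    ip_address: str       # e.g. "139.30.201.254"
    lease_time_s: int     # e.g. 3600 for a 60 min lease
    active: bool          # True once the DHCP exchange has completed

def on_dhcp_response(mac: str, ip: str, lease_s: int) -> SessionRecord:
    # Create or refresh the record when the AN relays the DHCP response.
    return SessionRecord(mac_address=mac, ip_address=ip,
                         lease_time_s=lease_s, active=True)

# Usage with the values from the slide's example:
record = on_dhcp_response("00-50-04-E1-15-A0", "139.30.201.254", 60 * 60)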

  8. Introduction & Motivation • Today: ANs store session data in persistent flash memory • Problem: Flash memory has limited availability/rewritability • ISPs "sacrifice" flash memory for session data only reluctantly

  9. Introduction & Motivation • Today: ANs store session data in persistent flash memory • Problem: Flash memory has limited availability/rewritability • Solution: Use the available volatile RAM resources of the ANs!

  10. Introduction & Motivation • Average AN, e.g., PowerQuicc III (Freescale Semiconductor) • RAM capacity = 1 Gbyte + unlimited rewritability

  11. Introduction & Motivation • Average AN, e.g., PowerQuicc III (Freescale Semiconductor) • Computing capacity = 1234 Dhrystone MIPS

  12. Introduction & Motivation • Average AN, e.g., PowerQuicc III (Freescale Semiconductor) • Computing capacity = 1234 Dhrystone MIPS

  13. Introduction & Motivation • Average AN, e.g., PowerQuicc III (Freescale Semiconductor) • Problem: How to efficiently utilize the available resources?

  14. Outline • Introduction & Motivation • Utilization of P2P Technology • Erasure Resilient Codes for High Data Availability • Realization of the P2P-based Storage Platform • Summary

  15. What options does P2P offer? • ...beyond the notorious applications, of course. • New networking paradigm • No clients and servers anymore

  16. What options does P2P offer? • ...beyond the notorious applications, of course. • New networking paradigm • No clients and servers anymore • All peers form a self-organizing network • Network = storage resource • Network = computing resource • Scalability and resilience = intrinsic features • Proven concept (BitTorrent, Zattoo, Joost)

  17. Utilization of P2P technology • Networking paradigm: Each AN is part of a logical P2P overlay on its uplink • Network = storage resource: Each AN stores just a piece of session data • Network = computing resource: Each AN implements the P2P protocol • But ANs may become unavailable… • Problem: How to ensure high data availability? [Figure: storage capacity of the ANs]

  18. Outline • Introduction & Motivation • Utilization of P2P Technology • Erasure Resilient Codes (ERCs) for High Data Availability • Realization of the P2P-based Storage Platform • Summary

  19. ERCs for High Data Availability • Objective: High session data availability = 99.999 % • Simple replication wastes memory resources → Reed-Solomon codes • Split the session data of each AN into m data chunks

  20. ERCs for High Data Availability • Objective: High session data availability = 99.999 % • Simple replication wastes memory resources → Reed-Solomon codes • Split the session data of each AN into m data chunks • Encoding: Add k interleaved coding chunks → n = m + k chunks

  21. ERCs for High Data Availability • Objective: High session data availability = 99.999 % • Simple replication wastes memory resources → Reed-Solomon codes • Split the session data of each AN into m data chunks • Encoding: Add k interleaved coding chunks → n = m + k chunks • Decoding: Restore the session data from any m of the n chunks
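Why an m-of-n code gives high availability at low redundancy can be made concrete with a short calculation: the data survives as long as at least m of the n = m + k chunks remain reachable. The sketch below is not part of the slides; the per-AN availability p and the chunk counts are assumed, illustrative values.

# Minimal sketch: availability of m-of-n erasure-coded data vs. replication,
# assuming n ANs that fail independently, each available with probability p.
from math import comb

def erasure_coded_availability(m: int, k: int, p: float) -> float:
    # Probability that at least m of the n = m + k chunks are reachable.
    n = m + k
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(m, n + 1))

def replication_availability(copies: int, p: float) -> float:
    # Probability that at least one full replica is reachable.
    return 1 - (1 - p)**copies

p = 0.99      # assumed per-AN availability
m, k = 8, 4   # assumed split: 8 data chunks + 4 coding chunks (50 % redundancy)
print(f"ERC (m={m}, k={k}): {erasure_coded_availability(m, k, p):.7f}")
print(f"2x replication:     {replication_availability(2, p):.7f} (100 % redundancy)")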

  22. Outline • Introduction & Motivation • Utilization of P2P Technology • Erasure Resilient Codes for High Data Availability • Realization of the P2P-based Storage Platform • Summary

  23. Kad-based Realization

  24. Kad-based Realization • Connection of the access nodes (ANs) with a P2P-based overlay

  25. Kad-based Realization • Connection of the access nodes (ANs) with a P2P-based overlay • P2P protocol: Kad-based Distributed Hash Table (DHT) ring

  26. Kad-based Realization • Connection of the access nodes (ANs) with a P2P-based overlay • P2P protocol: Kad-based Distributed Hash Table (DHT) ring • Structured chunk storage via the DHT ring • Assignment of hash values to ANs and session data chunks • ANs save session data chunks with similar hash values

  27. Kad-based Realization • Connection of the access nodes (ANs) with a P2P-based overlay • P2P protocol: Kad-based Distributed Hash Table (DHT) ring • Structured chunk storage via the DHT ring • Assignment of hash values to ANs and session data chunks • ANs save session data chunks with similar hash values [Figure label: Admin]
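A minimal sketch of the placement rule from the slides above: every AN and every chunk gets a hash value, and a chunk is stored on the AN whose hash is most similar. Closeness is measured here by the XOR metric over a 128-bit space, matching the Kad conventions in the backup slides; the hash function (MD5, chosen only because it yields 128 bits) and all names are assumptions.

# Minimal sketch: assign session-data chunks to the AN with the closest
# 128-bit hash value (XOR metric). Hash choice and names are assumptions.
import hashlib

def hash128(identifier: str) -> int:
    # Map an identifier into a 128-bit Kad-style address space.
    return int.from_bytes(hashlib.md5(identifier.encode()).digest(), "big")

def closest_an(chunk_key: str, an_hashes: dict) -> str:
    # an_hashes: {an_name: 128-bit hash}. Smallest XOR distance wins.
    chunk_hash = hash128(chunk_key)
    return min(an_hashes, key=lambda an: an_hashes[an] ^ chunk_hash)

# Usage: each chunk of an AN's session data is stored on the closest AN.
an_hashes = {name: hash128(name) for name in ("AN-1", "AN-2", "AN-3", "AN-4")}
print(closest_an("AN-1/session-data/chunk-0", an_hashes))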

  28. Block Diagram • The main components are… [Figure: block diagram with DHCP server]

  29. Block Diagram • (1) Module with controlling functionality [Figure callouts: "Time to Save Session Data!", "Save Session Data!"; DHCP server]

  30. Block Diagram • (2) Memory with own session data

  31. Block Diagram • (3) Kad block with ERC functionality

  32. Block Diagram • (4) Routing table

  33. Block Diagram • (5) Memory with session data chunks of other nodes
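Taken together, the five components could be composed on one AN roughly as in the structural sketch below. The slides only name the components (1)-(5); all class, attribute, and type names here are assumptions for illustration.

# Minimal structural sketch of the five block-diagram components on one AN;
# every name and type below is an assumption, not taken from the paper.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class AccessNodeStorageStack:
    # (1) Module with controlling functionality: decides when session data
    #     has to be saved or restored.
    controller: Callable[[], None]
    # (2) Memory with the AN's own session data (e.g. MAC address -> record).
    own_session_data: Dict[str, object] = field(default_factory=dict)
    # (3) Kad block with ERC functionality: encodes session data into chunks
    #     and publishes/retrieves them via the DHT.
    erc_encode: Callable[[bytes], List[bytes]] = None
    erc_decode: Callable[[List[bytes]], bytes] = None
    # (4) Routing table: known peer contacts as (node id, address) pairs.
    routing_table: List[Tuple[int, str]] = field(default_factory=list)
    # (5) Memory with session data chunks stored on behalf of other nodes.
    foreign_chunks: Dict[int, bytes] = field(default_factory=dict)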

  34. Outline • Introduction & Motivation • Utilization of P2P Technology • Erasure Resilient Codes for High Data Availability • Realization of the P2P-based Storage Platform • Summary

  35. Summary • Successful development of a P2P-based storage platform • Utilization of free RAM instead of scarce flash memory • Connection of the access nodes by a P2P overlay • High scalability and resilience against network errors • Efficient sharing of RAM and computing resources • ERCs for high data availability & low redundancy • Completion of a fully functional prototype

  36. Thank you! Any questions? peter.danielis@uni-rostock.de, http://www.imd.uni-rostock.de/networking

  37. Backup: Related Work • J. Kubiatowicz et al., "OceanStore: An architecture for global-scale persistent storage", 2000 • Schwarz, Xin, Miller, "Availability in Global Peer-To-Peer Storage Systems", 2004 • Sattler, Hauswirth, Schmidt, "UniStore: Querying a DHT-based Universal Storage", 2007 • Morariu, "DIPStorage: Distributed Storage of IP Flow Records", 2008

  38. Backup: Kad-based DHT • Kad (eMule): 128-bit address space • Distances between hash values are calculated by the XOR metric

  39. Backup: Kad Routing Table • Binary tree with the XOR distances of other peers to itself • Organized into k-buckets • Each peer knows many close peers • Each peer knows only a few distant peers • Each peer has a life time
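A minimal sketch (assumed names, assumed bucket size K) of how a contact is sorted into a bucket: the bucket index follows from the most significant bit in which the peer's ID differs from the local ID. The few buckets for distant IDs each cover a huge ID range, so only a few of the many distant peers are known, while nearby IDs get their own narrow buckets.

# Minimal sketch of k-bucket selection by XOR distance; the 128-bit width
# matches the Kad address space, K and the function names are assumptions.
ID_BITS = 128
K = 10  # assumed bucket capacity

def bucket_index(own_id: int, peer_id: int) -> int:
    # Index of the most significant differing bit (0 = farthest bucket).
    distance = own_id ^ peer_id
    if distance == 0:
        raise ValueError("a peer does not store itself")
    return ID_BITS - distance.bit_length()

def insert_contact(buckets, own_id: int, peer_id: int) -> bool:
    # buckets: list of ID_BITS lists of peer IDs. Returns True if stored.
    bucket = buckets[bucket_index(own_id, peer_id)]
    if peer_id in bucket:
        return True
    if len(bucket) < K:
        bucket.append(peer_id)
        return True
    return False  # bucket full; real Kad would first re-check stale contacts

# Usage: buckets = [[] for _ in range(ID_BITS)]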

  40. Backup: Kad Bootstrapping & Maintenance • Bootstrapping: A new peer contacts a known peer and inserts itself into the ring • Maintenance: Contact peers from the routing table whose life time has expired • Contact other peers periodically to learn new contacts
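The two maintenance rules could be expressed roughly as in the sketch below; the life-time value, the ping()/lookup() helpers, and the data structures are assumptions, not taken from the slides.

# Minimal sketch of the maintenance rules above; the timing constant and
# the ping()/lookup() helpers are assumptions.
import random
import time

LIFETIME_S = 300   # assumed contact life time

def maintain(routing_table, ping, lookup, now=time.time):
    # routing_table: dict {peer_id: last_seen_timestamp}.
    current = now()
    # Rule 1: contact peers whose life time has expired; drop unresponsive ones.
    for peer_id, last_seen in list(routing_table.items()):
        if current - last_seen > LIFETIME_S:
            if ping(peer_id):
                routing_table[peer_id] = current
            else:
                del routing_table[peer_id]
    # Rule 2: periodically look up a random ID to learn new contacts.
    for peer_id in lookup(random.getrandbits(128)):
        routing_table.setdefault(peer_id, current)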

  41. Backup: Kad Lookup Process • Searching peer selects peers close to the target • These peers are contacted via a request • Some respond with new peers

  42. Backup: Kad Lookup Process • Some of the new peers are contacted • Some of them respond

  43. Backup: Kad Lookup Process • Responding peers within a defined search tolerance receive an action request: Execute the action! • If they send an action response, a counter is increased • If counter == defined value, the lookup terminates • Otherwise, it is terminated via a timeout
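The termination rule could look roughly like the sketch below; the search tolerance, the required response count, the timeout, and the helper names are assumptions rather than values from the slides.

# Minimal sketch of the action phase and its two termination conditions;
# all constants and helper names are assumptions.
import time

SEARCH_TOLERANCE = 1 << 110   # assumed XOR-distance tolerance around the target
REQUIRED_RESPONSES = 3        # assumed "defined value" for the response counter
TIMEOUT_S = 5.0               # assumed lookup timeout

def action_phase(target_id, candidate_ids, send_action_request):
    # Send action requests to peers inside the tolerance and count responses.
    counter = 0
    deadline = time.time() + TIMEOUT_S
    for peer_id in candidate_ids:
        if peer_id ^ target_id > SEARCH_TOLERANCE:
            continue                      # outside the search tolerance
        if send_action_request(peer_id):  # True = peer sent an action response
            counter += 1
        if counter == REQUIRED_RESPONSES:
            return "terminated: enough action responses"
        if time.time() > deadline:
            return "terminated: timeout"
    return "terminated: timeout"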

  44. Backup: Prototype

  45. Backup: Related Issues • Benefit from using ERCs instead of data replication • Moderate quantitative memory savings • But significantly higher data availability • Kad network: the open-source code is of high quality! • Minimal traffic overhead introduced by Kad maintenance

  46. Backup: Memory Requirements & Performance • Currently, the prototype is being ported to a Xilinx FPGA board • A long-term test/simulation of the prototype at our institute is intended • Functional verification • Determination of performance • Determination of memory requirements • Determination of CPU utilization