Providing Secure Storage on the Internet

Barbara Liskov & Rodrigo Rodrigues

MIT CSAIL

April 2005

Internet Services
  • Store critical state
  • Are attractive targets for attacks
  • Must continue to function correctly in spite of attacks and failures
Replication Protocols
  • Allow continued service in spite of failures
    • Fail-stop failures
    • Byzantine failures
  • Byzantine failures really happen!
    • Malicious attacks
Internet Services 2
  • Very large scale
    • Amount of state
    • Number of users
    • Implies lots of servers
  • Must be dynamic
    • System membership changes over time
BFT-LS
  • Provide support for Internet services
    • Highly available and reliable
    • Very large scale
    • Changing membership
  • Automatic reconfiguration
    • Avoid operator errors
  • Extending replication protocols
Outline
  • Application structure
  • MS specification
  • MS implementation
  • Application methodology
  • Performance and analysis
[Diagram: several clients (C) and servers (S) communicating over an unreliable network]
System Model
  • Many servers, clients
  • Service state is partitioned among servers
  • Each “item” has a replica group
  • Example applications: file systems, databases
The Membership Service (MS)
  • Reconfigures automatically to reduce operator errors
  • Provides accurate membership information that nodes can agree on
  • Ensures clients are up-to-date
  • Works at large scale
System runs in Epochs
  • Periods of time, e.g., 6 hours
  • Membership is static during an epoch
  • During epoch e, MS computes membership for epoch e+1
  • Epoch duration is a system parameter
  • No more than f failures in any replica group while it is useful
Server IDs
  • Ids chosen by MS
  • Consistent hashing
  • Very large circular id space
Membership Operations
  • Insert and delete node
  • Admission control
    • Trusted authority produces a certificate
  • Insert certificate includes
    • ip address, public key, random number, and epoch range
    • MS assigns the node id ( h(ip,k,n) )
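The id assignment above can be sketched as a hash over the certified fields. This is a minimal illustration: the function name, digest choice (SHA-1), and id-space width are assumptions, not details from the paper.

```python
import hashlib

def node_id(ip: str, public_key: bytes, nonce: int, id_bits: int = 160) -> int:
    """Derive a node id as h(ip, k, n): hash the certified ip address,
    public key k, and random number n into the circular id space."""
    h = hashlib.sha1()
    h.update(ip.encode())
    h.update(public_key)
    h.update(nonce.to_bytes(8, "big"))
    return int.from_bytes(h.digest(), "big") % (2 ** id_bits)
```

Because the id is a deterministic function of the certificate fields, any node can recompute and verify another node's claimed id; the random number prevents a node from choosing its own position on the ring.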
Monitoring
  • MS monitors the servers
    • Sends probes (containing nonces)
    • Some responses must be signed
  • Delayed response to failures
  • Timing of probes, number of missed probes, are system parameters
  • BF nodes (code attestation)
Ending Epochs
  • Stop the epoch after a fixed time
  • Compute the next configuration: the epoch number plus the adds and deletes
  • Sign it
    • MS has a well known public key
  • Propagated to all nodes
    • Over a tree plus gossip
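A minimal sketch of the configuration record and its signature. An HMAC with a shared key stands in for the MS's threshold public-key signature (the real system uses proactive secret sharing); all names here are illustrative.

```python
import hashlib
import hmac
import os
from dataclasses import dataclass

MS_KEY = os.urandom(32)  # stand-in for the MS's well-known signing key

@dataclass(frozen=True)
class Configuration:
    epoch: int       # epoch number
    adds: tuple      # nodes added in this epoch
    deletes: tuple   # nodes deleted in this epoch

def sign_config(cfg: Configuration) -> bytes:
    """Sign the next configuration so every node can verify it against
    the MS's well-known key before adopting it."""
    blob = repr((cfg.epoch, cfg.adds, cfg.deletes)).encode()
    return hmac.new(MS_KEY, blob, hashlib.sha256).digest()

def verify_config(cfg: Configuration, sig: bytes) -> bool:
    """Check a received configuration before installing it."""
    blob = repr((cfg.epoch, cfg.adds, cfg.deletes)).encode()
    return hmac.compare_digest(sig, hmac.new(MS_KEY, blob, hashlib.sha256).digest())
```

Any node receiving the configuration over the tree or via gossip verifies the signature before switching epochs, so a forged configuration is rejected regardless of who relayed it.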
Guaranteeing Freshness
  • Client sends a challenge <nonce> to the MS
  • MS replies with <nonce, epoch #> signed under its well-known key (σMS)
  • The response grants the client a time period T during which it may execute requests
  • T is calculated using the client's clock
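The challenge-response exchange above can be sketched as follows. An HMAC stands in for the MS's real public-key signature, and the lease length is an arbitrary illustrative value.

```python
import hashlib
import hmac
import os
import time

MS_SECRET = os.urandom(32)  # stand-in for the MS signing key (illustrative)

def ms_respond(nonce: bytes, epoch: int):
    """MS answers a client's challenge with <nonce, epoch #> plus a
    signature over both fields."""
    sig = hmac.new(MS_SECRET, nonce + epoch.to_bytes(8, "big"), hashlib.sha256).digest()
    return (nonce, epoch, sig)

def client_check(nonce: bytes, response, lease_seconds: float = 300.0) -> float:
    """Client verifies the response echoes its own nonce, then starts the
    period T, measured on its own clock, during which it may run requests."""
    r_nonce, epoch, sig = response
    expected = hmac.new(MS_SECRET, r_nonce + epoch.to_bytes(8, "big"), hashlib.sha256).digest()
    if r_nonce != nonce or not hmac.compare_digest(sig, expected):
        raise ValueError("stale or forged freshness response")
    return time.monotonic() + lease_seconds  # deadline for executing requests
```

Because the nonce is fresh per challenge, a replayed response from an old epoch fails the check, and because T is computed on the client's own clock, no clock synchronization with the MS is needed.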

Implementing the MS
  • At a single dedicated node
    • Single point of failure
  • At a group of 3f+1
    • Running BFT
    • No more than f failures in system lifetime
  • At the servers themselves
    • Reconfiguring the MS
System Architecture
  • All nodes run application
  • 3f+1 run the MS
Implementation Issues
  • Nodes run BFT
    • State machine replication (e.g., add, delete)
  • Decision making
  • Choosing MS membership
  • Signing
Decision Making
  • Each replica probes independently
  • Removing a node requires agreement
    • One replica proposes
    • 2f+1 must agree
    • Then can run the delete operation
  • Ending an epoch is similar
Moving the MS
  • Needed to handle MS node failures
  • To reduce attack opportunity
    • Move must be unpredictable
  • Secure multi-party coin toss
  • Next replicas are h(c,1), …, h(c,3f+1)
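Deriving the next MS replicas from the coin toss can be sketched as below; the hash choice and id-space width are illustrative assumptions.

```python
import hashlib

def next_ms_replicas(coin: bytes, f: int, id_bits: int = 160) -> list:
    """Given the shared unpredictable coin c from the multi-party coin
    toss, derive the ids h(c,1), ..., h(c,3f+1) that locate the next
    MS replicas on the ring."""
    ids = []
    for i in range(1, 3 * f + 2):
        digest = hashlib.sha1(coin + i.to_bytes(4, "big")).digest()
        ids.append(int.from_bytes(digest, "big") % (2 ** id_bits))
    return ids
```

Every node that learns c computes the same 3f+1 ids, yet an attacker cannot predict them before the coin toss completes, which is what makes the move unpredictable.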
Signing
  • Configuration must be signed
  • There is a well-known public key
  • Proactive secret sharing
  • MS replicas have shares of private key
    • f+1 shares needed to sign
  • Keys are re-shared when MS moves
Changing Epochs: Summary of Steps
  • Run the endEpoch operation on state machine
  • Select new MS replicas
  • Share refreshment
  • Sign new configuration
  • Discard old shares
Example Service
  • Any replicated service
  • Dynamic Byzantine Quorums dBQS
    • Read/Write interface to objects
  • Two kinds of objects
    • Mutable public-key objects
    • Immutable content-hash objects
dBQS Object Placement
  • Consistent hashing
  • 3f+1 successors of object id are responsible for the object

[Ring diagram: an object id and its 3f+1 successor servers, e.g., server ids 14 and 16]
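The successor rule above can be sketched with a sorted ring of server ids; the helper name is illustrative, and wrap-around at the top of the id space is handled with a modulo.

```python
import bisect

def replica_group(object_id: int, server_ids: list, f: int) -> list:
    """Return the 3f+1 servers whose ids are the successors of
    object_id on the circular id space (consistent hashing)."""
    ring = sorted(server_ids)
    start = bisect.bisect_left(ring, object_id)  # first id >= object_id
    n = 3 * f + 1
    return [ring[(start + i) % len(ring)] for i in range(n)]
```

For example, with servers at ids 2, 7, 14, 16, 20 and f = 1, object 9 is stored on its four successors 14, 16, 20, and (wrapping around) 2.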

Byzantine Quorum Operations
  • Public-key objects contain
    • State, signature, version number
  • Quorum is 2f+1 replicas
  • Write:
    • Phase 1: client reads from a quorum to learn the highest v#
    • Phase 2: client writes the new value with a higher v#
  • Read:
    • Phase 1: client gets the value with the highest v#
    • Phase 2: write-back if some replicas have a smaller v#
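The two-phase quorum protocol above can be sketched as an in-process toy. This is a simplified illustration assuming well-behaved replicas modeled as dicts; signatures, network messages, and Byzantine-fault filtering are omitted.

```python
def write(replicas: list, f: int, key, value) -> int:
    """Two-phase write to a quorum of 2f+1 out of 3f+1 replicas."""
    q = 2 * f + 1
    # Phase 1: read from a quorum to learn the highest version number.
    versions = [replicas[i].get(key, (0, None))[0] for i in range(q)]
    new_v = max(versions) + 1
    # Phase 2: write the value with the higher version number to a quorum.
    for i in range(q):
        replicas[i][key] = (new_v, value)
    return new_v

def read(replicas: list, f: int, key):
    """Two-phase read: fetch from a quorum, then write back stale copies."""
    q = 2 * f + 1
    # Phase 1: take the value with the highest version number in a quorum.
    answers = [replicas[i].get(key, (0, None)) for i in range(q)]
    v, value = max(answers, key=lambda a: a[0])
    # Phase 2: write back to replicas holding a smaller version number.
    for i in range(q):
        if replicas[i].get(key, (0, None))[0] < v:
            replicas[i][key] = (v, value)
    return value
```

Any two quorums of 2f+1 out of 3f+1 intersect in at least f+1 replicas, so a read quorum always contains at least one correct replica that saw the latest completed write; the write-back in the read path ensures later readers observe the same value.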
dBQS Algorithms – Dynamic Case
  • Tag all messages with epoch numbers
  • Servers reject requests for wrong epoch
  • Clients execute phases entirely in an epoch
    • Must be holding a valid challenge response
  • Servers upgrade to new configuration
    • If needed, perform state transfer from old group
  • A methodology
Evaluation
  • Implemented MS, two example services
  • Ran a set of experiments on PlanetLab, RON, and a local-area network
MS Scalability
  • Probes – use sub-committees
  • Leases – use aggregation
  • Configuration distribution
    • Use diffs and distribution trees
Time to reconfigure
  • Time to reconfigure is small
  • Variability stems from PlanetLab nodes
  • Only used f = 1, a limitation of the APSS protocol
Failure-free Computation
  • Depends on no more than f failures while a group is useful
  • How likely is this?
Conclusion
  • Providing support for Internet services
  • Scalable membership service
    • Reconfiguring the MS
  • Dynamic replication algorithms
    • dBQS – a methodology
  • Future research
    • Proactive secret sharing
    • Scalable applications