
Distributed Programming for Dummies

A Shifting Transformation Technique

Carole Delporte-Gallet, Hugues Fauconnier,

Rachid Guerraoui, Bastian Pochon

Agenda
  • Motivation
  • Failure patterns
  • Interactive Consistency problem
  • Transformation algorithm
  • Performance
  • Conclusions

Motivation

Distributed programming is not easy

Motivation
  • Provide programming abstractions
    • Hide low level detail
    • Allow working on a strong model
    • Give weaker models automatically

Models

Distributed programming semantics

and failure patterns

Processes
  • We have n distributed processes
  • All processes are directly linked
  • Synchronized world
  • In each round, each process:
    • Receive an external input value
    • Send a message to all processes
    • Receive all messages sent to it
    • Local computation and state change
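The round structure above can be sketched as a toy synchronous simulation; all names here (Network, run_round, and so on) are illustrative, not from the paper:

```python
# Toy simulation of the synchronous round structure: every process
# receives an input, broadcasts, collects all round messages, and then
# changes state. All names here are illustrative, not from the paper.
from typing import Any, Dict, List

class Network:
    def __init__(self, n: int):
        self.n = n
        self.inboxes: Dict[int, List[Any]] = {i: [] for i in range(n)}

    def broadcast(self, sender: int, msg: Any) -> None:
        # Synchrony: the message reaches every process within the round.
        for pid in range(self.n):
            self.inboxes[pid].append((sender, msg))

    def collect(self, pid: int) -> List[Any]:
        msgs, self.inboxes[pid] = self.inboxes[pid], []
        return msgs

def run_round(net: Network, inputs: List[Any]) -> List[Any]:
    # Phase 1: each process sends a message to all processes.
    for pid in range(net.n):
        net.broadcast(pid, inputs[pid])
    # Phase 2: each process receives all messages and updates its state.
    return [sorted(net.collect(pid)) for pid in range(net.n)]

states = run_round(Network(3), ["a", "b", "c"])
assert states[0] == states[1] == states[2] == [(0, "a"), (1, "b"), (2, "c")]
```

In a real system the two phases are enforced by the synchrony assumption: all of a round's messages are delivered before the round ends.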
PSR
  • Perfectly Synchronized Round-based model
  • Processes can only have atomic failures
    • They are only allowed to crash/stop
    • A crash never interrupts a broadcast: in every round a process’s message reaches either all processes or none
Crash
  • Processes can only have crash failures
    • They are only allowed to crash/stop
    • They can also crash in the middle of sending out a message

Upon a crash, a message may reach only a subset of the other processes

Omission
  • Processes can have crash failures
  • Processes can have send-omission failures
    • They can send out a message to only a subset of processes in a given round
General
  • Processes can have crash failures
  • Processes can have general-omission failures
    • They can fail to send or receive a message to or from a subset of processes in a given round
Failure models
  • PSR(n,t)
  • Crash(n,t)
  • Omission(n,t)
  • General(n,t)

We’d like to write protocols for PSR and run them in weaker failure models
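One way to make the hierarchy concrete is to model what happens to a faulty process's broadcast in each model. The sketch below is a hypothetical illustration (the function and the probabilities are assumptions, not from the paper):

```python
import random

def faulty_broadcast(model: str, n: int, rng: random.Random) -> set:
    """Which processes receive a faulty sender's round message, under
    each failure model (probabilities here are arbitrary)."""
    everyone = set(range(n))
    if model == "PSR":
        # Atomic failure: the broadcast reaches everyone or no one.
        return everyone if rng.random() < 0.5 else set()
    if model == "Crash":
        # The sender may crash mid-broadcast: an arbitrary subset
        # receives the message, and the sender stays silent afterwards.
        return set(rng.sample(sorted(everyone), rng.randrange(n + 1)))
    if model in ("Omission", "General"):
        # Send-omissions: any of the n sends may be dropped in any round
        # (General additionally allows drops on the receiving side).
        return {p for p in everyone if rng.random() < 0.8}
    raise ValueError(f"unknown model: {model}")

rng = random.Random(0)
assert faulty_broadcast("PSR", 4, rng) in (set(), {0, 1, 2, 3})
```

PSR's all-or-none delivery is what makes it the strongest model: a protocol written for PSR never has to cope with a partial broadcast.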


Interactive Consistency

An agreement algorithm

Interactive Consistency
  • Synchronous world
  • We have n processors
  • Each has a private value
  • We want every “good” processor to learn the vector of the values of all the “good” processors
  • Let’s assume that faulty processors can only lie about their own value (or omit messages)
IC Algorithm: 1st step

[Diagram: process a receives the messages B, C and D from processes b, c and d]

Each process sends a “my value is v” message to all the other processes

IC Algorithm: 2nd step

[Diagram: process a receives B, B(c), B(d) from b; C, C(b), C(d) from c; and D, D(b), D(c) from d]

Each process sends “x told me that y has the value of z; y told me that …”

IC Algorithm: ith step

[Diagram: process a receives B, B(c), B(d), B(c(d)), … from b; C, C(b), C(d), … from c; and D, D(b), D(c) from d]

Each process sends “x told me that y told it that z has the value of q; y told me that …”
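The nested relays form a tree of rumors that grows one level deeper each round. A toy sketch, under an assumed representation (a rumor chain is a tuple of process ids, newest relayer first, ending at the process whose value it reports):

```python
# Toy model of the nested relays: a rumor chain is a tuple of process
# ids, newest relayer first, ending at the process whose value it
# reports. This representation is an assumption of the illustration.
def one_gossip_round(knowledge):
    """knowledge[p] maps rumor chain -> value. Every process sends all
    it knows; receivers record each rumor one relay level deeper."""
    n = len(knowledge)
    new = [dict(k) for k in knowledge]
    for s in range(n):
        for chain, value in knowledge[s].items():
            # s relaying its own direct report keeps the chain as-is;
            # otherwise the chain grows: "s told me that ..."
            ext = chain if chain[0] == s else (s,) + chain
            if len(set(ext)) != len(ext):
                continue  # drop chains that pass through a process twice
            for r in range(n):
                new[r].setdefault(ext, value)
    return new

# Round 1: each process knows only its own value, as chain (p,).
knowledge = [{(p,): f"v{p}"} for p in range(3)]
knowledge = one_gossip_round(knowledge)  # direct reports spread
knowledge = one_gossip_round(knowledge)  # second-hand rumors appear
assert knowledge[0][(2,)] == "v2"        # "2 says its value is v2"
assert knowledge[0][(1, 2)] == "v2"      # "1 says that 2 said v2"
```

This also makes the message blow-up visible: every round multiplies the number of chains each process must relay.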

IC Algorithm: and faults?
  • When a processor omits a message, we just assume NIL as its value
  • Example:
    • NIL(b(d))

“d said nothing about b’s value”

IC Algorithm: deciding
  • Looking at all the “rumors” that a knows about the private value of b
    • We choose the rumor value if a single one exists, or NIL otherwise
  • If b is non-faulty, then we get B or NIL as the result
  • If b is faulty, then a and c still decide the same value for it (the single rumor value, or NIL)
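A sketch of the decision rule, with assumed representations (a rumor is a tuple of process ids ending at the process whose value it reports; NIL is modeled as None):

```python
# Sketch of a's decision rule for b's entry, under assumed
# representations: a rumor chain is a tuple of process ids ending at
# the process whose value it reports; NIL is modeled as None.
NIL = None

def decide_value(rumors, target):
    """Choose the single rumor value about `target` if exactly one
    non-NIL value exists among all chains ending at it, else NIL."""
    values = {v for chain, v in rumors.items()
              if chain[-1] == target and v is not NIL}
    return values.pop() if len(values) == 1 else NIL

# All rumors about process 2 agree (an omission shows up as NIL):
assert decide_value({(2,): "v2", (1, 2): "v2", (0, 1, 2): NIL}, 2) == "v2"
# A faulty process 2 told different processes different values:
assert decide_value({(2,): "x", (1, 2): "y"}, 2) is NIL
```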
IC Algorithm
  • We need k+1 rounds for k faulty processes
  • We’re sending out a lot of messages
PSR
  • Synchronous model
    • We are not going to do anything with this
  • Performance
    • Automatically transforming a protocol from PSR to a weaker model is costly
    • We are going to deal only with time
Why?
  • IC costs t+1 rounds
    • Simulating K PSR rounds therefore costs K(t+1) rounds
  • Optimized IC variants decide in 2 rounds for failure-free runs
    • That still gives K PSR rounds in 2K+f rounds, for f actual failures
  • We would like to get K+C rounds, for a constant C
The algorithm
  • If a process realizes it is faulty in any way, it simulates a crash in PSR
  • We run IC instances in parallel, starting a new one for each PSR round
  • Several IC instances can be running in parallel at the same time
  • Each process runs the algorithm of all the processes to reconstruct the failure pattern
The algorithm

for phase r do
    input := receiveInput()
    start IC instance r with input
    execute one round of all pending IC instances
    for each decided IC instance do
        update states, decision vector and failures list
        modify received messages according to faulty statuses
        simulate the state transition for all processes
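The phase loop can be sketched in Python; the IC machinery is replaced by a stub that decides after a fixed number of rounds, and every name here is illustrative:

```python
# Skeleton of the phase loop: start one IC instance per phase, advance
# all pending instances one round per phase, harvest decisions in order.
# StubIC stands in for the real IC protocol; all names are illustrative.
class StubIC:
    def __init__(self, round_no, value, delay=2):
        self.round_no, self.value = round_no, value
        self.remaining = delay  # rounds until this instance decides

    def step(self):
        self.remaining -= 1     # one IC round executed

    def decided(self):
        return self.remaining <= 0

    def decision(self):
        return (self.round_no, self.value)

def run_transformation(num_phases, receive_input, new_ic):
    pending, decided = [], []
    for r in range(num_phases):
        pending.append(new_ic(r, receive_input(r)))  # start IC instance r
        for ic in pending:                           # one round of each
            ic.step()
        for ic in [i for i in pending if i.decided()]:
            pending.remove(ic)
            # here the real algorithm would update states, the decision
            # vector and the failures list, then simulate the transition
            decided.append(ic.decision())
    return decided

out = run_transformation(4, lambda r: f"in{r}", StubIC)
# Once the pipeline fills, one instance decides per phase:
assert out == [(0, "in0"), (1, "in1"), (2, "in2")]
```

The assertion shows the shifting behavior: after an initial delay the decisions arrive one per phase, which is how K PSR rounds can finish in roughly K plus a constant number of phases.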

Knowledge algorithm
  • Each process sends only its input value
  • The protocol executed by every other process is known to it
  • It can execute the other processes’ protocols knowing only their input values
Extension
  • No knowledge of the other processes’ protocols is assumed
  • We now send out both the input and the message we would normally send
  • This is done before we really know our own state, so we are running several rounds in parallel
One small problem…
  • Since we don’t know our state, how can we continue to the next round?
  • We send out an extended set of states
    • All of the states we might reach in our next few rounds of computation
    • We compute the future in all of them, and narrow the set down as we receive more messages
State of the process
  • Until now, the input values did not depend on the state of the process
  • For a finite set of inputs, we can again use the same technique for an extended set of inputs

Performance

Not real…

Number of rounds
  • We need K+f phases
  • The result of the first IC round takes f+2 phases
  • Each subsequent round decides one phase after the previous one
Size of messages
  • For the simple algorithm suggested:
    • n·log2|Input| bits per process, per round, per IC instance
    • Δ·n·log2|Input| bits per process, per phase

Δ is the number of phases needed to decide an IC instance

Size of messages
  • For the extended transformation:
    • 2^n possible states in a phase
    • A coded state takes σ = 2·log2|State| + (n+1)·log2|Message| bits
    • Message size is n·2^n·σ
  • Long…
Summary
  • We showed how to translate protocols written for PSR to run in 3 different weaker models
  • We can try doing the same for the Byzantine model