
Fighting Byzantine Adversaries in Networks: Network Error-Correcting Codes

Sidharth Jaggi, Michelle Effros, Michael Langberg, Tracey Ho, Sachin Katti, Muriel Médard, Dina Katabi


Obligatory Example/History

[Figure: the butterfly network [ACLY00]. Source s multicasts bits b1 and b2 to sinks t1 and t2; the bottleneck edge carries the mix b1+b2, so each sink recovers (b1,b2) and the multicast capacity C=2 is achieved.]

Timeline of results:

  • [ACLY00] Characterization; non-constructive
  • [LYC03], [KM02] Constructive (linear); exp-time design
  • [JCJ03], [SET03] Poly-time design; centralized design
  • [HKMKE03], [JCJ03] Decentralized design
  • [SET03] Gap between routing and coding provably exists
  • Tons of work
  • [This work] All the above, plus security


Multicast Network Model

ALL of Alice's information must be decodable EXACTLY by EACH Bob.

Network = hypergraph (covers both wired and wireless links).

Simplifying assumptions:

  • All links have unit capacity (1 packet/transmission)
  • No interference [GDPHE04],[LME04]
  • Acyclic network


Multicast Networks

Examples: webcasting, P2P networks, sensor networks.


Multicast Network Model

[Figure: source s with sinks whose min-cuts Ci are 2, 2, and 3.]

ALL of Alice's information must be decodable EXACTLY by EACH Bob.

Upper bound for multicast capacity C: C ≤ min{Ci}.

[ACLY00] With mixing, C = min{Ci} is achievable!

[LCY02],[KM01],[JCJ03],[HKMKE03] Simple (linear) distributed codes suffice!


Mixing

F(2m)-linear network [KM01]

Source: group together m bits b1, b2, …, bm into one symbol of the finite field F(2m).

Every node: transmit linear combinations β1X1 + β2X2 + … + βkXk of its incoming symbols X1, …, Xk, with coefficients βi drawn from F(2m).

Generalization: the Xi are length-n vectors over F(2m).
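As a concrete illustration of such a node's operation, here is a minimal sketch in which symbols live in GF(2^8) (reduced by the standard polynomial x^8+x^4+x^3+x+1); the field, packet length, and coefficient values are illustrative choices, not taken from the deck.

```python
import random

IRRED = 0x11B  # x^8 + x^4 + x^3 + x + 1, a standard irreducible polynomial for GF(2^8)

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^8), reduced modulo IRRED."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:
            a ^= IRRED
        b >>= 1
    return result

def mix(packets, coeffs):
    """Output packet = sum_i coeffs[i] * packets[i], component-wise over GF(2^8)."""
    out = [0] * len(packets[0])
    for beta, x in zip(coeffs, packets):
        for j, symbol in enumerate(x):
            out[j] ^= gf_mul(beta, symbol)  # addition in F(2^m) is XOR
    return out

# A node with two incoming length-4 packets picks random coefficients and forwards one mix.
x1 = [0x01, 0x23, 0x45, 0x67]
x2 = [0x89, 0xAB, 0xCD, 0xEF]
betas = [random.randrange(1, 256) for _ in range(2)]
print(betas, mix([x1, x2], betas))
```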


Problem!

  • Corrupted links
  • Eavesdropped links
  • Attacked links


Setup

Who knows what, and at which stage:

  Stage       Known to
  Scheme      Alice, Bob, Calvin
  Network     Calvin
  Message     Alice, Calvin
  Code        Calvin
  Bad links   Calvin
  Coin        Alice
  Transmit    Bob, Calvin
  Decode      Bob

Calvin eavesdrops on the links in ZI and attacks the links in ZO; privacy against his observations is also required.


Result(s)

First codes

  • Optimal rates (C-2ZO,C-ZO)

  • Poly-time

  • Distributed

  • Unknown topology

  • End-to-end

  • Rateless

  • Information theoretically secure

  • Information theoretically private

  • Wired/wireless

    [HLKMEK04],[JLHE05],[CY06],[CJL06],[GP06]


Error Correcting Codes

Classical setting (e.g., a Reed-Solomon code): the receiver observes

  Y = TX + E

where T is the generator matrix, X the message, and E a low-weight error vector.


Error Correcting Codes

Network setting: the receiver observes

  Y = TX + E = TX + TZ·Z

where T and TZ are the (unknown) network transform matrices and Z is a low-weight error vector injected by the adversary.
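As a toy rendering of this observation model, the sketch below works over a small prime field as a stand-in for F(2m); the dimensions, the field, and the random matrices are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 257                  # small prime field, a stand-in for F(2^m)
C, n, ZO = 3, 8, 1       # min-cut, packet length, number of attacked links

X = rng.integers(0, p, size=(C, n))      # Alice's batch of C packets
Z = rng.integers(0, p, size=(ZO, n))     # packets injected on the ZO attacked links
T = rng.integers(0, p, size=(C, C))      # network transform of Alice's packets
TZ = rng.integers(0, p, size=(C, ZO))    # network transform of the injected packets

# Bob observes Y = T X + TZ Z (mod p); T, TZ and Z are all unknown to him.
Y = (T @ X + TZ @ Z) % p
print(Y.shape)           # (3, 8): one received packet per incoming edge at the sink
```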


When stuck…

  • Useful abstraction/building block: "ε-rate secret uncorrupted channels"
  • Existing model ([GP06],[CJL06]) - we improve on it!


Example

[Figure: C = 3, ZO = 1, rate R = C - ZO. Alice sends n-length vectors X1, X2 plus the redundancy X3 = X1 + X2, added at the source, and shares 6 secret (non-linear, scalar) hashes of X. Counting in the figure: 3n and 4n quantities known, 4n unknown; with the 6 hashes, 4n+6 knowns against 4n+6 unknowns, so Bob can solve for X.]

Example

[Figure: the same example, C = 3, ZO = 1, X3 = X1 + X2, adversarial error Z = (0 z(2) z(3) … z(n)). With the 6 secret hashes of X, Bob has 4n+6 knowns against 4n+6 unknowns, and the resulting system is invertible with high probability.]
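One way to realize the "secret hashes" is a polynomial hash: Alice secretly shares a random evaluation point α together with the values Σj Xi(j)·α^j, and Bob discards any candidate decoding whose hashes disagree. The sketch below is a hedged illustration of that idea over a prime field; the hash form, field size, and packet sizes are assumptions, not the deck's exact construction.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 10007, 16                       # prime field and packet length (illustrative)

def poly_hash(packet, alpha):
    """h = sum_j packet[j] * alpha^(j+1) mod p."""
    h, power = 0, alpha
    for symbol in packet:
        h = (h + int(symbol) * power) % p
        power = (power * alpha) % p
    return h

X = rng.integers(0, p, size=(3, n))    # Alice's packets
alpha = int(rng.integers(1, p))        # secret evaluation point, shared over the secret channel
hashes = [poly_hash(row, alpha) for row in X]

candidate = X.copy()
candidate[1, 5] = (candidate[1, 5] + 1) % p   # a corrupted candidate decoding
ok_true = all(poly_hash(r, alpha) == h for r, h in zip(X, hashes))
ok_bad = all(poly_hash(r, alpha) == h for r, h in zip(candidate, hashes))
print(ok_true, ok_bad)   # True False: a single changed symbol is caught w.h.p. over alpha
```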


Thm 1, Proof

Theorem 1: Rate C - ZO - ε is achievable with ZI = E (Calvin observes every link), given an ε-rate secret uncorrupted channel.

R = C - ZO. Improves on [GP06/Avalanche] (decentralized) and on [CJL06] (optimal).

[Figure: Alice's packets carry a CxC identity matrix as a header, packet length n >> C, as in [HKMKE03].]


Thm 1, Proof

Theorem 1: Rate C - ZO - ε is achievable with ZI = E, given an ε-rate secret uncorrupted channel.

[Figure: the CxC matrix built from T and TZ is invertible w.h.p.]


Thm 2

Theorem 2: Rate C - 2ZO - ε is achievable with ZI = E.


Example revisited

[Figure: the same example with ZO = 1. Adding nZO further constraints DX = 0 on X at the source reduces the rate from R = C - ZO to R = C - ZO - redundancy = C - 2ZO; X3 = X1 + X2 as before, and the adversarial error is forced from Z = (0 z(2) z(3) … z(n)) down to Z = (0 0 0 … 0).]

This is tight (classical ECC bound, [CY06]).


Thm 2, "Proof"

Theorem 2: Rate C - 2ZO - ε is achievable with ZI = E.

R = C - 2ZO: add nZO extra constraints DX = 0 at the source, where D is chosen uniformly at random and is known to Alice, Bob and Calvin.
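A minimal sketch of this step, assuming a small prime field: choose a random constraint matrix D known to everyone and encode the message into a batch X with DX = 0 (column-wise, i.e. nZO scalar constraints in total). Only the extra-constraint layer is shown; combined with the earlier source redundancy it yields the overall rate C - 2ZO. The kernel-basis encoder here is a generic linear-algebra stand-in, not the deck's exact construction.

```python
import numpy as np

rng = np.random.default_rng(2)
p = 101
C, ZO, n = 5, 1, 12

def nullspace_mod_p(D, p):
    """Rows form a basis of {v : D v = 0 mod p}, via Gaussian elimination."""
    D = D.copy() % p
    rows, cols = D.shape
    pivots, row = [], 0
    for col in range(cols):
        pivot = next((i for i in range(row, rows) if D[i, col] % p), None)
        if pivot is None:
            continue
        D[[row, pivot]] = D[[pivot, row]]
        D[row] = (D[row] * pow(int(D[row, col]), -1, p)) % p
        for i in range(rows):
            if i != row and D[i, col]:
                D[i] = (D[i] - D[i, col] * D[row]) % p
        pivots.append(col)
        row += 1
    basis = []
    for f in (c for c in range(cols) if c not in pivots):
        v = np.zeros(cols, dtype=int)
        v[f] = 1
        for i, c in enumerate(pivots):
            v[c] = (-D[i, f]) % p
        basis.append(v)
    return np.array(basis)

D = rng.integers(0, p, size=(ZO, C))       # random constraint matrix, known to everyone
B = nullspace_mod_p(D, p)                  # (C - ZO) x C basis of the kernel of D
message = rng.integers(0, p, size=(B.shape[0], n))
X = (B.T @ message) % p                    # C x n batch; every column satisfies D x = 0
print(np.all((D @ X) % p == 0))            # True: nZO scalar constraints hold on the batch
```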


Thm 2, "Proof"

Theorem 2: Rate C - 2ZO - ε is achievable with ZI = E.

[Figure: after a basis change, the problem splits into linear and non-linear parts; the resulting matrix T'' may not be invertible, so choosing D of the appropriate dimensions is crucial, and the relevant subspaces must be disjoint.]

Thm 3, Proof

Theorem 3: Rate C - ZO - ε is achievable whenever ZI + 2ZO < C.

Theorem 4, etc.: information-theoretic privacy. Whenever the eavesdropping rate satisfies ZI < C - 2ZO (i.e., ZI < R, the Algorithm 2 rate), Algorithm 2 can be used on a small header to transmit secret, correct information… which can then be used for Algorithm 1 decoding!


Summary

  • Optimal rates
  • Poly-time
  • Distributed
  • Unknown topology
  • End-to-end
  • Rateless
  • Information theoretically secure/private
  • Wired/wireless


Backup slides


Network Coding “Justification”

R. Ahlswede, N. Cai, S.-Y. R. Li and R. W. Yeung, "Network information flow," IEEE Trans. on Information Theory, vol. 46, pp. 1204-1216, 2000.

  • http://tesla.csl.uiuc.edu/~koetter/NWC/Bibliography.html

    ≈ 200 papers in 3 years

  • NetCod Workshops, DIMACS working group, ISIT 2005 - 4+ sessions, tutorials, …

  • Several patents, theses…


But what IS Network Coding?

“The core notion of network coding is to allow and encourage mixing of data at intermediate network nodes.”

(Network Coding homepage)


Point-to-point flows

Min-cut Max-flow (Menger's) Theorem [M27]; Ford-Fulkerson Algorithm [FF62].

[Figure: a point-to-point flow of capacity C from source s to sink t.]
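For reference, a compact max-flow computation in the Edmonds-Karp style (Ford-Fulkerson with BFS augmenting paths); the example graph and capacities are made up for illustration.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    n = len(capacity)
    residual = [row[:] for row in capacity]
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:            # no augmenting path left: flow equals the min cut
            return flow
        bottleneck, v = float("inf"), t
        while v != s:                  # find the bottleneck capacity along the path
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        v = t
        while v != s:                  # update residual capacities
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

# Unit-capacity example: s=0, t=3, two edge-disjoint paths, so max flow (= min cut) is 2.
cap = [[0, 1, 1, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))   # 2
```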


Multicasting

[Figure: sources s1, …, s|S| multicast over the network to sinks t1, …, t|T|.]

Examples: webcasting, P2P networks, sensor networks.


Justifications revisited - I

Throughput.

[Figure: the butterfly network [ACLY00]. Source s sends bits b1 and b2; with routing alone the middle edge must choose between b1 and b2 (marked "?"), but with coding it carries b1+b2, so each of the sinks t1 and t2 recovers (b1,b2).]


Gap Without Coding

[Figure: a family of networks from [JSCEEJT05] for which the coding capacity equals h while the routing capacity is at most 2.]


Multicasting

[Figure: source s multicasts over the network to sinks t1, …, t|T| with min-cuts C1, …, C|T|.]

Upper bound for multicast capacity C: C ≤ min{Ci}.

  • [ACLY00] - achievable!
  • [LYC02] - linear codes suffice!!
  • [KM01] - "finite field" linear codes suffice!!!


Multicasting

F(2m)-linear network [KM01]

Source: group together m bits b1, …, bm into one symbol of F(2m).

Every node: transmit linear combinations of its incoming symbols, with coefficients β1, …, βk drawn from the finite field F(2m).


Multicasting

[Figure: source s multicasts over the network to sinks t1, …, t|T| with min-cuts C1, …, C|T|.]

Upper bound for multicast capacity C: C ≤ min{Ci}.

  • [ACLY00] - achievable!
  • [LYC02] - linear codes suffice!!
  • [KM01] - "finite field" linear codes suffice!!!
  • [JCJ03],[SET03] - polynomial time code design!!!!


Thms: Deterministic Codes

For m ≥ log(|T|), there exists an F(2m)-linear network code that can be designed in O(|E||T|C(C+|T|)) time. [JCJ03],[SET03]

There exist networks for which the minimum required m ≈ 0.5 log(|T|). [JCJ03],[LL03]


Justifications revisited - II

Robustness/distributed design.

[Figure: the butterfly network when one link breaks between source s and sinks t1, t2.]

Justifications revisited - II

Robustness/distributed design.

[Figure: the butterfly network with coded edges carrying b1+b2 and b1+2b2 (finite field arithmetic); each of the sinks t1, t2 still recovers (b1,b2).]


Thm: Random Robust Codes

[Figure: original network, source s and sinks t1, …, t|T| with min-cuts C1, …, C|T|; C = min{Ci}.]


Thm: Random Robust Codes

[Figure: faulty network with reduced min-cuts C1', …, C|T|'; C' = min{Ci'}.]

If the value of C' is known to s, the same code can achieve rate C'! (Interior nodes are oblivious.)


Thm: Random Robust Codes

For m sufficiently large and any rate R < C: choose random coefficients [β] at each node.

Probability over [β] that the code works > 1 - |E||T|·2^(-m(C-R)+|V|).

[JCJ03], [HKMKE03] (different notions of linearity).

Much "sparser" linear operations are possible (O(m) instead of O(m²)) [JCE06].

Decentralized design vs. probability of error - a necessary evil?
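A small experiment in the spirit of this theorem, assuming the butterfly network and random local coefficients over a prime field (both stand-ins for the deck's F(2m) setting): estimate how often randomly chosen coefficients leave every sink with an invertible transfer matrix.

```python
import random

p = 257   # small prime field, a stand-in for F(2^m)

def det2(M):
    return (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % p

def random_butterfly_trial():
    """Pick random local coefficients on the butterfly network; return True if both
    sinks end up with invertible 2x2 transfer matrices (i.e. the random code 'works')."""
    r = lambda: random.randrange(p)
    # Global coding vectors (coefficients of x1, x2) carried on each edge.
    e_sA = [r(), 0]                        # source -> A scales x1
    e_sB = [0, r()]                        # source -> B scales x2
    e_At1 = [r() * e_sA[0] % p, 0]         # A -> sink t1
    e_AM  = [r() * e_sA[0] % p, 0]         # A -> middle node M
    e_Bt2 = [0, r() * e_sB[1] % p]         # B -> sink t2
    e_BM  = [0, r() * e_sB[1] % p]         # B -> M
    a, b = r(), r()                        # M mixes its two inputs onto the bottleneck edge
    e_MN = [(a * e_AM[0]) % p, (b * e_BM[1]) % p]
    c, d = r(), r()                        # N forwards scaled copies to both sinks
    e_Nt1 = [c * e_MN[0] % p, c * e_MN[1] % p]
    e_Nt2 = [d * e_MN[0] % p, d * e_MN[1] % p]
    T1 = [e_At1, e_Nt1]                    # sink t1's transfer matrix
    T2 = [e_Bt2, e_Nt2]                    # sink t2's transfer matrix
    return det2(T1) != 0 and det2(T2) != 0

trials = 10000
successes = sum(random_butterfly_trial() for _ in range(trials))
print(successes / trials)    # close to 1: random coefficients work with high probability
```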


Zero-error Decentralized Codes

No a priori network topological information is available - information can only be percolated down links.

Desired: zero-error code design.

One additional resource: each node vi has a unique ID number i (GPS coordinates/IP address/…).

Need to use yet other types of linear codes. [JHE06?]


Inter-relationships between notions of linearity

Legend: M = Multicast, G = General, a = Acyclic, A = Algebraic, B = Block, C = Convolutional; the notions of linearity are distinguished as Global, Local I/O =, and Local I/O ≠; "Є" marks an epsilon rate loss, and some combinations do not exist. [JEHM04]

[Figure: diagram of the implications among these notions of linearity; the detailed relationships are not recoverable from the transcript.]


Justifications revisited - III

Security.

[Figure: an evil adversary hiding in the network between source s and sinks t1, t2, eavesdropping and injecting false information.] [JLHE05],[JLHKM06?]

Greater throughput, robustness against random errors, … Aha! Network Coding!!!


[Figure: Xavier transmits through the network to sinks Yvonne1, …, Yvonne|T| while the hidden adversary Zorba controls unknown links (marked "?").]


Setup

Who knows what, and at which stage:

  Stage       Known to
  Scheme      Xavier, Yvonne, Zorba
  Network     Zorba
  Message     Xavier, Zorba
  Code        Zorba
  Bad links   Zorba
  Coin        Xavier
  Transmit    Yvonne, Zorba
  Decode      Yvonne

The network may be wired or wireless (packet losses, fading); Zorba eavesdrops on the links in ZI and attacks the links in ZO.


Setup

[Figure: Xavier transmits at min-cut C to sinks Yvonne1, …, Yvonne|T|; Zorba controls MO links (marked "?").]

  • Xavier and the Yvonnes share no resources (private key, randomness).
  • Distributed design (interior nodes oblivious/overlay to network coding).
  • Zorba (hidden) knows the network; Xavier and the Yvonnes don't.
  • Zorba sees MI links ZI and controls MO links ZO; pI = MI/C, pO = MO/C.
  • Zorba is computationally unbounded; Xavier and the Yvonnes perform only "simple" computations.
  • Zorba knows the protocols and already knows almost all of Xavier's message (except Xavier's private coin tosses).
  • Goal: transmit at "high" rate and w.h.p. decode correctly.


Background

  • Noisy channel models (Shannon, …)
    • Binary Symmetric Channel

[Plot: BSC capacity C = 1 - H(p) versus the noise parameter p; capacity 1 at p = 0 and p = 1, and 0 at p = 0.5.]


Background

  • Noisy channel models (Shannon, …)
    • Binary Symmetric Channel
    • Binary Erasure Channel

[Plot: BEC capacity C = 1 - p versus the erasure parameter p.]
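The two plots correspond to the textbook closed forms C_BSC(p) = 1 - H(p) and C_BEC(p) = 1 - p; a quick sketch for reproducing the curves (the sample points below are arbitrary):

```python
import math

def binary_entropy(p):
    """H(p) = -p log2 p - (1-p) log2 (1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    return 1 - binary_entropy(p)      # zero at p = 0.5, one at p = 0 or p = 1

def bec_capacity(p):
    return 1 - p                      # erased symbols carry no information

for p in (0.0, 0.1, 0.25, 0.5):
    print(f"p={p:.2f}  BSC={bsc_capacity(p):.3f}  BEC={bec_capacity(p):.3f}")
```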


Background

  • Adversarial channel models
    • "Limited-flip" adversary, pI = 1 (Hamming, Gilbert-Varshamov, McEliece et al. …)
      • Large alphabets (Fq instead of F2)
    • Shared randomness, cryptographic assumptions, …

[Plot: capacity versus the noise parameter pO under these adversarial models.]


Upper bounds

[Plot: the upper bound C ≤ 1 - pO on capacity versus the noise parameter pO.]


Upper bounds

[Plot: the same capacity-versus-pO axes, with question marks over the as-yet-unknown achievable region.]


Unicast – Results [JLHE05]

[Plot: achievable capacity versus pI = pO (the "noise parameter" equal to the "knowledge parameter") for unicast [JLHE05].]


Full knowledge [Folklore]

[Plot: capacity versus the noise parameter pO when the adversary has full knowledge ("knowledge parameter" pI = 1).]


Multicast Networks [HKMKE03]

Rate h = C - MO.

[Figure: Alice arranges her data into blocks xb(i) and slices xs(j), with h << n, and prepends an hxh identity matrix to the batch x. The network applies an unknown linear transform T, so each sink t receives ys(j) = T·xs(j); the identity header reveals T, and the sink recovers xs(j) = T⁻¹·ys(j).]
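A toy sketch of this identity-header idea over a prime field; the field, batch size, payload length, and the random transform are placeholders, and the modular inversion routine is a generic stand-in rather than anything specific from [HKMKE03].

```python
import numpy as np

rng = np.random.default_rng(3)
p, h, n = 257, 3, 10        # field size, batch size h = C - MO, payload length (illustrative)

def inv_mod_p(M, p):
    """Invert a square matrix over GF(p) by Gauss-Jordan elimination."""
    size = len(M)
    A = np.concatenate([M % p, np.eye(size, dtype=int)], axis=1)
    for col in range(size):
        pivot = next((i for i in range(col, size) if A[i, col] % p), None)
        if pivot is None:
            raise ValueError("matrix is singular over GF(p)")
        A[[col, pivot]] = A[[pivot, col]]
        A[col] = (A[col] * pow(int(A[col, col]), -1, p)) % p
        for i in range(size):
            if i != col:
                A[i] = (A[i] - A[i, col] * A[col]) % p
    return A[:, size:]

payload = rng.integers(0, p, size=(h, n))
X = np.concatenate([np.eye(h, dtype=int), payload], axis=1)   # h x (h+n): identity header + data

while True:                                   # redraw in the (rare) event of a singular transform
    T = rng.integers(0, p, size=(h, h))       # unknown network transform, invertible w.h.p.
    try:
        inv_mod_p(T, p)
        break
    except ValueError:
        pass

Y = (T @ X) % p                               # ys(j) = T xs(j): what the sink receives
T_hat = Y[:, :h]                              # the identity header reveals T itself
recovered = (inv_mod_p(T_hat, p) @ Y) % p     # xs(j) = T^{-1} ys(j)
print(np.array_equal(recovered[:, h:], payload))   # True
```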


Multicast Networks

Observation 1: Can treat adversaries as new sources.

[Figure: the adversary's injection points S'1, …, S'|Z| appear as extra sources alongside S, feeding the sinks R1, …, R|T|; plot of capacity (normalized by h) versus pO.]


Multicast Networks

y's(j) = T·xs(j) + T'·x's(j)

Observation 2: w.h.p. over the network code design, the subspaces {T·xs(j)} and {T'·x's(j)} do not intersect (robust codes…).

[Figure: a supersource SS emits both Alice's packets and the corrupted, unknown packets.]


Multicast Networks

y's(j) = T·xs(j) + T'·x's(j)

ε redundancy: Alice imposes known linear constraints on her slices, e.g.

  xs(2) + xs(5) - xs(3) = 0
  xs(3) + 2xs(9) - 5xs(1) = 0

so the sink can compute

  ys(2) + ys(5) - ys(3) = a vector in {T'·x's(j)}
  ys(3) + 2ys(9) - 5ys(1) = another vector in {T'·x's(j)}.
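A numeric sketch of this observation over a prime field (the field, sizes, and the particular constraint are placeholders): because Alice's slices satisfy a known constraint, the same combination of received slices cancels her contribution and lands inside the adversary's subspace.

```python
import numpy as np

rng = np.random.default_rng(4)
p, C, ZO, n = 257, 4, 1, 6

# Columns of X are Alice's slices; build them so that slice1 + slice2 - slice0 = 0,
# a known "epsilon redundancy" constraint.
X = rng.integers(0, p, size=(C, n))
X[:, 0] = (X[:, 1] + X[:, 2]) % p

X_adv = rng.integers(0, p, size=(ZO, n))     # adversary's injected slices
T = rng.integers(0, p, size=(C, C))          # unknown transform of Alice's slices
T_adv = rng.integers(0, p, size=(C, ZO))     # unknown transform of the adversary's slices

Y = (T @ X + T_adv @ X_adv) % p              # received slices y's(j) = T xs(j) + T' x's(j)

# The same combination of received slices: Alice's contribution cancels exactly,
# so what remains lies in the adversary's subspace (here: a multiple of T_adv's column).
v = (Y[:, 1] + Y[:, 2] - Y[:, 0]) % p
w = int((X_adv[0, 1] + X_adv[0, 2] - X_adv[0, 0]) % p)
print(np.array_equal(v, (T_adv[:, 0] * w) % p))   # True
```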


"When you have eliminated the impossible, whatever remains, however improbable, must be the truth."

Multicast Networks

y's(j) = T·xs(j) + T'·x's(j), with ε redundancy.

  • Repeat MO times: discover {T'·x's(j)} (linear algebra).
  • "Zero out" {T'·x's(j)}.
  • Estimate T (the redundant xs(j) are known).
  • Decode.


Multicast Networks

y's(j) = T·xs(j) + T'·x's(j)

If xs(2) + xs(5) - xs(3) = 0 and the adversary's slices happen to satisfy the same constraint, x's(2) + x's(5) - x's(3) = 0, then ys(2) + ys(5) - ys(3) = 0 rather than a useful vector in {T'·x's(j)}.


Scheme 1(a)

“ε-rate secret uncorrupted channels”

Useful abstraction


Scheme 1(b)

"sub-header based scheme"

Works… kind of… for "many" networks.


Scheme 2

"distributed network error-correcting code" (knowledge parameter pI = 1)

  • [CY06] - bounds, high-complexity construction
  • [JHLMK06?] - tight, poly-time construction

[Plot: capacity versus the noise parameter pO.]


Scheme 2

"distributed network error-correcting code"

[Figure: y's(j) = T·xs(j) + T'·x's(j), with the error vector and codeword fractions pO, 1 - 2pO, pO.]


Scheme 2

“distributed network error-correcting code”

y’s(j)=Txs(j)+T’x’s(j)


Scheme 2

"distributed network error-correcting code"

[Figure: y's(j) = T''·xs(j) + T'·x's(j), with error components e and e'.]


Scheme 2

"distributed network error-correcting code"

[Figure: the same decomposition y's(j) = T''·xs(j) + T'·x's(j) with error components e and e', resolved by linear algebra.]


Scheme 3

"non-omniscient adversary": y's(j) = T''·xs(j) + T'·x's(j), with MI + 2MO < C.

Whenever Zorba's observations satisfy MI < C - 2MO (i.e., below the Scheme 2 rate), Scheme 2 can be used on a small header to transmit secret, correct information… which can then be used for Scheme 1(a) decoding!


Variations - Feedback

[Plot: capacity C versus p when feedback is available.]


Variations – Know thy enemy

[Plots: capacity C versus p when the adversary's parameters are known.]


Variations – Random Noise

[Plot: under random noise, a SEPARATION between the two capacities CN and C as functions of p.]

