
An Introduction to Network Coding

Muriel Médard

Associate Professor

EECS

Massachusetts Institute of Technology

Ralf Koetter

Director

Institute for Communications Engineering

Technical University of Munich



Outline of course

  • An introduction to network coding:

    • Network model

    • Algebraic aspects

    • Delay issues

  • Network coding for wireless multicast:

    • Distributed randomized coding

    • Erasure reliability

    • Use of feedback

    • Optimization in choice of subgraphs

    • Distributed optimization

    • Dealing with mobility

    • Relation to compression

  • Network coding in non-multicast:

    • Algorithms

    • Heuristics

    • Network coding for delay reduction in wireless downloading

  • Security with network coding:

    • Byzantine security

    • Wiretapping aspects



Network coding

  • Canonical example [Ahlswede et al. 00]

  • What choices can we make?

  • No longer distinct flows, but information

[Figure: butterfly network; source s sends bits b1 and b2 through nodes t, u, w, x to receivers y and z]



Network coding

  • Picking a single bit does not work

  • Time sharing does not work

  • No longer distinct flows, but information

[Figure: butterfly network; forwarding only b1 on the shared link w → x leaves one receiver without b2]



Network coding

  • Need to use algebraic nature of data

  • No longer distinct flows, but information

[Figure: butterfly network; node w sends b1 + b2 on the shared link, so receiver y recovers b2 = b1 + (b1 + b2) and receiver z recovers b1 = b2 + (b1 + b2)]
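In code, the decoding at the two receivers is a single XOR each; a minimal sketch over the binary field (variable names ours):

```python
# b1, b2: the two source bits; "+" over the binary field is XOR.
b1, b2 = 1, 0

coded = b1 ^ b2            # node w sends b1 + b2 on the shared link

b2_at_y = b1 ^ coded       # receiver y already has b1, recovers b2
b1_at_z = b2 ^ coded       # receiver z already has b2, recovers b1

assert (b2_at_y, b1_at_z) == (b2, b1)
```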



[KM01, 02, 03]



A simple example





Transfer matrix
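The slide's equations did not survive the transcript; for reference, the algebraic framework of [KM01, 02, 03] expresses the end-to-end map as a transfer matrix:

```latex
% A: maps source symbols onto outgoing links,
% F: link-to-link coding coefficients,
% B: collects link symbols at a receiver.
M \;=\; A\,(I - F)^{-1}\,B^{\top}
```

A receiver can recover the sources when its transfer matrix has full rank, which is what ties the following slides (linear network system, solutions) together.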



Linear network system



Solutions



Multicast





One source, disjoint multicasts



Delays





Network coding for multicast:

  • Distributed randomized coding

  • Erasure reliability

  • Use of feedback

  • Optimization in choice of subgraphs

  • Distributed optimization

  • Dealing with mobility



Randomized network coding

  • The effect of the network is that of a transfer matrix from sources to receivers

  • To recover symbols at the receivers, we require sufficient degrees of freedom – an invertible matrix in the coefficients of all nodes

  • The realization of the determinant of the matrix will be non-zero with high probability if the coefficients are chosen independently and randomly

  • Probability of success over a field F of size q: at least (1 − d/q)^η, where d is the number of receivers and η the number of links carrying random coefficients [HKMKE03]

  • Randomized network coding can use any multicast subgraph which satisfies min-cut max-flow bound for each receiver [HKMKE03, HMSEK03, WCJ03] for any number of sources, even when correlated [HMEK04]

[Figure: node j forms its outputs from endogenous (in-network) inputs and an exogenous (source) input]
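A quick way to see why random coefficients succeed with high probability: a random K×K matrix over F_q is nonsingular with probability ∏_{i=1..K}(1 − q^{−i}), which tends to 1 as q grows. A small self-contained check over a prime field (our code, not from the slides):

```python
import random

def rank_gf(rows, q):
    """Gaussian elimination over the prime field F_q; returns the rank."""
    rows = [r[:] for r in rows]
    rank = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][c], q - 2, q)        # inverse via Fermat (q prime)
        rows[rank] = [x * inv % q for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][c]:
                f = rows[i][c]
                rows[i] = [(a - f * b) % q for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

q, k, trials = 257, 8, 2000
hits = sum(
    rank_gf([[random.randrange(q) for _ in range(k)] for _ in range(k)], q) == k
    for _ in range(trials)
)
print(hits / trials)   # close to prod_{i=1..k}(1 - q**-i), about 0.996 for q = 257
```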



Erasure reliability

  • Packet losses in networks result from

    • congestion,

    • buffer overflows,

    • (in wireless) outage due to fading or change in topology

  • Prevailing approach for reliability: Request retransmission

  • Not suitable for

    • high-loss environments,

    • multicast,

    • real-time applications.



Erasure reliability

  • Alternative approach: Forward Error Correction (FEC)

    • Multiple description codes

    • Erasure-correcting codes (e.g. Reed-Solomon, Tornado, LT, Raptor)

  • End-to-end: Connection as a whole is viewed as a single channel; coding is performed only at the source node.



Erasure reliability – single flow

  • Setting (implicit in the slide's figure): a two-link tandem, source → relay → sink, with erasure probabilities ε1 and ε2.

  • End-to-end erasure coding: capacity is (1 − ε1)(1 − ε2) packets per unit time.

  • As two separate channels: capacity is min{1 − ε1, 1 − ε2} packets per unit time.

    • Can use block erasure coding on each channel, but delay is a problem.

  • Network coding: the minimum cut is the capacity.

  • For erasures, correlated or not, we can in the multicast case deal with average flows uniquely [Lun et al. 04, 05], [Dana et al. 04]:

    • Nodes store received packets in memory.

    • Random linear combinations of memory contents are sent out.

    • Delay expressions generalize Jackson networks to the innovative packets.

    • Can be used in a rateless fashion.
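A Monte Carlo sanity check of these rates on the two-link tandem (a sketch; the recoding relay is modeled simply by counting degrees of freedom, names ours):

```python
import random

random.seed(1)
e1, e2, slots = 0.2, 0.3, 100_000          # per-slot erasure probabilities

dof_relay = dof_sink = 0                   # degrees of freedom at relay / sink
for _ in range(slots):
    if random.random() > e1:               # link source -> relay delivers
        dof_relay += 1                     # relay stores a new combination
    if random.random() > e2 and dof_sink < dof_relay:
        dof_sink += 1                      # recoded packet is innovative w.h.p.

print(dof_sink / slots)                    # ~ min(1 - e1, 1 - e2) = 0.7
print((1 - e1) * (1 - e2))                 # end-to-end coding only: 0.56
```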



Feedback for reliability

  • Parameters we consider:

    • delay incurred at B: excess time, relative to the theoretical minimum, that it takes for k packets to be communicated, disregarding any delay due to the use of the feedback channel

    • block size

    • feedback: number of feedback packets used (feedback rate Rf = number of feedback messages / number of received packets)

    • memory requirement at B

    • achievable rate from A to C



Feedback for reliability

Following the approach of Pakzad et al. 05 and Lun et al. 06:

  • Scheme V allows us to achieve the min-cut rate, while keeping the average memory requirements at node B finite.

  • The feedback delay for Scheme V is smaller than that of the usual ARQ (with Rf = 1) by a factor of Rf.

  • Feedback is required only on link BC.

[Fragouli et al. 07]



Erasure reliability

  • For erasures, correlated or not, we can in the multicast case deal with average flows uniquely [LME04], [LMK05], [DGPHE04]

  • We consider a scheme [LME04] where

    • nodes store received packets in memory;

    • random linear combinations of memory contents sent out at every transmission opportunity (without waiting for full block).

  • Scheme gets to capacity under arbitrary coding at every node for

    • unicast and multicast connections

    • networks with point-to-point and broadcast links.



Scheme for erasure reliability

  • We have k message packets w1, w2, . . . , wk (fixed-length vectors over Fq) at the source.

  • (Uniformly) random linear combinations of w1, w2, . . . , wk are injected into the source's memory according to a process with rate R0.

  • At every node, (uniformly) random linear combinations of memory contents are sent out;

    • received packets are stored into memory;

    • every packet carries a length-k vector over Fq representing the transformation it is of w1, w2, . . . , wk (its global encoding vector).



Coding scheme

  • Since all coding is linear, we can write any packet x as a linear combination of w1, w2, . . . , wk: x = γ1 w1 + γ2 w2 + · · · + γk wk.

  • The vector γ = (γ1, . . . , γk) is the global encoding vector of x.

  • We send the global encoding vector along with x, in its header, incurring a constant overhead.

  • The side information provided by γ is essential to the functioning of the scheme.
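A minimal end-to-end sketch of this idea over F2 (our code, not the paper's): coded packets carry their global encoding vector γ in the header, and the receiver decodes by Gaussian elimination once it has k degrees of freedom.

```python
import random

k, PLEN = 4, 8                       # k message packets, PLEN symbols each, over F_2

msgs = [[random.randrange(2) for _ in range(PLEN)] for _ in range(k)]

def coded_packet():
    """A uniformly random F_2 combination of the messages; header = gamma."""
    gamma = [random.randrange(2) for _ in range(k)]
    payload = [0] * PLEN
    for g, m in zip(gamma, msgs):
        if g:
            payload = [a ^ b for a, b in zip(payload, m)]
    return gamma, payload

def decode(packets):
    """Gaussian elimination over F_2 on rows [gamma | payload]."""
    rows = [g + p for g, p in packets]
    r = 0
    for c in range(k):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            return None              # fewer than k degrees of freedom so far
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        r += 1
    return [row[k:] for row in rows[:k]]

rx, out = [], None
while out is None:                   # rateless: collect until decodable
    rx.append(coded_packet())
    out = decode(rx)
print(len(rx), out == msgs)          # typically k or a few more packets; True
```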



Outline of proof

  • Keep track of the propagation of innovative packets, i.e., packets whose auxiliary encoding vectors (transformation with respect to the n packets injected into the source's memory) are linearly independent across particular cuts.

  • Can show that, if R0 is less than capacity and the input process is Poisson, then the propagation of innovative packets through any node forms a stable M/M/1 queueing system in steady state.

  • So Ni, the number of innovative packets in the network, is a time-invariant random variable with finite mean.

  • We obtain delay expressions using, in effect, a generalization of Jackson networks for the innovative packets.
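For reference, the standard M/M/1 facts being invoked (arrival rate R0, service rate C on a link, ρ = R0/C < 1):

```latex
\Pr\{N = n\} = (1 - \rho)\,\rho^{n}, \qquad
\mathbb{E}[N] = \frac{\rho}{1 - \rho} \;<\; \infty .
```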



Comments for erasure reliability

  • Particularly suitable for

    • overlay networks using UDP, and

    • wireless packet networks (have erasures and can perform coding at all nodes).

  • Code construction is completely decentralized.

  • Scheme can be operated ratelessly - can be run indefinitely until successful reception.



Coding for packet losses - unicast

Average number of transmissions required per packet in random networks of varying size. Sources and sinks were chosen randomly according to a uniform distribution. Paths or subgraphs were chosen in each random instance to minimize the total number of transmissions required, except in the cases of end-to-end retransmission and end-to-end coding, where they were chosen to minimize the number of transmissions required by the source node. [Lun et al. 04]



Explicit Feedback - Main Idea

[Figure: the sender's knowledge space V, containing the receiver knowledge spaces V1, V2, V3, . . . , Vn]

  • Store linear combinations of original packets

  • No need to store information commonly known at all receivers (i.e. V∆)

Vector spaces representing knowledge:

V: knowledge of the sender

Vj: knowledge of receiver j

V∆: common knowledge of all receivers

(Sundararajan et al 07)


Algorithm Outline

[Figure, repeated on the next few slides: the per-slot update over slots (t−1), t, (t+1); incorporate the arrivals of slot (t−1) into V(t−1); incorporate channel state feedback to get Vj’(t) and V∆’(t); separate out the common knowledge to get U’’(t) and Uj’’(t)]

V(t-1): Knowledge of the sender after incorporating slot (t-1) arrivals;

Vj(t-1): Knowledge of receiver j at the end of slot (t-1);


Vj’(t): Knowledge of receiver j after incorporating feedback;


Remaining part of information: the sender's knowledge beyond the common knowledge, captured by U’’(t).



We can ensure that, for all j, each receiver's missing knowledge is captured within U’’(t).

Therefore, it is sufficient to store linear combinations corresponding to some basis of U’’(t)



Incremental version of the vector spaces

  • All the operations can be performed even after excluding the common knowledge from all the vector spaces, since it is no longer relevant. Define U(t) and Uj(t) as the incremental versions of V(t) and Vj(t) with the common knowledge factored out.



  • Let Uj’(t) be the incremental knowledge of receiver j after including feedback

  • Then the incremental common knowledge is U∆’(t), the common part of the Uj’(t)'s, and we get the incremental update pictured above.


The Algorithm

In time slot t,

  • Compute U(t-1) after including slot (t-1) arrivals into U’’(t-1)

  • Using the feedback, compute Uj’(t)’s and hence U∆’(t)

  • Compute a basis B∆ for U∆ ’(t).

  • Extend this basis to a basis B for U(t-1).


The Algorithm (cont.)

  • Replace current queue contents with linear combinations of packets whose coefficient vectors are those in B\B∆

  • Express all the vector spaces in terms of the new basis B\B∆. This basis spans U’’(t)

  • Compute a linear combination g which is in U’’(t) but not in any of the Uj’’(t)'s (except if Uj’’(t) = U’’(t)). (This is possible iff the field size exceeds the number of receivers.)


Bounding the queue size

  • Q(t): physical queue size at the end of slot t

  • The bound can be proved using the following property of subspaces, where X + Y denotes span(X ∪ Y):

    dim(X) + dim(Y) = dim(X + Y) + dim(X ∩ Y)


  • The resulting inequality compares, at the end of each slot:

    • LHS: the physical queue size Q(t) (the amount of the sender's knowledge that is not known at all receivers)

    • RHS: the virtual queue size (the sum of the backlogs in linear degrees of freedom of all the receivers)


Summary: Uncoded vs. Coded

  • Knowledge represented by: (uncoded) the set of received packets; (coded) the vector space spanned by the coefficient vectors of the received linear combinations

  • Amount of knowledge: (uncoded) the number of packets received; (coded) the number of linearly independent (innovative) linear combinations received, i.e., the dimension of the vector space

  • Queue stores: (uncoded) all undelivered packets; (coded) linear combinations of packets which form a basis for the coset space of the common knowledge at all receivers

  • Update rule after each transmission: (uncoded) if a packet has been received by all receivers, drop it; (coded) recompute the common knowledge space V∆ and compute a new set of linear combinations so that their span is independent of V∆



Complexity and Overhead

  • All computations can be performed on incremental versions of the vector spaces (the common knowledge can be excluded everywhere) – hence complexity tracks the queue size

  • Overhead for the coding coefficients depends on how many uncoded packets get coded together. This may not track the queue size

  • Overhead can be bounded by bounding the busy period of the virtual queues

  • Question: given this use of coding for queueing, how do we manage the network resources to take advantage of network coding?



Optimizing resource use

  • Enables network to operate with nodes behaving as honest middlemen negotiating with their direct business contacts

  • Example: Distributed Bellman-Ford

  • Can we use network coding to do distributed optimization?

Without network coding, an integrality constraint arises in time-sharing between the blue and red trees; the optimization is NP-complete.

[Figure: multicast example with flows b1, b2 and coded flow b1 + b2; link labels (1,1,0), (1,1,1), (1,0,1)]


Relaxing the constraints on trees

[Figure: the same network with flows indexed by receiver; link labels (1,1,0), (1,1,1), (1,0,1); source = sink]

Index on receivers rather than on processes [Lun et al. 04]



Wireless case

  • Wireless systems have a multicast advantage

  • Omnidirectional antennas: i → k implies i → j “for free”

  • Same distributed approach holds

[Figure: node i transmits to k; neighbor j overhears the same transmission]


Optimization

[Figure: a convex link-cost curve D rising toward the capacity limit; convex: the second derivative is positive]

  • For any convex cost functions

  • The vector z is part of a feasible solution for the optimization problem if and only if there exists a network code that sets up a multicast connection in the graph G at average rate arbitrarily close to R from source s to terminals in the set T and that puts a flow arbitrarily close to zij on each link (i, j)

  • Proof follows from min-cut max-flow necessary and sufficient conditions

  • Polynomial-time

  • Can be solved in a distributed way [Lun et al. 05]

  • The Steiner-tree problem can be seen to be this problem with extra integrality constraints
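Concretely, the problem described has the following shape (a sketch in the spirit of [Lun et al. 05]; fij are the convex link costs and x(t) a virtual flow to terminal t):

```latex
\begin{aligned}
\text{minimize}\quad & \sum_{(i,j)} f_{ij}(z_{ij})\\
\text{subject to}\quad & z_{ij} \,\ge\, x^{(t)}_{ij} \quad \forall (i,j),\ \forall t \in T,\\
& x^{(t)} \text{ is an } s\text{--}t \text{ flow of value } R \quad \forall t \in T .
\end{aligned}
```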



Distributed approach

  • Consider the problem of the previous slide.

  • We have that the feasible set is the bounded polyhedron of points x(t) satisfying the conservation-of-flow constraints and the capacity constraints.

    [LRKMAL05]



Distributed approach

  • Consider a subgradient approach

  • Start with an iterate p[0] in the feasible set

  • Solve the subproblem in the previous slide for each t in T

  • We obtain a new updated price

  • Use projection arguments to relate the new price to the old

  • Use duality to recover coded flows from prices
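A generic form of such a price update (our notation, not from the slides; P denotes projection onto the feasible price set and θn a diminishing step size):

```latex
p^{[n+1]} \;=\; P\!\left(\, p^{[n]} + \theta_n\, g^{[n]} \,\right),
\qquad g^{[n]} \text{ a subgradient of the dual at } p^{[n]} .
```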





Recovering the primal

  • Problem of recovering primal from approximation of dual

  • Use approach of [SC96] for obtaining primal from subgradient approximation to dual

  • The conditions can be coalesced into a single algorithm to iterate in a distributed fashion towards the correct cost

  • There is inherent robustness to change of costs, as in classical distributed Bellman-Ford approach to routing



Extensions

  • Can be extended to any strictly convex cost

  • Primal-dual optimization

  • Asynchronous, continuous-time algorithm

  • Question: how many messages need to be exchanged for costs to converge?



Illustration of a distributed approach



Reaction to changes in cost/topology



Distributed operation – energy benefits



Can we set up a simplified graph?

  • CODECAST [Park et al. 06]

  • Every coded packet carries in the header three more fields, vldd, dist, and nust

  • The one-bit field vldd is set if either the sender is a multicast receiver or has received a previous block packet with vldd bit set from one of the sender’s downstream nodes

  • A node considers a neighboring node to be downstream if the neighboring node transmits a packet with a larger dist value than the dist value the node maintains

  • Each node maintains as a local variable dist, indicating the hop distance from the multicast data source and copies its value to every code packet the node transmits.

  • Every time a node transmits a coded packet, dist is recalculated as one plus the biggest dist value found in the headers of the packets which are combined to yield the coded packet

  • Conversely, a node considers a neighboring node to be an upstream node if the neighboring node transmits a coded packet in a new block or with a smaller dist value

  • Each node also maintains nust, indicating the number of upstream nodes as a local variable and records its value in the header of every packet the node transmits

  • A node broadcasts r coded packets to its neighborhood

  • Compare with On Demand Multicast Routing Protocol (ODMRP) [Lee et al. 02]
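The dist/vldd bookkeeping above is mechanical enough to sketch in code (a hypothetical rendering of the rules as stated; field names follow the slides):

```python
from dataclasses import dataclass

@dataclass
class Header:
    vldd: bool   # set if sender is a receiver, or heard vldd from downstream
    dist: int    # hop distance from the multicast source
    nust: int    # sender's count of upstream neighbours

def outgoing_header(is_receiver, heard_vldd_downstream, nust, combined):
    """Header for a coded packet built from the packets in `combined`
    (sketch of the CODECAST rules quoted above, not the actual protocol code)."""
    dist = 1 + max(h.dist for h in combined)   # one plus the largest dist combined
    vldd = is_receiver or heard_vldd_downstream
    return Header(vldd=vldd, dist=dist, nust=nust)

print(outgoing_header(False, True, 2, [Header(False, 3, 1), Header(True, 5, 2)]))
# -> Header(vldd=True, dist=6, nust=2)
```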



Performance of CODECAST



Compression using network coding

  • We have heretofore assumed independent sources

  • In sensor applications, we cannot assume such independence

  • Can we make use of the correlation via network coding? [Ho et al. 06]

  • Joint source and network coding [Lee et al. 07]


Problem Formulation

[Equations not preserved in the transcript; the slides set up the cost minimization and define the desired subgraph and the desired subgraph cost]



Joint vs. Separate

[Figure: example network with R = 3 for each link; the joint solution has cost 9, the separate solution cost 10.5. A sum of rates on virtual links > R gives the joint solution; = R gives the separate solution]



Simulation: Random Networks

Network model:

• n nodes randomly placed in h x w box

• nodes within distance r connected by edge

• s sources, t receivers

• all edges have unit weight, no capacity constraints

sources along top edge

receivers at bottom



Simulation Results

Setup: 2 sources, 2 receivers, H(Xi) = 2, H(X1, X2) = R.

The benefit of joint coding increases as the network widens and as the sources become more correlated.



Network coding in non-multicast

  • Algorithms

  • Heuristics

  • Network coding for delay reduction in wireless downloading



Linearity may not suffice



Network coding – beyond multicast

  • Traditional algorithms rely on routing, based on wireline systems

  • Wireless systems instead produce natural broadcast

  • Can we use incidental reception in an active manner, rather than treat it as useless or as interference?

  • Complementary technique with respect to schemes such as UWB and MIMO that make use of additional, albeit possibly plentiful, resources over links

  • Network coding seeks instead to use available energy and degrees of freedom fully over networks

[Figure, built over three slides: two end nodes exchange packets a and b through a relay; the relay broadcasts a + b, and each side XORs it with its own packet to recover the other's]

Extend this idea [Katabi et al. 05]



Opportunism (1)

Opportunistic Listening:

  • Every node listens to all packets

  • It stores all heard packets for a limited time

  • Node sends Reception Reports to tell its neighbors what packets it heard

    • Reports are annotations to packets

    • If no packets to send, periodically send reports



Opportunism (2)

Opportunistic Coding:

  • Each node uses only local information

  • Use your favorite routing protocol

  • To send packet p to neighbor A, XOR p with packets already known to A

    • Thus, A can decode

  • But how can multiple neighbors benefit from a single transmission?



Efficient coding

[Figure: four nodes A, B, C, D; arrows show each packet's next-hop]


Efficient coding

  • Bad coding: C will get the red packet, but A can't get the blue packet.

  • OK coding: both A and C get a packet.

  • Best coding: A, B, and C each get a packet.

[Figure: the same four-node topology in each case; arrows show next-hops]

To XOR n packets, each next-hop should have the n-1 packets encoded with the packet it wants
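This rule suggests a simple greedy selection; a sketch (our code, not COPE's actual implementation):

```python
def pick_xor_set(queue, known):
    """Greedy COPE-style selection: grow a set of (packet, next_hop) pairs so
    that every chosen next-hop already knows all chosen packets except its own.
    queue: list of (pkt, next_hop); known[node]: set of packets node has."""
    chosen = []
    for pkt, nh in queue:
        if all(pkt in known[onh] for _, onh in chosen) and \
           all(p in known[nh] for p, _ in chosen):
            chosen.append((pkt, nh))
    return chosen            # XOR all chosen payloads into one transmission

# toy version of the slide's "best coding" case: D relays to A, B, C
known = {"A": {"p2", "p3"}, "B": {"p1", "p3"}, "C": {"p1", "p2"}}
print(pick_xor_set([("p1", "A"), ("p2", "B"), ("p3", "C")], known))
# -> all three pairs fit into a single XOR-ed transmission
```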



But how does a node know what packets a neighbor has?

  • Reception Reports

  • But reception reports may get lost or arrive too late

  • Use Guessing

    • If I receive a packet I assume all nodes closer to sender have received it


Beyond fixed routes

  • Piggyback on reception reports to learn whether the next-hop has the packet

  • Cancel unnecessary transmissions

[Figure: route S → A → B → D chosen by the routing protocol; S transmits, B heard S's transmission, so there is no need for A's transmission]

Opportunistic Routing [BM05]


Pseudo Broadcast (how to overhear)

  • Ideally, design a collision detection and back-off scheme for broadcast channels

  • In practice, we want a solution that works with off-the-shelf 802.11 drivers/cards

Our solution: piggyback on 802.11 unicast, which has collision detection and backoff.

  • Each XOR-ed packet is sent to the MAC address of one of the intended receivers

  • Put all cards in promiscuous mode



Experiment

  • 40 nodes

  • 400 m × 400 m

  • Senders and receivers are chosen randomly

  • Flows are duplex (e.g., ping)

  • Metric:

    Total Throughput of the Network


Current 802.11

[Figure: network throughput (KB/s) vs. number of flows in the experiment (1–20), no-coding baseline]


Opportunistic Listening & Coding

[Figure: network throughput (KB/s) vs. number of flows (1–20); opportunistic listening & coding vs. no coding]


Add Opportunistic Routing

[Figure: network throughput (KB/s) vs. number of flows (1–20); with opportunistic routing vs. opportunistic listening & coding vs. no coding]


Our Scheme vs. Current (2-way Flows)

[Figure: network throughput (KB/s) vs. number of flows (1–20); our scheme vs. no coding]

Huge throughput improvement, particularly at high congestion.



Completely random setting

  • 40 nodes

  • 400 m × 400 m

  • Senders and receivers are chosen randomly

  • Flows are one way

  • Metric:

    Total Throughput of the Network


Our Scheme vs. Current (1-way Flows)

[Figure: network throughput (KB/s) vs. number of flows; our scheme vs. no coding]

A unicast network coding scheme that works well in realistic situations.



A principled optimization approach to match or outperform routing

  • An optimization that yields a solution that is no worse than multicommodity flow

  • The optimization is in effect a relaxation of multicommodity flow – akin to Steiner tree relaxation for the multicast case

  • A solution of the problem implies the existence of a network code to accommodate the arbitrary demands – the types of codes subsume routing

  • All decoding is performed at the receivers

  • We can provide an optimization, with a linear code construction, that is guaranteed to perform as well as routing [Lun et al. 04]


Optimization

Optimization for arbitrary demands with decoding at receivers.

[Figure: the formulation gives a set partition of {1, . . . , M} that represents the sources that can be mixed (combined linearly) on links going into node j; demands over {1, . . . , M} at each sink t]



Coding and optimization

  • Sinks that receive a source process in C by way of link (j, i) either receive all the source processes in C or none at all

  • Hence source processes in C can be mixed on link (j, i) as the sinks that receive the mixture will also receive the source processes (or mixtures thereof) necessary for decoding

  • We step through the nodes in topological order, examining the outgoing links and defining global coding vectors on them (akin to [Jaggi et al. 03])

  • We can build the code over an ever-expanding front

  • We can go to coding over time by considering several flows for the different times – we let the coding delay be arbitrarily large

  • The optimization and the coding are done separately as for the multicast case, but the coding is not distributed



Delay

  • We have shown that network coding provides gains in throughput and energy consumption.

  • However, its delay characteristics are not well understood.

  • Let us consider a simple scenario, where the delay properties of transmission with and without coding can be investigated.

  • Such gains can be best studied in a rateless transmission scenario such as file download [Eryilmaz et al. 06]


System model

[Figure: a base station holds files 1, . . . , F, file f consisting of packets Pf,1, . . . , Pf,Kf; receiver i, i = 1, . . . , N, is reached over channel Ci[t]]

  • Each file is requested by a subset of the receivers.

  • ON/OFF channels: Ci[t] ∈ {0, 1}, i.i.d. over time slots and channels.



Goal

  • Completion time: the number of slots required to complete the download of all the files to all the interested receivers.

  • Channel Side Information (CSI): denotes the availability of C[t] = (C1[t], …, CN[t]) before the packet transmission decision.

  • Our goal is to understand the mean completion time behavior of optimal strategies with and without coding, in the presence and in the absence of CSI.


Broadcast scenario

[Figure: base station broadcasting a single file of K packets P1, . . . , PK; receiver i listens over channel Ci[t]; coded packets are Fq-combinations of the Pk]

  • Each channel can accommodate a single packet in a time slot when it is ON.

  • Ci[t] is Bernoulli distributed with mean ci.


Scheduling versus coding

[Figure: the same queue of packets P1, P2, P3 served by scheduling and by coding over a field with q = 2]

  • Scheduling: in slot t, the base station is allowed to transmit a packet P[t] ∈ {P1, …, PK}.

  • Linear coding: in slot t, the base station is allowed to transmit a packet P[t] satisfying P[t] = a1[t] P1 + · · · + aK[t] PK, where ak[t] ∈ Fq for all k ∈ {1, …, K}.


Random coding with and without CSI

  • Pick ak[t] uniformly at random from Fq for each k and t.

  • Each receiver needs K linearly independent combinations of the packets {P1, …, PK}.

  • On average, each channel must be ON slightly more than K times, and the excess vanishes as q grows; the cost of coding can be made arbitrarily close to the ideal K by choosing q large enough.

Result 1: The average completion time of the above asymptotically optimal coding strategy is given by [expression not preserved in the transcript].


Optimal scheduling without CSI

  • We assume that the base station gets feedback from a receiver only when it receives the whole file.

  • For symmetric channel conditions, i.e. ci = c for all i, Round Robin (RR) scheduling is optimal.

Result 2: The average completion time of the RR schedule is given by [expression not preserved in the transcript], for some constant in (0.5, 1).

Optimal scheduling with CSI for N = 2

[Figure: base station state diagram over the remaining packets P1, . . . , PK of the two receivers, with states s1, s2 and end state e]


Counterexample for N = 3, K = 3

[Figure: a channel realization with three receivers and packets P1, P2, P3 in which no schedule of plain packets completes optimally, while transmitting the coded packet P1 + P2 + P3 serves all three receivers]



Heuristic scheduling policy with CSI

  • The optimal policy is characterized using Dynamic Programming.

  • The size of the state space grows exponentially with N, K.

  • We study the following heuristic policy that has computational complexity comparable to the coding strategy.

  • Heuristic Broadcast Scheduling: In slot t, transmit the packet that is demanded by the largest number of receivers that are ON in that slot.
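The heuristic amounts to counting votes among the ON receivers; a toy sketch (our code):

```python
from collections import Counter

def heuristic_schedule(demands, on):
    """Transmit the packet demanded by the largest number of ON receivers.
    demands[j]: set of packets receiver j still needs; on: receivers ON this slot."""
    votes = Counter(p for j in on for p in demands[j])
    return max(votes, key=votes.get) if votes else None

demands = {1: {"P1", "P2"}, 2: {"P2"}, 3: {"P2", "P3"}}
print(heuristic_schedule(demands, on={1, 2, 3}))   # -> "P2" (wanted by all three)
```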



Coding vs. scheduling for broadcast scenario


Multiple unicast transmissions

[Figure: the same system model; base station with files of packets Pf,1, . . . , Pf,Kf, receiver i over channel Ci[t]]

  • F = N and Receiver-i demands File-i, for i = 1, …, N.

  • We are interested in the mean completion time of all files.



Optimal scheduling for symmetric channels

  • ci = c > 0, for all i = 1, …, N.

  • With CSI: the optimal scheduling strategy for symmetric channel conditions is the Longest Connected Queue (LCQ) policy [Tassiulas 93]: at every time slot, among all the receivers with ON channels, transmit a packet from the corresponding file with the maximum number of unserved packets.

  • Without CSI: the optimal scheduling strategy is again Round Robin (RR) scheduling.



Coding for general channel conditions

  • We seek answers to:

    • Can coding improve the performance?

    • Under what conditions?

    • How should it be implemented?

  • In particular, can it help to group files into classes?

[Figure: File-1, File-2, File-3 at the base station; Receiver-i demands File-i over a channel with mean ci]


Optimal coding strategy

Result 3: The optimal coding strategy, both in the presence and in the absence of CSI, is to linearly combine packets within the same file, but to perform no coding across different files.

  • Scheduling is necessary even for the coding strategy.

Result 4a (With CSI): Under symmetric channel conditions, an asymptotically optimal coding strategy is to schedule file transmissions using the LCQ policy, and to transmit random linear combinations of the packets from the scheduled file.

Result 4b (Without CSI): Under symmetric conditions, an asymptotically optimal coding strategy is to schedule file transmissions using RR scheduling, and to transmit random linear combinations of the packets from the scheduled file.



Coding vs. scheduling for unicast scenario

  • In the absence of CSI, the delay performance of scheduling is significantly worse than that of the coding strategy, even though the channel capacities are the same.



Security with network coding

  • Byzantine reliability

  • Byzantine detection [Ho et al 04]

  • Correction of Byzantine attackers [Jaggi et al 05]



Byzantine reliability

  • Robustness against faulty/malicious components with arbitrary behavior, e.g.

    • dropping packets

    • misdirecting packets

    • sending spurious information

  • Abstraction as Byzantine generals problem [LSP82]

  • Byzantine robustness in networking [P88,MR97,KMM98,CL99]



Byzantine reliability

  • Distributed randomized network coding can be extended to detect Byzantine behavior

    • Small computational and communication overhead

    • small number of hash bits included with each packet, calculated as simple polynomial function of data

  • Require only that a Byzantine attacker does not design and supply modified packets with complete knowledge of other nodes’ packets

  • Let us use the approach of [HLKMEK04]


Byzantine reliability

[Figure: packet layout with data symbols and hash symbols; the number of hash symbols trades off overhead against probability of detection]



Performance of Byzantine reliability

  • If the receiver gets s genuine packets, then the detection probability is at least 1 − ((k + 1)/2^u)^s, where k is the number of data symbols per hash symbol and u is the code length (a bound consistent with the figures below).

  • With 2% overhead (k = 50), code length 7, s = 5, the detection probability is 98.9%.

  • With 1% overhead (k = 100), code length 8, s = 5, the detection probability is 99.0%.
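A quick numeric check of that bound (assuming the form stated above):

```python
def detection_prob(k, code_len, s):
    """Assumed bound: P_detect >= 1 - ((k + 1) / 2**code_len)**s."""
    return 1 - ((k + 1) / 2 ** code_len) ** s

print(f"{detection_prob(50, 7, 5):.1%}")    # ~99.0% (the slides truncate to 98.9%)
print(f"{detection_prob(100, 8, 5):.1%}")   # ~99.0%
```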



Wiretapping

  • Perfect information security against a wiretapper [CY02, FMSS04]

  • Has cost implications if we seek a multiplicity of paths for security and not only for cost reduction

