# An Introduction to Network Coding


## An Introduction to Network Coding

Muriel Médard

Associate Professor

EECS

Massachusetts Institute of Technology

Ralf Koetter

Director

Institute for Communications Engineering

Technical University of Munich

### Outline of course

• An introduction to network coding:
  • Network model
  • Algebraic aspects
  • Delay issues

• Network coding for wireless multicast:
  • Distributed randomized coding
  • Erasure reliability
  • Use of feedback
  • Optimization in choice of subgraphs
  • Distributed optimization
  • Dealing with mobility
  • Relation to compression

• Network coding in non-multicast:
  • Algorithms
  • Heuristics

• Security with network coding:
  • Byzantine security
  • Wiretapping aspects

### Network coding

• Canonical example [Ahlswede et al. 00]

• What choices can we make?

• No longer distinct flows, but information

[Figure: butterfly network; source s sends bits b1 and b2 toward receivers y and z through intermediate nodes t, u, w, x, with w → x as the bottleneck link.]

### Network coding

• Picking a single bit does not work

• Time sharing does not work

• No longer distinct flows, but information

[Figure: butterfly network with routing only; forwarding b1 alone on the bottleneck link w → x leaves one of the receivers without b2.]

### Network coding

• Need to use algebraic nature of data

• No longer distinct flows, but information

[Figure: butterfly network with coding; node w forwards b1 + b2 on the bottleneck, and both receivers recover b1 and b2.]

[KM01, 02, 03]
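The butterfly example can be checked in a few lines. This is a minimal sketch (the `butterfly` helper is ours, not from the slides), taking + as XOR over F2; node and bit names follow the figure.

```python
# Butterfly network: node w forwards the XOR b1 ^ b2; each receiver decodes
# using the bit it hears directly plus the coded bit from the bottleneck.
def butterfly(b1: int, b2: int):
    coded = b1 ^ b2          # node w combines the two bits
    y = (b1, coded)          # receiver y hears b1 directly and the XOR via x
    z = (b2, coded)          # receiver z hears b2 directly and the XOR via x
    # each receiver recovers the missing bit by XOR-ing what it has
    y_decoded = (y[0], y[0] ^ y[1])
    z_decoded = (z[0] ^ z[1], z[0])
    return y_decoded, z_decoded

# both receivers obtain (b1, b2), which routing alone cannot achieve here
assert butterfly(1, 0) == ((1, 0), (1, 0))
```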

### Network coding for multicast:

• Distributed randomized coding

• Erasure reliability

• Use of feedback

• Optimization in choice of subgraphs

• Distributed optimization

• Dealing with mobility

### Randomized network coding

• The effect of the network is that of a transfer matrix from sources to receivers

• To recover symbols at the receivers, we require sufficient degrees of freedom – an invertible matrix in the coefficients of all nodes

• The realization of the determinant of the matrix will be non-zero with high probability if the coefficients are chosen independently and randomly

• Probability of success over the field Fq is at least (1 − d/q)^η, where d is the number of receivers and η is the number of links carrying random coefficients [Ho et al. 03]

• Randomized network coding can use any multicast subgraph which satisfies min-cut max-flow bound for each receiver [HKMKE03, HMSEK03, WCJ03] for any number of sources, even when correlated [HMEK04]

[Figure: a node j combines its endogenous inputs (packets arriving from upstream links) with its exogenous input.]
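A quick empirical illustration of the claim above (a sketch; `rank_mod_q` and `frac_invertible` are our helpers, not from the slides): random square matrices over a larger prime field are far more likely to be invertible, which is what makes randomized coding succeed with high probability.

```python
# Estimate the probability that a random k x k matrix over a prime field F_q
# is invertible, for two field sizes.
import random

def rank_mod_q(rows, q):
    """Rank of a matrix over the prime field F_q (Gauss-Jordan elimination)."""
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], q - 2, q)   # modular inverse (q prime)
        rows[rank] = [(x * inv) % q for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(a - f * b) % q for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def frac_invertible(k, q, trials=1000, seed=1):
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        m = [[rng.randrange(q) for _ in range(k)] for _ in range(k)]
        ok += rank_mod_q(m, q) == k
    return ok / trials

# over F_2 only ~30% of random 4x4 matrices are invertible; over F_257 nearly all
assert frac_invertible(4, 2) < 0.5 < frac_invertible(4, 257)
```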

### Erasure reliability

• Packet losses in networks result from

• congestion,

• buffer overflows,

• (in wireless) outage due to fading or change in topology

• Prevailing approach for reliability: Request retransmission

• Not suitable for

• high-loss environments,

• multicast,

• real-time applications.

### Erasure reliability

• Alternative approach: Forward Error Correction (FEC)

• Multiple description codes

• Erasure-correcting codes (e.g. Reed-Solomon, Tornado, LT, Raptor)

• End-to-end: Connection as a whole is viewed as a single channel; coding is performed only at the source node.

### Erasure reliability – single flow

• End-to-end erasure coding: Capacity is packets per unit time.

• As two separate channels: Capacity is packets per unit time.

• Can use block erasure coding on each channel, but delay is a problem.

• Network coding: minimum cut is capacity

• For erasures, correlated or not, we can in the multicast case deal with average flows uniquely [Lun et al. 04, 05], [Dana et al. 04]:

• Nodes store received packets in memory

• Random linear combinations of memory contents sent out

• Delay expressions generalize Jackson networks to the innovative packets

• Can be used in a rateless fashion

### Feedback for reliability

• Parameters we consider:

• delay incurred at B: excess time, relative to the theoretical minimum, that it takes for k packets to be communicated, disregarding any delay due to the use of the feedback channel

• block size

• feedback: number of feedback packets used

• (feedback rate Rf = number of feedback messages / number of received packets)

• memory requirement at B

• achievable rate from A to C

### Feedback for reliability

• Follow the approach of [Pakzad et al. 05], [Lun et al. 06]

• Scheme V allows us to achieve the min-cut rate, while keeping the average memory requirements at node B finite

• The feedback delay for Scheme V is smaller than that of the usual ARQ (with Rf = 1) by a factor of Rf

• Feedback is required only on link BC

[Fragouli et al. 07]

### Erasure reliability

• For erasures, correlated or not, we can in the multicast case deal with average flows uniquely [LME04], [LMK05], [DGPHE04]

• We consider a scheme [LME04] where

• nodes store received packets in memory;

• random linear combinations of memory contents sent out at every transmission opportunity (without waiting for full block).

• Scheme gets to capacity under arbitrary coding at every node for

• unicast and multicast connections

### Scheme for erasure reliability

• We have k message packets w1, w2, . . . , wk (fixed-length vectors over Fq) at the source.

• (Uniformly-)random linear combinations of w1, w2, . . . , wk are injected into the source’s memory according to a process with rate R0.

• At every node, (uniformly-)random linear combinations of memory contents sent out;

• received packets stored into memory.

• In every packet, store the length-k vector over Fq representing the transformation it is of w1, w2, . . . , wk: the global encoding vector.

### Coding scheme

• Since all coding is linear, we can write any packet x as a linear combination of w1, w2, . . . , wk:

• The vector γ is the global encoding vector of x.

• We send the global encoding vector along with x, in its header, incurring a constant overhead.

• The side information provided by γ is very important to the functioning of the scheme.
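A sketch of the mechanism above: each packet carries its global encoding vector γ in the header, and a receiver decodes once it holds k packets with linearly independent vectors. Arithmetic here is over F2 (XOR on integer payloads) to keep the example short; the slides allow any Fq.

```python
# Online Gaussian elimination over F_2: store each innovative packet in a
# basis keyed by its pivot column, then back-substitute to recover messages.
def decode(packets, k):
    """packets: list of (gamma, payload); returns the k messages, or None."""
    basis = {}                               # pivot column -> (gamma, payload)
    for g, p in packets:
        g = g[:]
        for col in range(k):
            if g[col]:
                if col in basis:             # reduce against the existing row
                    bg, bp = basis[col]
                    g = [a ^ b for a, b in zip(g, bg)]
                    p ^= bp
                else:                        # innovative: store at this pivot
                    basis[col] = (g, p)
                    break
    if len(basis) < k:
        return None                          # not enough degrees of freedom yet
    out = [0] * k
    for col in sorted(basis, reverse=True):  # back-substitution
        g, p = basis[col]
        for c in range(col + 1, k):
            if g[c]:
                p ^= out[c]
        out[col] = p
    return out

# three coded packets whose encoding vectors are linearly independent
msgs = [0xAB, 0xCD, 0xEF]
pkts = [([1, 1, 0], 0xAB ^ 0xCD),
        ([0, 1, 1], 0xCD ^ 0xEF),
        ([1, 1, 1], 0xAB ^ 0xCD ^ 0xEF)]
assert decode(pkts, 3) == msgs
```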

### Outline of proof

• Keep track of the propagation of innovative packets - packets whose auxiliary encoding vectors (transformation with respect to the n packets injected into the source’s memory) are linearly independent across particular cuts.

• Can show that, if R0 is less than capacity and the input process is Poisson, then the propagation of innovative packets through any node forms a stable M/M/1 queueing system in steady state.

• So, Ni, the number of innovative packets in the network is a time-invariant random variable with finite mean.

• We obtain delay expressions using in effect a generalization of Jackson networks for the innovative packets

• Particularly suitable for

• overlay networks using UDP, and

• wireless packet networks (have erasures and can perform coding at all nodes).

• Code construction is completely decentralized.

• Scheme can be operated ratelessly - can be run indefinitely until successful reception.

### Coding for packet losses - unicast

Average number of transmissions required per packet in random networks of varying size. Sources and sinks were chosen randomly according to a uniform distribution. Paths or subgraphs were chosen in each random instance to minimize the total number of transmissions required, except in the cases of end-to-end retransmission and end-to-end coding, where they were chosen to minimize the number of transmissions required by the source node. [Lun et al. 04]

### Explicit Feedback - Main Idea

[Figure: receiver knowledge spaces V1, V2, V3, …, Vn within the sender's space V.]

• Store linear combinations of original packets

• No need to store information commonly known at all receivers (i.e. V∆)

Vector spaces representing knowledge

V: Knowledge of sender

V∆: Common knowledge of all receivers

[Sundararajan et al. 07]

### Algorithm Outline

[Figure: per-slot update: incorporate the arrivals of slot (t-1) into V(t-1); incorporate channel state feedback to obtain Vj'(t) and V∆'(t); separate out the common knowledge, leaving U''(t) and Uj''(t).]

• V(t-1): knowledge of the sender after incorporating slot (t-1) arrivals

• Vj(t-1): knowledge of receiver j at the end of slot (t-1)

• Vj’(t): knowledge of receiver j after incorporating feedback

• Remaining part of information: U’’(t)


We can ensure that for all j:

Therefore, it is sufficient to store linear combinations corresponding to some basis of U’’(t)


### Incremental version of the vector spaces

• All the operations can be performed even after excluding the common knowledge from all the vector spaces, since it is not relevant any more. Define U(t) and Uj(t) such that

[Figure: the same per-slot update carried out on the incremental spaces U(t) and Uj(t).]

### Incremental version of the vector spaces

• Let Uj’(t) be the incremental knowledge of receiver j after including feedback

• The incremental common knowledge is then U∆’(t).

### The Algorithm

In time slot t,

• Compute U(t-1) after including slot (t-1) arrivals into U’’(t-1)

• Using the feedback, compute Uj’(t)’s and hence U∆’(t)

• Compute a basis B∆ for U∆’(t).

• Extend this basis to a basis B for U(t-1).


• Replace current queue contents with linear combinations of packets whose coefficient vectors are those in B\B∆

• Express all the vector spaces in terms of the new basis B\B∆. This basis spans U’’(t)

• Compute a linear combination g which is in U’’(t) but not in any of the Uj’’(t)’s (except if Uj’’(t) = U’’(t)). (This is possible iff the field size exceeds the number of receivers.)


### Bounding the queue size

• Q(t): Physical queue size at the end of slot t

• Can be proved using the following property of subspaces: dim(X ∩ Y) = dim(X) + dim(Y) − dim(X + Y), where X + Y denotes span(X ∪ Y).


• LHS : Physical queue size (the amount of the sender's knowledge that is not known at all receivers)

• RHS : Virtual queue size (sum of backlogs in linear degrees of freedom of all the receivers)

| | Uncoded Networks | Coded Networks |
| --- | --- | --- |
| Knowledge represented by | Set of packets received | Vector space spanned by the coefficient vectors of the received linear combinations |
| Amount of knowledge | Number of packets received | Number of linearly independent (innovative) linear combinations of packets received (i.e., dimension of the vector space) |
| Queue stores | All undelivered packets | Linear combinations of packets which form a basis for the coset space of the common knowledge at all receivers |
| Update rule after each transmission | If a packet has been received by all receivers, drop it | Recompute the common knowledge space V∆; compute a new set of linear combinations so that their span is independent of V∆ |

### Summary: Uncoded vs. Coded

• All computations can be performed on incremental versions of the vector spaces (the common knowledge can be excluded everywhere) – hence complexity tracks the queue size

• Overhead for the coding coefficients depends on how many uncoded packets get coded together. This may not track the queue size

• Overhead can be bounded by bounding the busy period of the virtual queues

• Question: given this use of coding for queueing, how do we manage the network resources to take advantage of network coding?

### Optimizing resource use

• Enables network to operate with nodes behaving as honest middlemen negotiating with their direct business contacts

• Example: Distributed Bellman-Ford

• Can we use network coding to do distributed optimization?

• Without network coding, an integrality constraint arises in time-sharing between the blue and red trees; the optimization is NP-complete

[Figure: multicast trees labeled with subgraph indicator vectors (1,1,0), (1,0,1), (1,1,1); coded flows b1 + b2 appear on shared links.]

### Relaxing the constraints on trees

[Figure: relaxed formulation: constraints are placed on link flows rather than on processes [Lun et al. 04]; links carry subgraph vectors (1,1,0), (1,0,1), (1,1,1), with flow conserved from source to sinks.]

### Wireless case

• Wireless systems have a multicast advantage

• Omnidirectional antennas: i → k implies i → j “for free”

• Same distributed approach holds

[Figure: wireless multicast advantage: a transmission i → k also reaches j; link cost D is a convex (positive second derivative) function of the flow, up to the capacity limit.]

### Optimization

• For any convex cost functions

• The vector z is part of a feasible solution for the optimization problem if and only if there exists a network code that sets up a multicast connection in the graph G at average rate arbitrarily close to R from source s to terminals in the set T and that puts a flow arbitrarily close to zij on each link (i, j)

• Proof follows from min-cut max-flow necessary and sufficient conditions

• Polynomial-time

• Can be solved in a distributed way [Lun et al. 05]

• Steiner-tree problem can be seen to be this problem with extra integrality constraints

### Distributed approach

• Consider the problem

• The feasible set is the bounded polyhedron of points x(t) satisfying the conservation-of-flow constraints and the capacity constraints

[LRKMAL05]

### Distributed approach

• Solve the subproblem on the previous slide for each t in T

• We obtain a new updated price

• Use projection arguments to relate new price to old

• Use duality to recover coded flows from price

### Recovering the primal

• Problem of recovering primal from approximation of dual

• Use approach of [SC96] for obtaining primal from subgradient approximation to dual

• The conditions can be coalesced into a single algorithm to iterate in a distributed fashion towards the correct cost

• There is inherent robustness to change of costs, as in classical distributed Bellman-Ford approach to routing

### Extensions

• Can be extended to any strictly convex cost

• Primal-dual optimization

• Asynchronous, continuous-time algorithm

• Question: how many messages need to be exchanged for costs to converge?

### Can we set up a simplified graph?

• CODECAST [Park et al. 06]

• Every coded packet carries in the header three more fields, vldd, dist, and nust

• The one-bit field vldd is set if either the sender is a multicast receiver or has received a previous block packet with vldd bit set from one of the sender’s downstream nodes

• A node considers a neighboring node to be downstream if the neighboring node transmits a packet with a larger dist value than the dist value the node maintains

• Each node maintains as a local variable dist, indicating the hop distance from the multicast data source, and copies its value to every coded packet the node transmits.

• Every time a node transmits a coded packet, dist is recalculated as one plus the biggest dist value found in the headers of the packets which are combined to yield the coded packet

• Conversely, a node considers a neighboring node to be an upstream node if the neighboring node transmits a coded packet in a new block or with a smaller dist value

• Each node also maintains nust, indicating the number of upstream nodes as a local variable and records its value in the header of every packet the node transmits

• A node broadcasts to the neighborhood r coded packets
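The dist bookkeeping above can be sketched in a few lines. Only the two rules stated on the slide are modeled (the helper names and the handling of equal dist values are our assumptions; the rest of the CODECAST protocol is omitted):

```python
# CODECAST-style dist bookkeeping, as described in the bullets above.
def next_dist(header_dists):
    """dist of a freshly coded packet: one plus the largest dist value found
    in the headers of the packets combined into it."""
    return 1 + max(header_dists)

def classify_neighbor(neighbor_dist, my_dist):
    """A neighbor is downstream if it transmits with a larger dist value,
    upstream if with a smaller one."""
    if neighbor_dist > my_dist:
        return "downstream"
    if neighbor_dist < my_dist:
        return "upstream"
    return "peer"   # equal dist: not specified on the slide; our assumption

assert next_dist([2, 3, 3]) == 4
assert classify_neighbor(5, 3) == "downstream"
assert classify_neighbor(1, 3) == "upstream"
```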

• Compare with On Demand Multicast Routing Protocol (ODMRP) [Lee et al. 02]

### Compression using network coding

• We have heretofore assumed independent sources

• In sensor applications, we cannot assume such independence

• Can we make use of the correlation via network coding? [Ho et al. 06]

• Joint source and network coding [Lee et al. 07]


### Problem Formulation

[Equations elided: cost function; desired subgraph; desired subgraph cost.]

### Joint vs. Separate

• Sum of rates on virtual links > R: joint solution

• Sum of rates on virtual links = R: separate solution

[Figure: example with R = 3 for each link; the joint solution costs 9 versus 10.5 for the separate solution.]

### Simulation: Random Networks

Network model:

• n nodes randomly placed in h x w box

• nodes within distance r connected by edge

• all edges have unit weight, no capacity constraints

sources along top edge

### Simulation Results

• 2 sources, H(Xi) = 2, H(X1, X2) = R

• The benefit of joint coding increases as the network widens and as the sources are more correlated.


### Network coding – beyond multicast

• Traditional algorithms rely on routing, based on wireline systems

• Can we use incidental reception in an active manner, rather than treating it as useless or as interference?

• Complementary to techniques such as UWB and MIMO that make use of additional, albeit possibly plentiful, resources over links

• Network coding seeks instead to use available energy and degrees of freedom fully over networks

[Figure: two-way relay example; the end nodes exchange packets a and b through a relay that broadcasts a + b, and each end node recovers the other's packet from its own.]

Extend this idea [Katabi et al. 05]
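The relay exchange above is a one-liner to verify (the `relay_exchange` helper is ours; + is XOR on packet contents):

```python
# Two-way relay: one broadcast of a ^ b replaces two separate forwardings,
# because each end node already knows its own packet.
def relay_exchange(a: int, b: int):
    broadcast = a ^ b            # the relay sends a single coded transmission
    at_left = broadcast ^ a      # left node knows a, recovers b
    at_right = broadcast ^ b     # right node knows b, recovers a
    return at_left, at_right

assert relay_exchange(0x5A, 0xC3) == (0xC3, 0x5A)
```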

### Opportunism (1)

Opportunistic Listening:

• Every node listens to all packets

• It stores all heard packets for a limited time

• Node sends Reception Reports to tell its neighbors what packets it heard

• Reports are annotations to packets

• If no packets to send, periodically send reports

### Opportunism (2)

Opportunistic Coding:

• Each node uses only local information

• Use your favorite routing protocol

• To send packet p to neighbor A, XOR p with packets already known to A

• Thus, A can decode

• But how to benefit multiple neighbors from a single transmission?

### Efficient coding

[Figure: four-node example; D relays packets for neighbors A, B, and C, and arrows show next-hops.]

• Naive choice: C will get the red packet, but A can't get the blue packet

• OK coding: both A and C get a packet

• Best coding: A, B, and C each get a packet

To XOR n packets, each next-hop should have the n-1 packets encoded with the packet it wants
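The rule above can be stated as a predicate (a sketch; the function and its argument names are ours): XOR n packets together only if every next-hop already holds the other n-1, so each can decode the one it wants.

```python
# COPE-style coding check: packets may be XORed together iff each next-hop
# neighbor wants one of them and already has all the others.
def can_code_together(packets, wants, has):
    """packets: set of packet ids to XOR;
    wants[nbr]: the packet id neighbor nbr needs;
    has[nbr]: set of packet ids neighbor nbr already holds."""
    for nbr, want in wants.items():
        others = set(packets) - {want}
        if want not in packets or not others <= has[nbr]:
            return False
    return True

# A wants p1, C wants p2; each has overheard the other's packet, so coding works
assert can_code_together({"p1", "p2"},
                         {"A": "p1", "C": "p2"},
                         {"A": {"p2"}, "C": {"p1"}})
# if A never overheard p2, the XOR would be undecodable at A
assert not can_code_together({"p1", "p2"},
                             {"A": "p1", "C": "p2"},
                             {"A": set(), "C": {"p1"}})
```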

### But how does a node know what packets a neighbor has?

• Reception Reports

• But reception reports may get lost or arrive too late

• Use Guessing

• If I receive a packet, I assume all nodes closer to the sender have received it

B heard S’s transmission

### Beyond fixed routes

• Piggyback on reception report to learn whether next-hop has the packet

• cancel unnecessary transmissions

[Figure: the routing protocol chooses the route S → A → B → D; B overhears S's transmission, so there is no need for A to transmit.]

Opportunistic Routing [BM05]

### How to overhear

• Ideally, design a collision detection and back-off scheme for broadcast channels

• In practice, we want a solution that works with off-the-shelf 802.11 drivers/cards

• Our solution: piggyback on 802.11 unicast, which has collision detection and backoff

  • Each XOR-ed packet is sent to the MAC address of one of the intended receivers

  • Put all cards in promiscuous mode

### Experiment

• 40 nodes

• 400 m × 400 m

• Senders and receivers are chosen randomly

• Flows are duplex (e.g., ping)

• Metric:

Total Throughput of the Network

### Current 802.11

[Figure: network throughput (KB/s) vs. number of flows (1 to 20), without coding.]

### Opportunistic Listening & Coding

[Figure: network throughput (KB/s) vs. number of flows (1 to 20): no coding, opportunistic listening & coding, and a further curve adding opportunistic routing.]

### Our Scheme vs. Current (2-way Flows)

[Figure: network throughput (KB/s) vs. number of flows (1 to 20): our scheme vs. no coding.]

Huge throughput improvement, particularly at high congestion.

### Completely random setting

• 40 nodes

• 400 m × 400 m

• Senders and receivers are chosen randomly

• Flows are one way

• Metric:

Total Throughput of the Network

### Our Scheme vs. Current (1-way Flows)

[Figure: network throughput (KB/s) vs. number of flows: our scheme vs. no coding.]

A unicast network coding scheme that works well in realistic situations.

### A principled optimization approach to match or outperform routing

• An optimization that yields a solution that is no worse than multicommodity flow

• The optimization is in effect a relaxation of multicommodity flow – akin to Steiner tree relaxation for the multicast case

• A solution of the problem implies the existence of a network code to accommodate the arbitrary demands – the types of codes subsume routing

• All decoding is performed at the receivers

• We can provide an optimization, with a linear code construction, that is guaranteed to perform as well as routing [Lun et al. 04]

[Figure: each node j is assigned a set partition of {1, . . . , M} that represents the sources that can be mixed (combined linearly) on links going into j; terminal t demands {1, . . . , M}.]

### Optimization

Optimization for arbitrary demands with decoding at receivers

### Coding and optimization

• Sinks that receive a source process in C by way of link (j, i) either receive all the source processes in C or none at all

• Hence source processes in C can be mixed on link (j, i) as the sinks that receive the mixture will also receive the source processes (or mixtures thereof) necessary for decoding

• We step through the nodes in topological order, examining the outgoing links and defining global coding vectors on them (akin to [Jaggi et al. 03])

• We can build the code over an ever-expanding front

• We can go to coding over time by considering several flows for the different times – we let the coding delay be arbitrarily large

• The optimization and the coding are done separately as for the multicast case, but the coding is not distributed

### Delay

• We have shown network coding to provide gains in throughput and energy consumption.

• However, its delay characteristics are not well understood.

• Let us consider a simple scenario, where the delay properties of transmission with and without coding can be investigated.

• Such gains can be best studied in a rateless transmission scenario such as file download [Eryilmaz et al. 06]

### System model

[Figure: a base station holding files 1, …, F (file f has packets Pf,1, …, Pf,Kf) serves N receivers over ON/OFF channels C1[t], …, CN[t].]

• Each file is requested by a subset of the receivers.

• ON/OFF: Ci[t] ∈ {0, 1}, i.i.d. over time slots and channels.

### Goal

• Completion time: Number of slots required to complete the download of all the files to all the interested receivers.

• Channel Side Information (CSI): denotes the availability of C[t] = (C1[t], …, CN[t]) before the packet transmission decision.

• Our goal is to understand the mean completion time behavior of optimal strategies with and without coding, and in the presence and absence of CSI.

(biKFq )

C1 [t]

b1K

bmK

C2 [t]

Base

Station

PK

P1

CN [t]

Single file (K packets)

• Each channel can accommodate a single packet in a time slot when it is ON.

• Ci[t] is Bernoulli distributed with mean ci.

[Figure: delivering packets P1, P2, P3 by scheduling vs. by coding over q = 2.]

### Scheduling versus coding

• Scheduling: In slot t, the base station is allowed to transmit a packet P[t] ∈ {P1, …, PK}.

• Linear Coding: In slot t, the base station is allowed to transmit a packet P[t] = a1[t]P1 + … + aK[t]PK, where ak[t] ∈ Fq for all k ∈ {1, …, K}.

Result 1: The average completion time of the above asymptotically optimal coding strategy is given by

### Random coding with and without CSI

• Pick ak[t] uniformly at random from Fq for each k and t.

• Each receiver needs K linearly independent combinations of the packets, {P1, …, PK}.

• On average, each channel must be ON a number of times that can be made arbitrarily close to K by choosing q large enough.

Result 2: The average completion time of the RR Schedule is given by [expression elided], for some constant in (0.5, 1).

### Optimal scheduling without CSI

• We assume that the base station gets feedback from a receiver only when it receives the whole file.

• For symmetric channel conditions, i.e., ci = c for all i, Round Robin (RR) Scheduling is optimal.

[Figure: sample transmission schedules for two receivers: round robin repeats individual packets P1, …, PK after erasures, while coded transmissions send combinations such as P1 + P2 + P3.]

### Heuristic scheduling policy with CSI

• The optimal policy is characterized using Dynamic Programming.

• The size of the state space grows exponentially with N, K.

• We study the following heuristic policy that has computational complexity comparable to the coding strategy.

• Heuristic Broadcast Scheduling: In slot t, transmit the packet that is demanded by the largest number of receivers that are ON in that slot.

### Coding vs. scheduling for broadcast scenario

Multiple unicast transmissions

[Figure: the base station holds files 1, …, N and serves receivers over ON/OFF channels Ci[t]; receiver i demands file i.]

• F = N and Receiver-i demands File-i, for i = 1, …, N.

• Interested in the mean completion time of all files.

### Optimal scheduling for symmetric channels

• ci = c > 0, for all i = 1, …, N.

• With CSI: The optimal scheduling strategy for symmetric channel conditions is the Longest Connected Queue (LCQ) policy [Tassiulas 93]: at every time slot, among all the receivers with ON channels, transmit a packet from the corresponding file with the maximum number of unserved packets.

• Without CSI: The optimal scheduling strategy is again Round Robin (RR) Scheduling.

### Coding for general channel conditions

• Can coding improve the performance?

• Under what conditions?

• How should it be implemented?

• In particular, can it help to group files into classes?

[Figure: files 1, 2, 3 delivered over channels with means c1, c2, c3.]

Result 3: The optimal coding strategy, in both the presence and absence of CSI, is to linearly combine packets within the same file, but to perform no coding across different files.

• Scheduling is necessary even for the coding strategy.

Result 4a (With CSI): Under symmetric channel conditions, an asymptotically optimal coding strategy is to schedule file transmissions using the LCQ policy, and to transmit random linear combinations of the packets from the scheduled file.

Result 4b (Without CSI): Under symmetric conditions, an asymptotically optimal coding strategy is to schedule file transmissions using RR scheduling, and to transmit random linear combinations of the packets from the scheduled file.

### Coding vs. scheduling for unicast scenario

• In the absence of CSI, the delay performance of scheduling is significantly worse than that of the coding strategy, even though the channel capacities are the same.

### Security with network coding

• Byzantine reliability

• Byzantine detection [Ho et al. 04]

• Correction of Byzantine attackers [Jaggi et al. 05]

### Byzantine reliability

• Robustness against faulty/malicious components with arbitrary behavior, e.g.

• dropping packets

• misdirecting packets

• sending spurious information

• Abstraction as Byzantine generals problem [LSP82]

• Byzantine robustness in networking [P88,MR97,KMM98,CL99]

### Byzantine reliability

• Distributed randomized network coding can be extended to detect Byzantine behavior

• Small computational and communication overhead

• small number of hash bits included with each packet, calculated as simple polynomial function of data

• Require only that a Byzantine attacker does not design and supply modified packets with complete knowledge of other nodes’ packets

• Let us use the approach of [HLKMEK04]

[Figure: packet format: data symbols followed by hash symbols.]
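As a sketch of why hash symbols can coexist with coding at interior nodes (this polynomial hash and the evaluation point x = 7 are our illustration, not the exact [HLKMEK04] construction): a hash that is a linear functional of the data symbols commutes with the linear combinations the network applies, so coded packets remain checkable.

```python
# A polynomial hash over a large prime field, linear in the data symbols.
Q = (1 << 61) - 1          # a large prime, chosen only for this sketch

def poly_hash(data_symbols):
    """h = sum_i d_i * x^i mod Q, evaluated by Horner's rule at x = 7."""
    x, h = 7, 0
    for d in reversed(data_symbols):
        h = (h * x + d) % Q
    return h

# linearity: the hash of a linear combination of packets equals the same
# linear combination of their hashes
a, b = [1, 2, 3], [4, 5, 6]
comb = [(3 * u + 5 * v) % Q for u, v in zip(a, b)]
assert poly_hash(comb) == (3 * poly_hash(a) + 5 * poly_hash(b)) % Q
```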

### Performance of Byzantine reliability

• If the receiver gets s genuine packets, then the detection probability is at least

• With 2% overhead (k = 50), code length=7, s = 5, the

detection probability is 98.9%.

• With 1% overhead (k = 100), code length=8, s = 5, the detection probability is 99.0%.

### Wiretapping

• Perfect information security against a wiretapper [CY02, FMSS04]

• Has cost implications if we seek to have a multiplicity of paths for security, and not only for cost reduction