Cooperative Communications in Networks: Random coding for wireless multicast
Performance for fading channels and inter-session coding
Brooke Shrader and Anthony Ephremides
University of Maryland, October 2008
Joint work with Randy Cogill, University of Virginia
Introduction and Motivation
To deliver packets reliably to multiple destinations, the source can either
repeatedly send packets (ARQ), or
use network coding.
Random linear coding for multicast
Form random linear combinations of K source packets s1, …, sK:
c = a1s1 + a2s2 + … + aKsK,
where the coefficients ai are generated randomly and uniformly from the u-ary alphabet and the sum is taken in the finite field. If ai = 0 for all i, generate new coefficients.
Transmit the coefficients ai in the packet header.
Decode: solve a system of linear equations in the si.
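As an illustration, here is a minimal runnable sketch of this encode/decode cycle, assuming a prime field size U so the finite-field sums reduce to modular arithmetic; U, K, and L are illustrative values, not parameters from the talk.

```python
# Minimal sketch of random linear coding over a prime field GF(U).
# The slides use a general u-ary alphabet; a prime U keeps the demo to
# plain modular arithmetic. U, K, and L below are illustrative values.
import random

U = 257   # field size (prime)
K = 4     # packets combined per coded packet
L = 8     # symbols per packet

def rand_combo(packets):
    """One random linear combination c = a1*s1 + ... + aK*sK over GF(U)."""
    a = [0] * K
    while not any(a):                       # if ai = 0 for all i, redraw
        a = [random.randrange(U) for _ in range(K)]
    c = [sum(ai * s[j] for ai, s in zip(a, packets)) % U for j in range(L)]
    return a + c                            # coefficients (header) + payload

def rref(rows):
    """Reduced row echelon form mod U over the coefficient columns."""
    m = [r[:] for r in rows]
    rank = 0
    for col in range(K):
        piv = next((i for i in range(rank, len(m)) if m[i][col]), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][col], U - 2, U)   # inverse via Fermat's little theorem
        m[rank] = [x * inv % U for x in m[rank]]
        for i in range(len(m)):
            if i != rank and m[i][col]:
                f = m[i][col]
                m[i] = [(x - f * y) % U for x, y in zip(m[i], m[rank])]
        rank += 1
    return m, rank

packets = [[random.randrange(U) for _ in range(L)] for _ in range(K)]
rx = []
while True:
    rx.append(rand_combo(packets))          # one coded packet arrives
    m, rank = rref(rx)
    if rank == K:                           # K independent combinations
        break
decoded = [row[K:] for row in m[:K]]        # solve the system in the si
assert decoded == packets
```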
Random linear coding
Random linear coding approaches capacity as K→∞.
Policy: fix K, the number of packets combined in each coding operation.
Time spent waiting to form a group of K packets adds to the delay.
[Plot: delay for multicast, 3 destinations, q = 0.9]
Policy: let k, the number of packets in coding, vary according to traffic load, with 1 ≤ k ≤ K.
[Plot: delay for unicast, 1 destination, q = 0.5]
As K → ∞, the delay performance approaches the retransmission delay.
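A Monte Carlo sketch of this comparison, assuming i.i.d. packet-erasure channels with reception probability q to each of M destinations and (for a large alphabet) that every received combination is linearly independent; the parameter values mirror the multicast plot above (3 destinations, q = 0.9) but are otherwise illustrative.

```python
# Compare per-packet delay of ARQ retransmission vs. random linear
# coding for multicast over i.i.d. erasure channels.
import random

def arq_slots(K, M, q):
    """Slots to deliver K packets: repeat each until all M destinations have it."""
    total = 0
    for _ in range(K):
        pending = M
        while pending:
            total += 1
            pending = sum(random.random() > q for _ in range(pending))
    return total

def coding_slots(K, M, q):
    """Slots until every destination has collected K coded packets."""
    have = [0] * M
    slots = 0
    while min(have) < K:
        slots += 1
        for m in range(M):
            if have[m] < K and random.random() < q:
                have[m] += 1
    return slots

K, M, q, trials = 10, 3, 0.9, 2000
arq = sum(arq_slots(K, M, q) for _ in range(trials)) / (trials * K)
cod = sum(coding_slots(K, M, q) for _ in range(trials)) / (trials * K)
print(f"slots per packet: ARQ {arq:.2f}, coding {cod:.2f}")
```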
Coding across sessions means that receivers decode additional packets that aren't intended for them.

Recent/Ongoing work
Let Tm denote the number of slots needed for destination m to collect K linearly independent random linear combinations. The multicast throughput is
K / E[max(T1, T2, …, TM)].
Difficulty: Tm are correlated due to correlation in the random linear combinations sent to different destination nodes.
This is true even if the channels to the destination nodes are independent.
Assume: the channels to the M destinations are identically distributed (but not independent).
For identically distributed and correlated random variables X1, X2, …, XM and for any t > 0,
E[max(X1, X2, …, XM)] ≤ t + M·E[(X1 − t)+],
since max(X1, …, XM) ≤ t + (X1 − t)+ + … + (XM − t)+ holds pointwise. Then the multicast throughput is lower bounded, for any t > 0, as
K / E[max(T1, …, TM)] ≥ K / (t + M·E[(T1 − t)+]).
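A quick numerical check of this bound, sampling each Ti as the number of slots to collect K packets over an erasure channel with reception probability q. The bound holds under any correlation, so independent samples suffice for the demo; K, M, q, and t are assumed values.

```python
# Monte Carlo check of E[max(X1,...,XM)] <= t + M*E[(X1 - t)+]
# for identically distributed Xi (slots to collect K packets).
import random

def slots_to_collect(K, q):
    n = got = 0
    while got < K:
        n += 1
        got += random.random() < q
    return n

K, M, q, t, trials = 10, 10, 0.7, 16.0, 50_000
e_max = sum(max(slots_to_collect(K, q) for _ in range(M))
            for _ in range(trials)) / trials
e_excess = sum(max(slots_to_collect(K, q) - t, 0.0)
               for _ in range(trials)) / trials
print(f"E[max]  ~ {e_max:.2f}")
print(f"bound   ~ {t + M * e_excess:.2f}")       # t + M*E[(X1 - t)+]
print(f"throughput lower bound ~ {K / (t + M * e_excess):.3f}")
```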
I: Packet length and overhead
Random network coding: overhead is needed to transmit the coefficients of the random code.
Packet length (symbols per packet and alphabet size) must be sufficiently large in order to
approach min-cut capacity, and
ensure small (fractional) overhead.
Our approach: model the packet erasure probability as a function of packet length (symbols per packet and alphabet size).
Assume that there is no channel coding within packets: for a packet to be received, every one of its symbols must be received.
q: Probability that a transmitted packet is successfully received at a destination node.
Pu: u-ary symbol error probability of the modulation scheme; depends on the SNR and channel model (e.g., AWGN).
Each packet consists of n u-ary symbols, so with independent symbol errors, q = (1 − Pu)^n.
Coding is performed on groups of K packets.
Accounting for the K coefficient symbols carried in each packet header, the multicast throughput is lower bounded, for any t > 0, as
(n / (n + K)) · K / (t + M·E[(T1 − t)+]),
where n / (n + K) is the ratio of information to information+overhead.
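A short numeric sketch of the resulting trade-off: increasing n shrinks the fractional overhead K / (n + K) but lowers q = (1 − Pu)^n. The value of Pu here is an assumed constant, not one derived from a particular modulation scheme.

```python
# Packet-length trade-off: overhead fraction vs. erasure probability.
Pu, K = 1e-3, 10          # assumed symbol error probability and group size
for n in (50, 250, 1000):
    q = (1 - Pu) ** n     # every symbol must arrive for packet reception
    info_frac = n / (n + K)
    print(f"n={n:5d}  q={q:.3f}  information fraction={info_frac:.3f}")
```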
The channel to each destination node evolves as a Markov chain with “Good” and “Bad” states.
qG: Probability that a transmitted packet is received in “Good” state
qB: Probability that a transmitted packet is received in “Bad” state
A packet-erasure version of the Gilbert channel model.
State (S, j), where S is the “Good” or “Bad” state and j = 0, 1, …, K is the number of linearly independent random linear combinations that have been received.
P: transition probability matrix
Assume qB = 0, so no packets are received in the “Bad” state; take the initial state to be S = G.
Transmission time T1: time to reach state (S,K) from (G,0).
Compare to a time-invariant channel whose probability of reception equals the stationary average q̄ = πG·qG + πB·qB (= πG·qG when qB = 0).
[Plot: throughput for M = 10, n = 250, u = 8; QAM modulation over an AWGN channel with SNR per bit 3.5 dB in the “Good” state and −∞ dB (no reception) in the “Bad” state.]
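A simulation sketch of this packet-erasure Gilbert model; the state transition probabilities gb and bg are hypothetical values, and each received combination is assumed linearly independent.

```python
# Two-state ("Good"/"Bad") packet-erasure Gilbert channel: estimate
# E[T1], the time to collect K combinations starting from (G, 0),
# and compare with a time-invariant channel of the same average q.
import random

qG, qB = 0.9, 0.0     # reception probability per state (qB = 0 as in slides)
gb, bg = 0.1, 0.3     # P(G->B), P(B->G): hypothetical transition probs
K, trials = 10, 20000

def gilbert_T1():
    state, j, t = "G", 0, 0
    while j < K:
        t += 1
        if random.random() < (qG if state == "G" else qB):
            j += 1    # each received combination assumed independent
        if state == "G":
            state = "B" if random.random() < gb else "G"
        else:
            state = "G" if random.random() < bg else "B"
    return t

pi_G = bg / (gb + bg)                  # stationary probability of "Good"
q_bar = pi_G * qG + (1 - pi_G) * qB
ET1 = sum(gilbert_T1() for _ in range(trials)) / trials
print(f"Gilbert channel:          E[T1] ~ {ET1:.1f} slots")
print(f"Time-invariant (q={q_bar:.2f}): E[T1] = {K / q_bar:.1f} slots")
```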
II: Multicast throughput for coding across sessions
Random linear coding: create random linear combinations of the K head-of-line packets, one from each session.
For successful decoding, each destination must decode the packets from all K multicast sessions.
Using the bound on E[max(X1, X2, …, XMK)] over all MK receivers (M per session, K sessions), we bound the throughput, for any t > 0, as
K / (t + MK·E[(T1 − t)+]).
[Plot: throughput of coding across sessions vs. retransmissions (upper bound)]
For a large number of sessions and receivers per session, coding outperforms retransmissions.
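A Monte Carlo sketch of this comparison, assuming i.i.d. erasure links with reception probability q, retransmissions that serve the K sessions one at a time, and linear independence of received combinations; all parameter values are illustrative.

```python
# Coding across K sessions vs. plain retransmissions, M receivers each.
import random

def retransmission_slots(K, M, q):
    """Serve sessions in turn; repeat each packet until its M receivers have it."""
    slots = 0
    for _ in range(K):                  # one head-of-line packet per session
        pending = M
        while pending:
            slots += 1
            pending = sum(random.random() > q for _ in range(pending))
    return slots

def coding_slots(K, M, q):
    """All M*K receivers must collect K combinations to decode everything."""
    need = [K] * (M * K)
    slots = 0
    while any(need):
        slots += 1
        for r in range(M * K):
            if need[r] and random.random() < q:
                need[r] -= 1
    return slots

K, M, q, trials = 20, 5, 0.8, 500
rt = sum(retransmission_slots(K, M, q) for _ in range(trials)) / trials
cd = sum(coding_slots(K, M, q) for _ in range(trials)) / trials
print(f"throughput (packets/slot): retransmissions {K/rt:.3f}, coding {K/cd:.3f}")
```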