
Connections between network coding and stochastic network theory

Bruce Hajek

Presentation Transcript


1. Connections between network coding and stochastic network theory
Bruce Hajek, Stochastic Networks Conference, June 19-24, 2006
Abstract: Randomly generated coded information blocks form the basis of novel coding techniques used in communication networks. The most studied case involves linear coding, in which coded blocks are generated as linear combinations of data blocks with randomly chosen coefficients. Digital fountain codes, including Luby's LT codes and Shokrollahi's Raptor codes, involve coding at the source, while network coding involves coding within the network. Recently Maneva and Shokrollahi found a connection between the analysis of digital fountain codes and fluid and diffusion limit methods, such as those in the work of Darling and Norris. A simplified description of the connection is presented, with implications for code design. (Background reading can be found at http://courses.ece.uiuc.edu/ece559/spring06BH/ )

2. Orientation
On Thursday, Ralf Koetter discussed network coding: coding within the network. This talk is limited to the topic of coding at source nodes, which is much closer to real-world use. (See www.digitalfountain.com) Stochastic network theory can be helpful in learning how to apply such codes in networks (e.g., the tradeoff between buffer size and erasure protection) and in designing such codes. This talk focuses on the latter.

3. Outline
• Multicast, and linear codes for erasures
• Luby's LT codes
• Markov performance analysis for the Poisson model
• Fluid approximation and an application
• Diffusion approximation and an application

4. Multicast with lost packets
Source message: k symbols S1, S2, ..., Sk (fixed-length binary strings). The symbols are repeatedly broadcast in random order to many receivers, but can be lost. Each receiver tries to collect a complete set, for example: S1, S4, S5, ..., Sk-2, Sk.

5. Multicast with coding at source, and lost packets
Source message: k symbols S1, S2, ..., Sk (fixed-length binary strings). The source forms m coded symbols S1', S2', ..., Sm'. Coded symbols are repeatedly broadcast in random order to many receivers, but can be lost. Each receiver tries to collect enough distinct coded packets, for example: S1', S4', S5', ..., Sm-2', Sm'. For a good code, k or only a few more coded packets are enough. If m >> k, then the problem of duplicates at the receiver is reduced.
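The gain from avoiding duplicates can be made concrete with a back-of-the-envelope comparison; the following Python sketch (numbers are illustrative assumptions, not from the talk) contrasts the coupon-collector cost of uncoded multicast with the roughly k packets a good rateless code needs:

```python
k = 1000  # number of source symbols (illustrative)

# Uncoded multicast: collecting all k distinct symbols from uniform random
# broadcasts is the coupon-collector problem; expected receptions = k * H_k.
H_k = sum(1.0 / i for i in range(1, k + 1))
print(f"uncoded, expected receptions: {k * H_k:.0f}")  # about k ln k

# Coded multicast: roughly k coded packets suffice, plus a small overhead;
# with m >> k distinct coded symbols, duplicates are rare.
overhead = 0.05  # assumed 5% decoding overhead, for illustration only
print(f"coded, approximate receptions: {k * (1 + overhead):.0f}")
```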

6. Linear coding and greedy decoding
[Figure: worked decoding example. Source symbols: S1 = 0101, S2 = 0111, S3 = 1101, S4 = 0001. Received payloads are XORs of subsets of the source symbols; each payload involving a single unknown symbol reveals that symbol, which is then substituted into the remaining payloads, and the process repeats.]

7. Greedy decoding can get stuck
[Figure: three unknown source symbols S1, S2, S3 and received payloads 1000, 0111, 0001; each payload is the XOR of two or more unknown symbols, so no payload has degree one and greedy decoding cannot start.] Nevertheless, we will stick to using greedy decoding.
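A minimal Python sketch of the greedy (peeling) decoder, under the assumption that coded symbols are carried as (index set, integer payload) pairs; all names are illustrative:

```python
def greedy_decode(k, received):
    """Greedy (peeling) decoder.  `received` is a list of
    (index_set, payload) pairs: index_set holds the source-symbol indices
    XORed into payload (symbols are represented as ints here).
    Returns {index: symbol} for every source symbol recovered."""
    coded = [[set(idx), p] for idx, p in received]   # mutable working copies
    recovered = {}
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for entry in coded:
            idx = entry[0]
            # Strip already-recovered symbols out of this coded symbol.
            for i in list(idx):
                if i in recovered:
                    idx.discard(i)
                    entry[1] ^= recovered[i]
            # A coded symbol of reduced degree one releases a source symbol.
            if len(idx) == 1:
                recovered[idx.pop()] = entry[1]
                progress = True
    return recovered

# The stuck example: every payload covers two unknowns, so nothing decodes.
print(greedy_decode(3, [({0, 1}, 0b1000), ({1, 2}, 0b0111), ({0, 2}, 0b0001)]))
# -> {} : with no degree-one symbol, greedy decoding cannot even start.
```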

8. LT codes (M. Luby): random, rateless, linear codes
Given the number of source symbols k, and a probability distribution on {1, 2, ..., k}, a coded symbol is generated as follows (e.g., k = 8):
Step one: select a degree d with the given probability distribution (e.g., d = 3).
Step two: select a random subset of {1, ..., k} of size d (e.g., {3, 5, 6}, code vector 00101100).
Step three: form the sum of the source symbols with indices in the random set (e.g., form S3 + S5 + S6).
The resulting coded symbol can be transmitted along with its code vector.
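The three steps translate directly into code; here is a small Python sketch (0-based indices; function and variable names are assumptions, not from the slides):

```python
import random

def lt_encode_symbol(source, degree_dist):
    """Generate one LT coded symbol following the three steps above.
    `source` is the list of k source symbols (ints); `degree_dist` maps
    a degree d to its probability."""
    k = len(source)
    # Step one: select a degree d with the given probability distribution.
    degrees, probs = zip(*sorted(degree_dist.items()))
    d = random.choices(degrees, weights=probs)[0]
    # Step two: select a uniformly random subset of the k indices of size d.
    idx = set(random.sample(range(k), d))
    # Step three: form the (XOR) sum of the source symbols in the subset.
    payload = 0
    for i in idx:
        payload ^= source[i]
    # The coded symbol is transmitted along with its code vector (idx).
    return idx, payload
```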

9. Ideal soliton distribution for degree (Luby)
The ideal soliton distribution is rho(1) = 1/k and rho(d) = 1/(d(d-1)) for d = 2, ..., k. Ideally we could recover all k source symbols from k coded symbols.
--The ideal soliton distribution doesn't work well due to stochastic fluctuations. Luby introduced a robust variation (see the LT paper).
--Raptor codes use precoding plus LT coding to help at the end.
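Both distributions are easy to tabulate. The sketch below follows the standard textbook form of the robust soliton (parameters c and delta), which may differ in detail from the exact variant in Luby's paper:

```python
import math

def ideal_soliton(k):
    """Ideal soliton: rho(1) = 1/k, rho(d) = 1/(d(d-1)) for d = 2..k."""
    rho = {1: 1.0 / k}
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    return rho

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust variation, usual textbook form: add a component tau that
    boosts low degrees plus a spike near k/R, then renormalize.
    (c and delta are tuning parameters; see the LT paper for details.)"""
    R = c * math.log(k / delta) * math.sqrt(k)
    spike = int(round(k / R))
    rho, tau = ideal_soliton(k), {}
    for d in range(1, spike):
        tau[d] = R / (d * k)
    tau[spike] = R * math.log(R / delta) / k
    beta = sum(rho.values()) + sum(tau.values())
    return {d: (rho[d] + tau.get(d, 0.0)) / beta for d in range(1, k + 1)}
```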

10. Analysis
Study the Poisson case (the number of received coded symbols is modeled as Poisson). A coded symbol with reduced degree one (which has not been processed yet) is said to be in the gross ripple. Let Xv denote the number of symbols in the gross ripple after v symbols have been decoded. Decoding is successful if and only if the ripple is nonempty through step k-1.
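The ripple process Xv can be simulated directly. The following sketch reuses ideal_soliton from the previous snippet, and for simplicity fixes the number of received coded symbols at 1.1k rather than drawing it as Poisson:

```python
import random

def ripple_trajectory(k, coded):
    """Track X_v, the gross-ripple size (number of coded symbols of reduced
    degree one) after v source symbols have been decoded.  `coded` is a
    list of index sets; payloads do not matter for the ripple size."""
    sets = [set(s) for s in coded]
    decoded = 0
    X = [sum(1 for s in sets if len(s) == 1)]
    while X[-1] > 0 and decoded < k:
        # Decode the symbol released by one ripple member, then reduce
        # every coded symbol that contains it.
        ripple_member = next(s for s in sets if len(s) == 1)
        i = next(iter(ripple_member))
        decoded += 1
        for s in sets:
            s.discard(i)
        X.append(sum(1 for s in sets if len(s) == 1))
    return X   # decoding succeeded iff len(X) - 1 == k

k = 100
degrees, probs = zip(*sorted(ideal_soliton(k).items()))
coded = [set(random.sample(range(k), random.choices(degrees, weights=probs)[0]))
         for _ in range(int(1.1 * k))]
X = ripple_trajectory(k, coded)
print(f"decoded {len(X) - 1} of {k} symbols")
```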

11. Examine arrival process for the gross ripple
[Figure: decoding timeline 1, ..., k around steps v and v+1. A degree-j coded packet joins the gross ripple once j-2 of its indices have been decoded and another of its indices is the next symbol decoded, leaving one specific symbol to be decoded later.]

12. Evolution of gross ripple
[Figure: plot of the fluid limit x_t of the scaled gross-ripple size against t, for t from 0 to 1.]

13. Intermediate performance
An application of the fluid limit (S. Sanghavi's poster, this conference). Let K = # input symbols. If the # of received coded symbols is rK, for r < 1, then the number of inputs recovered is tK, where t is a function of r determined by the fluid limit. S. Sanghavi investigated maximizing t with respect to the degree distribution, for r fixed. [Figure: example plot of t versus r.]
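A Monte Carlo sketch (reusing ripple_trajectory and ideal_soliton above; the trial count and parameters are arbitrary choices of this example) gives an empirical estimate of t for a given r and degree distribution:

```python
import random

def fraction_recovered(k, r, degree_dist, trials=20):
    """Estimate t: the fraction of the k input symbols that greedy
    decoding recovers when only r*k coded symbols are received."""
    degrees, probs = zip(*sorted(degree_dist.items()))
    total = 0
    for _ in range(trials):
        coded = [set(random.sample(range(k),
                                   random.choices(degrees, weights=probs)[0]))
                 for _ in range(int(r * k))]
        total += len(ripple_trajectory(k, coded)) - 1
    return total / (trials * k)

for r in (0.25, 0.5, 0.75, 0.9):
    print(f"r = {r:.2f}: t ~ {fraction_recovered(500, r, ideal_soliton(500)):.2f}")
```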

14. Sujay's bounds
[Figure: two plots of t versus r, showing an upper bound and upper and lower bounds on the recovered fraction.]

15. Next: diffusion analysis
[Figure: the fluid trajectory x_t against t on [0, 1], as on slide 12.]

16. Diffusion limit of gross ripple
[Displayed equations: recalling the gross-ripple process Xv and defining its scaled version and diffusion limit.]

17. Design based on diffusion limit
The diffusion limit result suggests a representation for the ripple process, which in turn suggests guidelines for the degree distribution. [Displayed equations: the representation and the resulting design guidelines.]

18. The net ripple (used by Luby in the LT paper)
The net ripple is the set of input symbols represented in the gross ripple. Let Yv denote the number of symbols in the net ripple after v symbols have been decoded. Decoding is successful if and only if the net ripple is nonempty through step k-1.
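The distinction between the two ripples is multiplicity versus distinctness, as a small sketch makes explicit (names are my own):

```python
def ripple_sizes(sets):
    """Given the current reduced index sets of all coded symbols, return
    (gross, net): gross counts degree-one coded symbols with multiplicity;
    net counts the *distinct* input symbols they cover."""
    degree_one = [s for s in sets if len(s) == 1]
    covered = set().union(*degree_one) if degree_one else set()
    return len(degree_one), len(covered)

# Two coded symbols may cover the same input symbol:
print(ripple_sizes([{3}, {3}, {7}, {1, 2}]))   # -> (3, 2)
```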

19. The free ripple (understanding the fluid approximation)
Suppose that input symbols are revealed one at a time by a genie, independently of the packets received. The degrees of coded symbols are decreased as the genie reveals symbols. The free ripple is the set of coded symbols of reduced degree one in such a system.
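A sketch of the genie-aided system, mirroring ripple_trajectory above but with reveals decoupled from decoding:

```python
import random

def free_ripple_trajectory(k, coded):
    """Genie-aided system: input symbols are revealed in a uniformly random
    order, independently of decoding.  Returns the free-ripple size (number
    of coded symbols of reduced degree one) after each reveal."""
    sets = [set(s) for s in coded]
    X = [sum(1 for s in sets if len(s) == 1)]
    for i in random.sample(range(k), k):   # the genie's reveal order
        for s in sets:
            s.discard(i)
        X.append(sum(1 for s in sets if len(s) == 1))
    return X
```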


21. Conclusions
• Coding at the source, network coding, and peer-to-peer networking (gossip-type algorithms) provide a rich source of network design and analysis problems.
• Iterating design and analysis can lead to interesting tractable problems (as in LT code design).
Thanks!

