
Shannon’s Theorems




  1. Shannon’s Theorems
  First theorem: H(S) ≤ L_n(S^n)/n < H(S) + 1/n, where L_n(S^n) is the average length of a certain code for the n-th extension S^n of the source.
  Second theorem: extends this idea to a channel with errors, allowing one to get arbitrarily close to the channel capacity while simultaneously correcting almost all the errors.
  Proof: it does so without constructing a specific code, relying instead on a random code.
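The first theorem can be checked numerically. Below is a minimal Python sketch (not from the slides; the source S with P(0) = 0.9, P(1) = 0.1 and the choice of a Huffman code for S^n are illustrative assumptions) showing H(S) ≤ L_n(S^n)/n < H(S) + 1/n as n grows.

import heapq, itertools, math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def huffman_lengths(probs):
    # Codeword lengths of an optimal (Huffman) binary code for the given probabilities.
    heap = [(p, i, [i]) for i, p in enumerate(probs)]   # (probability, tie-break, symbols in subtree)
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, t, s2 = heapq.heappop(heap)
        for s in s1 + s2:                               # every symbol in the merged subtree gets one bit deeper
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, t, s1 + s2))
    return lengths

source = {'0': 0.9, '1': 0.1}                           # assumed example source S
H = entropy(source.values())
for n in (1, 2, 4, 8):
    blocks = itertools.product(source, repeat=n)        # symbols of the extension S^n
    probs = [math.prod(source[c] for c in block) for block in blocks]
    L_n = sum(p * l for p, l in zip(probs, huffman_lengths(probs)))
    print(f"n={n}: H(S)={H:.4f} <= L_n/n={L_n / n:.4f} < H(S)+1/n={H + 1/n:.4f}")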
  2. Random Codes
  Send an n-bit block code through a binary symmetric channel: each bit arrives correctly with probability P and is flipped with probability Q, with Q < ½.
  Let A = {a_i : i = 1, …, M} be the M distinct, equiprobable codewords, and B = {b_j : |b_j| = n, j = 1, …, 2^n} the set of possible received n-bit blocks.
  The capacity is C = 1 − H2(Q); intuitively, each block comes through with n·C bits of information. Since the messages are equiprobable, I2(a_i) = log2 M, and to signal close to capacity we want I2(a_i) = n(C − ε) for some small number ε > 0.
  By increasing n, the number of messages that could get through the channel can be made arbitrarily large, so we can choose M to use only a small fraction of that number – redundancy. The excess redundancy gives us the room required to bring the error rate down.
  For a large n, pick M random codewords from {0, 1}^n. (§10.4)
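A minimal sketch of this construction (the values of Q, ε, and n are assumed for illustration): compute C = 1 − H2(Q), take M = 2^(n(C − ε)) codewords, and draw them uniformly at random from {0, 1}^n.

import math, random

def H2(q):
    # Binary entropy function H2(q) = -q*log2(q) - (1-q)*log2(1-q).
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

Q, eps, n = 0.05, 0.05, 25            # assumed illustrative values
C = 1 - H2(Q)                         # capacity of the binary symmetric channel
M = int(2 ** (n * (C - eps)))         # number of codewords for rate C - eps
print(f"C = {C:.4f}, M = {M} codewords out of 2^{n} = {2**n} possible n-bit blocks")

random.seed(0)
codebook = [random.getrandbits(n) for _ in range(M)]   # each codeword is a random n-bit block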
  3. Picture the a_i in n-dimensional Hamming space. As each a_i goes through the channel, we expect nQ errors on average, so by the law of large numbers the received symbol b_j falls, with high probability, within a sphere of radius n(Q + ε′) about the sent symbol a_i.
  Consider such a sphere of radius n(Q + ε′) about each a_i: with high probability, almost all the a_i will be a certain distance apart (provided M « 2^n). Similarly, consider a sphere of the same radius around each received b_j.
  What is the probability that an uncorrectable error occurs? Either there is too much noise (b_j lands outside the sphere about the sent a_i), or another codeword a′ is also inside the sphere about b_j; both probabilities can be made « δ.
  [Figure: sent symbol a_i and received symbol b_j in Hamming space, with spheres of radius nQ + nε′ about each.] (§10.4)
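A small simulation of this picture (n, Q, ε′, and the trial count are assumed): send a block through the binary symmetric channel repeatedly and count how often the received block lands outside the sphere of radius n(Q + ε′) about the sent block.

import random

n, Q, eps_prime, trials = 1000, 0.05, 0.02, 2000   # assumed illustrative values
random.seed(1)
outside = 0
for _ in range(trials):
    d = sum(random.random() < Q for _ in range(n))  # d(a_i, b_j) = number of bits the BSC flips
    if d > n * (Q + eps_prime):
        outside += 1
print(f"fraction of received blocks outside the sphere of radius n(Q + eps'): {outside / trials:.4f}")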
  4. Idea
  Pick the number of code words M to be 2^(n(C−ε)), where C is the channel capacity (the block size n is as yet undetermined and depends on how closely, ε, we wish to approach the channel capacity).
  The number of possible random codes is (2^n)^M = 2^(nM), each equally likely.
  Let P_E be the probability of error averaged over all these random codes. The idea is to show that P_E → 0, i.e. a code chosen at random will, most of the time, probably work!
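A Monte Carlo sketch of this averaging (all parameters assumed; it illustrates the trend, it is not the proof): each trial draws a fresh random code, sends a random codeword through the BSC, and decodes to the nearest codeword. For a rate below capacity the averaged error rate should shrink as n grows.

import random

def hamming(x, y):
    # Hamming distance between two blocks represented as integers.
    return bin(x ^ y).count("1")

def decoding_error(n, M, Q, rng):
    code = [rng.getrandbits(n) for _ in range(M)]        # a fresh random code for this trial
    a = rng.choice(code)                                 # equiprobable message
    noise = sum(1 << i for i in range(n) if rng.random() < Q)
    b = a ^ noise                                        # block received from the BSC
    decoded = min(code, key=lambda c: hamming(c, b))     # minimum-distance decoding
    return decoded != a

rng = random.Random(2)
Q, R, trials = 0.2, 0.1, 1000                            # rate R < C = 1 - H2(0.2) ~ 0.28
for n in (20, 40, 60, 80):
    M = 2 ** round(R * n)                                # M = 2^(Rn) codewords
    errs = sum(decoding_error(n, M, Q, rng) for _ in range(trials))
    print(f"n={n:2d}, M={M:4d}: estimated P_E ~ {errs / trials:.3f}")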
  5. Proof
  Suppose a is what is sent and b what is received. Let X be a 0/1 random variable representing an error in a single bit of the channel, taking the value 0 with probability P and 1 with probability Q. If the error vector is a ⊕ b = (X_1, …, X_n), then d(a, b) = X_1 + … + X_n.
  N.B. Q = E{X}, so by the law of large numbers P{d(a, b) > n(Q + ε′)} → 0 as n → ∞.
  Since Q < ½, pick ε′ so that Q + ε′ < ½. (§10.5)
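The law-of-large-numbers step can be made concrete (Q and ε′ assumed): d(a, b) = X_1 + … + X_n is Binomial(n, Q), so P{d(a, b) > n(Q + ε′)} is an exact binomial tail that vanishes as n grows.

from math import comb

def tail(n, Q, eps_prime):
    # Exact P{d(a, b) > n(Q + eps')} for d(a, b) ~ Binomial(n, Q).
    k0 = int(n * (Q + eps_prime))
    return sum(comb(n, k) * Q**k * (1 - Q)**(n - k) for k in range(k0 + 1, n + 1))

Q, eps_prime = 0.1, 0.05              # assumed illustrative values
for n in (100, 400, 1000):
    print(f"n={n:5d}: P(d(a,b) > n(Q+eps')) = {tail(n, Q, eps_prime):.2e}")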
  6. Since the a′ are randomly (uniformly) distributed throughout the space, the chance that some particular other code word lands too close to the received b (within distance n(Q + ε′)) is
      (volume of the sphere of radius n(Q + ε′)) / (volume of the whole space) = Σ_{k ≤ n(Q+ε′)} C(n, k) / 2^n ≤ 2^(n·H2(Q+ε′)) / 2^n
  by the binomial bound on the sphere volume.
  The chance that any one of the other M − 1 < 2^(n(C−ε)) code words is too close is then at most
      2^(n(C−ε)) · 2^(n(H2(Q+ε′) − 1)) = 2^(n(H2(Q+ε′) − H2(Q) − ε)),
  using C = 1 − H2(Q).
  N.B. e = log2(1/Q − 1) > 0, and H2(Q+ε′) − H2(Q) ≤ ε′·e, so we can choose ε′ with ε′·e < ε; the exponent is then negative and the bound → 0 as n → ∞. (§10.5)
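Both bounds can be checked numerically. A short sketch (Q and ε assumed, ε′ chosen so that ε′·e < ε) comparing the sphere volume with 2^(n·H2(Q+ε′)) and evaluating the union-bound exponent n(H2(Q+ε′) − H2(Q) − ε), which is negative and proportional to n.

from math import comb, log2

def H2(q):
    return -q * log2(q) - (1 - q) * log2(1 - q)

Q, eps = 0.1, 0.05                    # assumed illustrative values
e = log2(1 / Q - 1)                   # e = log2(1/Q - 1) > 0 since Q < 1/2
eps_prime = 0.5 * eps / e             # chosen so that eps' * e < eps
lam = Q + eps_prime                   # sphere radius is lam * n, with lam < 1/2

for n in (100, 400, 1000):
    sphere = sum(comb(n, k) for k in range(int(lam * n) + 1))   # exact sphere volume
    print(f"n={n:5d}: log2(sphere) = {log2(sphere):6.1f} <= n*H2(Q+eps') = {n * H2(lam):6.1f};",
          f"union-bound exponent n*(H2(Q+eps') - H2(Q) - eps) = {n * (H2(lam) - H2(Q) - eps):7.1f}")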