Information theory

INFORMATION THEORY

  • Communication theory deals with systems for transmitting information from one point to another.

  • Information theory was born with the discovery of the fundamental laws of data compression and transmission.

INFORMATION THEORY



  • Introduction

    Information theory answers two fundamental questions:

  • What is the ultimate data compression?

    Answer: The Entropy H.

  • What is the ultimate transmission rate?

    Answer: Channel Capacity C.

    But its reach extends beyond communication theory. In the early days it was thought that increasing the transmission rate over a channel increases the error rate. Shannon showed that this is not true as long as the rate is below the channel capacity.

    Shannon further showed that random processes have an irreducible complexity below which they cannot be compressed.

INFORMATION THEORY



Information Theory (IT) relates to other fields:

  • Computer Science: shortest binary program for computing a string.

  • Probability Theory: fundamental quantities of IT are used to estimate probabilities.

  • Inference: approaches to predicting the digits of pi; inferring the behavior of the stock market.

  • Computation vs. communication: computation is communication-limited and vice versa.

INFORMATION THEORY



Information theory has its beginnings at the start of the 20th century, but it really took off after WW II.

  • Wiener: extracting signals of a known ensemble from noise of a predictable nature.

  • Shannon: encoding messages chosen from a known ensemble so that they can be transmitted accurately and rapidly even in the presence of noise.

    IT: The study of efficient encoding and its consequences in the form of speed of transmission and probability of error.

INFORMATION THEORY



  • Historical Perspective

  • Follows S. Verdú, “Fifty Years of Shannon Theory,” IEEE Trans. Information Theory, vol. 44, Oct. 1998, pp. 2057–2078.

  • Shannon published “ A mathematical theory of communication” in 1948. It lays down fundamental laws of data compression and transmission.

  • Nyquist (1924): transmission rate is proportional to the log of the number of levels in a unit duration.

    - Can transmission rate be improved by replacing Morse by an ‘optimum’ code?

  • Whittaker (1929): lossless interpolation of bandlimited functions.

  • Gabor (1946): time-frequency uncertainty principle.

INFORMATION THEORY



  • Hartley (1928): muses on the physical possibilities of transmission rates.

    - Introduces a quantitative measure for the amount of information associated with n selections of states:

    H = n log s

    where s = number of symbols available in each selection and n = number of selections.

    - Information = outcome of a selection among a finite number of possibilities.
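    A small sketch of Hartley's measure in code (not from the slides; the numbers in the example are hypothetical):

import math

def hartley_information(n_selections: int, s_symbols: int) -> float:
    """Hartley's measure H = n log2 s, in bits."""
    return n_selections * math.log2(s_symbols)

# Example: 10 selections from an alphabet of 26 letters (hypothetical numbers)
print(hartley_information(10, 26))  # ~47.0 bits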

INFORMATION THEORY



  • Data Compression

  • Shannon uses the definition of entropy

    H = − Σi pi log pi

    as a measure of information.

    Rationale: (1) continuous in the probabilities;

    (2) increasing with n for equiprobable r.v.;

    (3) additive – the joint entropy of independent r.v. is equal to the sum of the entropies of the individual r.v.

  • For memoryless sources the entropy satisfies:

    Shannon Theorem 3: Given any ε > 0 and δ > 0, we can find an N0 such that sequences of any length N ≥ N0 fall into two classes:

    (1) A set whose total probability is less than ε.

    (2) The remainder set, all of whose members have probabilities p satisfying | (1/N) log (1/p) − H | < δ.
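    A minimal numerical sketch of Theorem 3 for an assumed memoryless binary source (the parameters are illustrative): for long i.i.d. sequences, −(1/N) log2 p(x) concentrates around the entropy H, so outside a small-probability set all sequences are roughly equiprobable at level 2^(−NH).

import numpy as np

rng = np.random.default_rng(0)
p, N, trials = 0.2, 2000, 5                  # Bernoulli(p) source, block length N (illustrative)
H = -p*np.log2(p) - (1-p)*np.log2(1-p)       # source entropy in bits/symbol

for _ in range(trials):
    x = rng.random(N) < p                    # one source sequence
    k = x.sum()                              # number of ones
    log_prob = k*np.log2(p) + (N-k)*np.log2(1-p)
    print(f"-(1/N) log2 p(x) = {-log_prob/N:.4f}   (H = {H:.4f})")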

INFORMATION THEORY



  • Reliable Communication

  • Shannon: …..redundancy must be introduced to combat the particular noise structure involved … a delay is generally required to approach the ideal encoding.

  • Defines the channel capacity C as the maximum of the mutual information between the channel input and output over all input distributions.

  • It is possible to send information at the rate C through the channel with as small a frequency of errors or equivocation as desired by proper encoding. This statement is not true for any rate greater than C.

  • Defines differential entropy of a continuous random variable as a formal analog to the entropy of a discrete random variable.

  • Shannon obtains the formula for the capacity of the power-constrained white Gaussian channel with a flat transfer function:

    C = W log2 ( 1 + P / (N0 W) ) bits/s,

    where W is the bandwidth, P the signal power, and N0 the noise power spectral density.
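    A small sketch of this capacity formula and of the −1.6 dB limit quoted on the next slide (the bandwidth, power, and noise values are hypothetical):

import math

def awgn_capacity(W_hz: float, P_watts: float, N0: float) -> float:
    """Capacity in bits/s of an ideal band-limited AWGN channel: W log2(1 + P/(N0 W))."""
    return W_hz * math.log2(1.0 + P_watts / (N0 * W_hz))

# Hypothetical numbers: 1 MHz bandwidth, SNR = P/(N0*W) = 15 (about 11.8 dB)
W, N0 = 1e6, 1e-9
print(awgn_capacity(W, 15 * N0 * W, N0))   # 4e6 bits/s exactly, since log2(16) = 4

# Minimum energy per bit: Eb/N0 >= ln 2, i.e. about -1.59 dB (the "-1.6 dB" limit)
print(10 * math.log10(math.log(2)))        # ~ -1.59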

INFORMATION THEORY



  • The minimum energy necessary to transmit one bit of information is 1.6 dB below the noise power spectral density (Eb/N0 ≥ ln 2 ≈ −1.59 dB).

  • Some interesting points about the capacity relation:

    - Since any strictly bandlimited signal has infinite duration, the rate of information of any finite codebook of bandlimited waveforms is equal to zero.

    - Transmitted signals must approximate statistical properties of white noise.

  • Generalization to dispersive/nonwhite Gaussian channels by Shannon’s “water-filling” formula (a sketch is given at the end of this slide).

  • Constraints other than power constraints are of interest:

    - Amplitude constraints

    - Quantized constraints

    - Specific modulations.
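    A minimal sketch of the water-filling allocation mentioned above, for parallel Gaussian sub-channels with assumed noise powers (a simple bisection on the water level, not the slides' derivation):

import numpy as np

def water_filling(noise: np.ndarray, total_power: float) -> np.ndarray:
    """Allocate power P_i = max(mu - N_i, 0) so that sum(P_i) = total_power."""
    lo, hi = 0.0, noise.max() + total_power
    for _ in range(100):                      # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        P = np.maximum(mu - noise, 0.0)
        lo, hi = (mu, hi) if P.sum() < total_power else (lo, mu)
    return np.maximum(0.5 * (lo + hi) - noise, 0.0)

noise = np.array([1.0, 2.0, 4.0, 8.0])        # hypothetical sub-channel noise powers
P = water_filling(noise, total_power=6.0)
print(P, P.sum())                             # more power goes to the quieter channels
print(np.sum(0.5 * np.log2(1 + P / noise)))   # total rate, 0.5*log2(1+P_i/N_i) per real channel use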

INFORMATION THEORY



  • Zero-Error Channel Capacity

  • Example of typing a text: each keystroke has a non-zero probability of error, so the probability of at least one error tends to 1 as the length of the text increases.

  • By designing a code that takes into account the statistics of the typist’s mistakes, the probability of error can be made 0.

  • Example: consider mistakes made by mistyping neighboring letters. The alphabet {b, i, t, s} contains no neighboring letters, hence it can be used with zero probability of error.

  • Zero-error capacity: the rate at which information can be encoded with zero prob. of error.
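    A toy sketch of the zero-error idea: assuming a hypothetical keyboard-adjacency map (not part of the slides), symbols are confusable only with their neighbours, so an alphabet of pairwise non-adjacent letters can be decoded with zero error.

# Hypothetical QWERTY-style adjacency (only a few letters, for illustration)
neighbors = {
    'b': {'v', 'g', 'h', 'n'},
    'i': {'u', 'o', 'j', 'k'},
    't': {'r', 'y', 'f', 'g'},
    's': {'a', 'd', 'w', 'e', 'x', 'z'},
}

alphabet = ['b', 'i', 't', 's']
confusable = any(y in neighbors[x] for x in alphabet for y in alphabet if x != y)
print("pairwise confusable?", confusable)   # False -> zero-error use of this alphabet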

INFORMATION THEORY



  • Error Exponent

  • Rather than focus on the channel capacity, study the error probability (EP) as a function of block length.

  • Exponential decrease of EP as a function of blocklength in Gaussian, discrete memoryless channel.

  • The exponent of the minimum achievable EP is a function of the rate, referred to as the reliability function.

  • An important rate that serves as a lower bound to the reliability function is the cutoff rate.

  • The cutoff rate was long thought to be the “practical” limit to the transmission rate.

  • Turbo codes refuted that notion.

INFORMATION THEORY



  • ERROR CONTROL MECHANISMS

  • Error Control Strategies

  • The goal of ‘error-control’ is to reduce the effect of noise in order to reduce or eliminate transmission errors.

  • ‘Error-Control Coding’ refers to adding redundancy to the data. The redundant symbols are subsequently used to detect or correct erroneous data.

INFORMATION THEORY



  • Error control strategy depends on the channel and on the specific application.

    - Error control for one-way channels is referred to as forward error control (FEC). It can be accomplished by:

    * Error detection and correction – hard detection.

    * Reducing the probability of an error – soft detection

    - For two-way channels: error detection is a simpler task than error correction. Retransmit the data only when an error is detected: automatic repeat request (ARQ).

  • In this course, we focus on wireless data communications, hence we will not delve into error concealment techniques such as interpolation, used in audio and video recording.

  • Error schemes may be priority based, i.e., providing more protection to certain types of data than to others. For example, in wireless cellular standards, the transmitted bits are divided into three classes: bits that get double code protection, bits that get single code protection, and bits that are not protected.

INFORMATION THEORY



  • Block and Convolutional Codes

  • Error control codes can be divided into two large classes: block codes and convolutional codes.

  • Information bits are encoded with an alphabet Q of q distinct symbols.

  • Designers of early digital communication systems tried to improve reliability by increasing power or bandwidth.

INFORMATION THEORY



  • Shannon taught us how to buy performance with a less expensive resource: complexity.

  • Formal definition of a code C: a set of 2^k n-tuples.

  • Encoder: the set of 2^k pairs (m, c), where m is the data word and c is the code word.

  • Linear code: the set of codewords is closed under modulo-2 addition (a small sketch is given at the end of this slide).

  • Error detection and correction correspond to terms in the Fano inequality:

    - Error detection reduces

    - Error correction reduces
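    A small sketch of the linear-code definition above (an example chosen for illustration, not taken from the slides): a single even-parity bit appended to k = 3 data bits gives 2^k codewords, closed under modulo-2 addition, that detect any single error.

from itertools import product

def encode(m):                       # m = data word (tuple of bits), c = codeword
    return m + (sum(m) % 2,)         # append even-parity bit

codewords = {encode(m) for m in product((0, 1), repeat=3)}
print(len(codewords))                # 2^k = 8 codewords

# Linearity: closed under modulo-2 (component-wise) addition
closed = all(tuple(a ^ b for a, b in zip(c1, c2)) in codewords
             for c1 in codewords for c2 in codewords)
print("closed under mod-2 addition:", closed)      # True

# Error detection: any single flipped bit violates the parity check
detects = all(sum(c[:i] + (c[i] ^ 1,) + c[i+1:]) % 2 == 1
              for c in codewords for i in range(4))
print("detects all single errors:", detects)       # True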

INFORMATION THEORY



  • BASIC DEFINITIONS

    Define Entropy, Relative Entropy, Mutual Information

  • Entropy, Mutual Information

    A measure of uncertainty of a random variable.

    Let X be a discrete random variable (r.v.) with alphabet A and probability mass function p(x) = Pr{X = x}.

  • (D1) The entropy H(X) of a discrete r.v. X is defined as

    H(X) = − Σx∈A p(x) log p(x) bits,

    where the log is to the base 2.

  • Comments: (1) Simplest example: the entropy of a fair coin is 1 bit.

    (2) Adding terms of zero probability does not change the entropy (0 log 0 = 0).

    (3) Entropy depends only on the probabilities of X, not on the actual values.

    (4) Entropy is H(X) = E[ log 1/p(X) ].
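    A minimal sketch of (D1) and comment (4) in code (the pmfs below are illustrative):

import math

def entropy(pmf):
    """H(X) = -sum p(x) log2 p(x) = E[log2 1/p(X)]; terms with p = 0 contribute 0."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)

print(entropy([0.5, 0.5]))          # fair coin: 1 bit
print(entropy([0.5, 0.5, 0.0]))     # adding a zero-probability outcome: still 1 bit
print(entropy([0.25] * 4))          # uniform over 4 values: 2 bits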

INFORMATION THEORY



  • Properties of Entropy

  • (P1) H(X) ≥ 0

    since 0 ≤ p(x) ≤ 1 implies log [ 1/p(x) ] ≥ 0.

  • [E] X = 1 with probability p, X = 0 with probability 1 − p.

    H(X) = − p log p − (1 − p) log(1 − p)

    = H(p), the binary entropy function.

INFORMATION THEORY



  • [E] X takes the values a, b, c, d with probabilities 1/2, 1/4, 1/8, 1/8.

    H(X) = − ½ log ½ − ¼ log ¼ − ⅛ log ⅛ − ⅛ log ⅛ = 1.75 bits

  • Another interpretation of entropy

    Use the minimum number of yes/no questions to determine the value of X:

    “Is X = a?” – if no, “Is X = b?” – if no, “Is X = c?” – if no, X must be d.

    The expected number of binary questions is 1·(1/2) + 2·(1/4) + 3·(1/8) + 3·(1/8) = 1.75, which equals H(X).
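    A small numerical check of this example (the question order is the one suggested above): the expected number of yes/no questions equals H(X) = 1.75 bits.

import math

probs = {'a': 1/2, 'b': 1/4, 'c': 1/8, 'd': 1/8}
questions = {'a': 1, 'b': 2, 'c': 3, 'd': 3}   # "Is X=a?", "Is X=b?", "Is X=c?"

H = -sum(p * math.log2(p) for p in probs.values())
E_questions = sum(probs[x] * questions[x] for x in probs)
print(H, E_questions)    # both 1.75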

INFORMATION THEORY



  • (D2) The joint entropy H(X,Y) is defined as

    H(X,Y) = − Σx Σy p(x,y) log p(x,y).

  • (D3) The conditional entropy H(Y|X) is defined as

    H(Y|X) = Σx p(x) H(Y|X = x) = − Σx Σy p(x,y) log p(y|x).
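    A minimal sketch of (D2) and (D3) on an illustrative joint pmf, also checking the chain rule H(X,Y) = H(X) + H(Y|X):

import numpy as np

p_xy = np.array([[0.25, 0.25],      # rows: x, columns: y (illustrative joint pmf)
                 [0.40, 0.10]])

def H(p):                           # entropy of any array of probabilities
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_x = p_xy.sum(axis=1)
H_xy = H(p_xy)                                       # joint entropy H(X,Y)
H_y_given_x = H_xy - H(p_x)                          # H(Y|X) via the chain rule
# direct computation of H(Y|X) = -sum p(x,y) log2 p(y|x)
H_y_given_x_direct = -np.sum(p_xy * np.log2(p_xy / p_x[:, None]))
print(H_xy, H_y_given_x, H_y_given_x_direct)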

INFORMATION THEORY



  • (P2) Chain rule: H(X,Y) = H(X) + H(Y|X).

    Entropy: a measure of the uncertainty of a r.v.; the amount of information required on the average to describe the r.v.

    Relative entropy: a measure of the distance between two distributions.

INFORMATION THEORY



  • (D4) The relative entropy or Kullback–Leibler distance between two probability mass functions p(x) and q(x) is defined as

    D(p||q) = Σx p(x) log [ p(x) / q(x) ].

    The relative entropy is 0 iff p = q.

    Mutual information: a measure of the amount of information one r.v. contains about another r.v.

  • (D5) Given two r.v. X and Y with joint distribution p(x,y) and marginal distributions p(x), p(y), the mutual information is the relative entropy between the joint distribution p(x,y) and the product distribution p(x)p(y):

    I(X;Y) = D( p(x,y) || p(x)p(y) ) = Σx Σy p(x,y) log [ p(x,y) / ( p(x)p(y) ) ].
INFORMATION THEORY



  • (E)

    In general

  • Properties of MI:

INFORMATION THEORY



  • (P1) I(X;Y) = H(X) – H(X|Y)

    Interpretation: Mutual Information (MI) is the reduction in the uncertainty of X due to the knowledge of Y.

    X says about Y as much as Y says about X:

  • (P2) I(X;Y) = H(Y) – H(Y|X) = I(Y;X)

  • (P3) I(X;X) = H(X) (knowing X removes all the uncertainty about X)

    Since H(X,Y) = H(X) + H(Y|X) (chain rule), it follows that

    H(Y|X) = H(X,Y) – H(X), hence

  • (P4) I(X;Y) = H(X) + H(Y) – H(X,Y)

INFORMATION THEORY



  • Multiple Variables – Chain Rules

    In this section, some of the results of the previous section are extended to multiple variables.

  • (T1) Chain rule for entropy:

    Let X1, X2, …, Xn be drawn according to p(x1, …, xn). Then

    H(X1, X2, …, Xn) = Σi H(Xi | Xi−1, …, X1).

  • (D6) The conditional mutual information of random variables X and Y given Z is defined by

    I(X;Y|Z) = H(X|Z) – H(X|Y,Z).

INFORMATION THEORY



  • (T2) Chain rule for mutual information:

    I(X1, X2; Y) = I(X1; Y) + I(X2; Y | X1).

    This can be generalized to arbitrary n: I(X1, …, Xn; Y) = Σi I(Xi; Y | Xi−1, …, X1).

INFORMATION THEORY



  • (D7) The conditional relative entropy D( p(y|x) || q(y|x) ) is the relative entropy between the corresponding conditional distributions averaged over x:

    D( p(y|x) || q(y|x) ) = Σx p(x) Σy p(y|x) log [ p(y|x) / q(y|x) ].

  • (T3) Chain rule for relative entropy:

    D( p(x,y) || q(x,y) ) = D( p(x) || q(x) ) + D( p(y|x) || q(y|x) ).

    Proof: write log [ p(x,y)/q(x,y) ] = log [ p(x)/q(x) ] + log [ p(y|x)/q(y|x) ] and take the expectation with respect to p(x,y).

INFORMATION THEORY



  • Jensen’s Inequality

  • (D8) f(x) is convex over an interval (a,b) if

    f( λx1 + (1−λ)x2 ) ≤ λ f(x1) + (1−λ) f(x2) for every x1, x2 in (a,b) and 0 ≤ λ ≤ 1.

    f is strictly convex if the strict inequality holds whenever x1 ≠ x2 and 0 < λ < 1.

  • (D9) f(x) is concave if –f is convex.

    A convex function always lies below any chord (the straight line connecting two points on the curve). Convex functions are very important in IT.

INFORMATION THEORY



Simple results for convex functions:

  • (T4) If the function f has a non-negative second derivative, it is convex.

    Proof:

    Taylor expansion: f(x) = f(x0) + f′(x0)(x − x0) + ½ f″(x*)(x − x0)²,

    where x* lies between x0 and x.

    Let x0 = λx1 + (1−λ)x2 and take x = x1.

    Since the last term in the Taylor expansion is non-negative,

    f(x1) ≥ f(x0) + f′(x0)(x1 − x0) = f(x0) + (1−λ) f′(x0)(x1 − x2).

    Similarly, taking x = x2, f(x2) ≥ f(x0) + λ f′(x0)(x2 − x1).

    Multiplying the first inequality by λ, the second by (1−λ), and adding:

    λ f(x1) + (1−λ) f(x2) ≥ f(x0) = f( λx1 + (1−λ)x2 ).

    The relation meets the definition of a convex function.

INFORMATION THEORY



  • (T5) (Jensen’s Inequality)

    (1) If f is convex and X is a r.v., then E f(X) ≥ f(EX).

    (2) If f is strictly convex and E f(X) = f(EX),

    then X = EX with probability 1, i.e., X is a constant.

    Proof: Let the number of discrete points be 2: x1, x2 with probabilities p1, p2.

    From the definition of convex functions: p1 f(x1) + p2 f(x2) ≥ f( p1 x1 + p2 x2 ).

    Induction: suppose the theorem is true for k − 1 points.

    Let pi′ = pi / (1 − pk) for i = 1, …, k − 1; this makes {pi′} a set of probabilities. Then

    Σi=1..k pi f(xi) = pk f(xk) + (1 − pk) Σi=1..k−1 pi′ f(xi)

    ≥ pk f(xk) + (1 − pk) f( Σi=1..k−1 pi′ xi ) ≥ f( Σi=1..k pi xi ).

    A number of fundamental IT theorems follow from Jensen’s inequality.
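    A quick numerical illustration of (T5) with the convex function f(x) = x², on an illustrative discrete r.v.:

import numpy as np

x = np.array([1.0, 2.0, 6.0])         # values of X (illustrative)
p = np.array([0.5, 0.3, 0.2])         # their probabilities

f = lambda t: t ** 2                  # a (strictly) convex function
Ef, fE = np.sum(p * f(x)), f(np.sum(p * x))
print(Ef, fE, Ef >= fE)               # E f(X) >= f(E X)

x_const = np.array([3.0, 3.0, 3.0])   # a constant r.v. gives equality
print(np.sum(p * f(x_const)) == f(np.sum(p * x_const)))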

INFORMATION THEORY



  • (T6) (Information inequality) For probability mass functions p(x), q(x):

    D(p||q) ≥ 0,

    with equality iff p(x) = q(x) for all x.

    Proof: −D(p||q) = Σx p(x) log [ q(x)/p(x) ] ≤ log Σx p(x) q(x)/p(x) = log Σx q(x) ≤ log 1 = 0,

    using Jensen’s inequality for the concave log.

    If p = q, equality is clearly obtained.

    If equality holds, it means that q(x)/p(x) is constant (since the log is strictly concave), and hence p = q.
INFORMATION THEORY



  • (T7) (Non-negativity of MI) I(X;Y) ≥ 0, with

    I(X;Y) = 0 iff X, Y are independent.

    Proof: Follows from the relation I(X;Y) = D( p(x,y) || p(x)p(y) ) ≥ 0. From the information inequality, the equality holds iff

    p(x,y) = p(x)p(y), i.e., X, Y are independent.

    Let |A| be the number of elements in the set A.

  • (T8) H(X) ≤ log |A|, with equality iff

    X has a uniform distribution over A.

    Proof: Let u(x) = 1/|A| be the uniform distribution. Then

    D(p||u) = Σx p(x) log [ p(x)/u(x) ] = log |A| − H(X) ≥ 0.

    Interpretation: the uniform distribution achieves maximum entropy.

INFORMATION THEORY



  • (T9) (Conditioning reduces entropy) H(X|Y) ≤ H(X), with

    H(X|Y) = H(X) iff X and Y are independent.

    Proof: It follows from 0 ≤ I(X;Y) = H(X) − H(X|Y).

    Interpretation: on the average, knowing about Y can only reduce the uncertainty about X.

    Note that this holds only on the average: for a particular observation, the uncertainty of X may be decreased if Y = 1 is observed and increased if Y = 2 is observed, yet it is decreased on the average.

INFORMATION THEORY



  • (T10) (Independence bound for entropy)

    H(X1, X2, …, Xn) ≤ Σi H(Xi),

    with equality iff the Xi are independent.

    Proof: H(X1, …, Xn) = Σi H(Xi | Xi−1, …, X1)   (chain rule)

    ≤ Σi H(Xi)   (T9: conditioning reduces entropy).

INFORMATION THEORY



  • (T11) (Log-sum inequality) For non-negative numbers a1, …, an and b1, …, bn:

    Σi ai log ( ai/bi ) ≥ ( Σi ai ) log ( Σi ai / Σi bi ),

    with equality iff ai/bi = const.

    Proof: f(t) = t log t is strictly convex since its second

    derivative 1/(t ln 2) > 0 for t > 0; hence, by Jensen’s inequality with weights bi/Σj bj and points ti = ai/bi,

    Σi ( bi/Σj bj ) f( ai/bi ) ≥ f( Σi ( bi/Σj bj )( ai/bi ) ), which is the log-sum inequality.
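    A small numerical check of (T11) on illustrative positive numbers, including the equality case ai/bi = const:

import numpy as np

a = np.array([1.0, 2.0, 3.0])      # illustrative positive numbers
b = np.array([4.0, 1.0, 5.0])

lhs = np.sum(a * np.log2(a / b))
rhs = np.sum(a) * np.log2(np.sum(a) / np.sum(b))
print(lhs, rhs, lhs >= rhs)        # log-sum inequality

c = 2.5 * b                        # a_i / b_i constant -> equality
print(np.isclose(np.sum(c * np.log2(c / b)),
                 np.sum(c) * np.log2(np.sum(c) / np.sum(b))))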

INFORMATION THEORY



  • (T12) (Convexity of relative entropy) D(p||q) is convex in the pair (p,q):

    D( λp1 + (1−λ)p2 || λq1 + (1−λ)q2 ) ≤ λ D(p1||q1) + (1−λ) D(p2||q2).

    Proof: apply the log-sum inequality to each term of the sum.

  • (T13) (Concavity of entropy) H(p) is a concave function of p.

    Proof: H(p) = log |A| − D(p||u), where u is the uniform distribution over A;

    since D is convex, H is concave.

  • (D10) The r.v. X, Y, Z form a Markov chain X → Y → Z if

    p(z|x,y) = p(z|y)

    (Z is conditionally independent of X given Y).

INFORMATION THEORY



  • (T14) (Data processing inequality)

    If X → Y → Z, then

    I(X;Y) ≥ I(X;Z).

    Proof: By the chain rule for mutual information (T2),

    I(X; Y,Z) = I(X;Z) + I(X;Y|Z)

    and also I(X; Y,Z) = I(X;Y) + I(X;Z|Y).

    Since X and Z are independent given Y, I(X;Z|Y) = 0.

    It follows that I(X;Y) = I(X;Z) + I(X;Y|Z) ≥ I(X;Z).

    Equality holds iff I(X;Y|Z) = 0, i.e., X → Z → Y also forms a Markov chain.

    In particular, if Z = g(Y) we have I(X;Y) ≥ I(X; g(Y)):

    a function of the data Y cannot increase the information about X.
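    A small numerical sketch of (T14): build a Markov chain X → Y → Z from an illustrative input pmf and two assumed channel matrices, and check that I(X;Y) ≥ I(X;Z).

import numpy as np

def mutual_info(p_xy):
    """I(X;Y) in bits from a joint pmf matrix p(x,y)."""
    p_x, p_y = p_xy.sum(axis=1, keepdims=True), p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return np.sum(p_xy[mask] * np.log2((p_xy / (p_x * p_y))[mask]))

p_x = np.array([0.3, 0.7])                      # illustrative source pmf
P_y_given_x = np.array([[0.9, 0.1],             # channel X -> Y (rows sum to 1)
                        [0.2, 0.8]])
P_z_given_y = np.array([[0.7, 0.3],             # processing Y -> Z
                        [0.4, 0.6]])

p_xy = p_x[:, None] * P_y_given_x               # joint p(x,y)
p_xz = p_xy @ P_z_given_y                       # joint p(x,z), since p(z|x,y) = p(z|y)
print(mutual_info(p_xy), mutual_info(p_xz))     # I(X;Y) >= I(X;Z)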

INFORMATION THEORY



  • Application – Sufficient Statistic

    Use the data processing inequality to clarify the idea of a sufficient statistic.

  • (D11) A function T(X) is a sufficient statistic relative to the family { fθ(x) }

    if X is independent of θ given T(X), i.e., θ → T(X) → X;

    T(X) provides all the information about θ.

    In general, we have θ → X → T(X):

    { fθ(x) } is a family of distributions, X a sample from a distribution in the family, and

    T(X) a function of the sample.

    Hence, by the data processing inequality, I(θ; T(X)) ≤ I(θ; X).

    For a sufficient statistic, I(θ; T(X)) = I(θ; X), which means that the MI is

    preserved.

INFORMATION THEORY



  • Example

    (1) X1, X2, …, Xn are i.i.d. Bernoulli; the distribution

    parameter is θ = Pr{ Xi = 1 }.

    Define T(X) = Σi Xi, the number of 1’s in the sequence.

    How to show independence of X and θ given T? Show that given T, all sequences with k ones are equally likely, independent of θ:

    Pr{ X = (x1, …, xn) | T = k } = 1 / C(n,k) if Σi xi = k   (C(n,k) = number of sequences with k ones out of n),

    = 0 otherwise.

    Thus θ → T(X) → (X1, …, Xn)

    forms a Markov chain and T is a sufficient statistic.
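    A small numerical check of this example (n and the values of θ are illustrative): given T = k, every sequence with k ones has conditional probability 1/C(n,k), independent of θ.

from itertools import product
from math import comb

def cond_prob_given_T(x, theta):
    """Pr{X = x | T = sum(x)} for i.i.d. Bernoulli(theta) bits."""
    n, k = len(x), sum(x)
    p_x = theta**k * (1 - theta)**(n - k)                # Pr{X = x}
    p_T = comb(n, k) * theta**k * (1 - theta)**(n - k)   # Pr{T = k}
    return p_x / p_T                                     # = 1 / C(n, k), free of theta

n = 4
for theta in (0.2, 0.7):                                 # two illustrative parameter values
    ok = all(abs(cond_prob_given_T(x, theta) - 1 / comb(n, sum(x))) < 1e-12
             for x in product((0, 1), repeat=n))
    print(theta, ok)                                     # True for every theta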

INFORMATION THEORY



  • Fano’s Inequality

    Suppose we know r.v. Y and wish to guess the value of the correlated r.v. X. Intuition says that if H(X|Y) = H(X), knowing Y will not help. Conversely, if H(X|Y) = 0, then X can be estimated with no error. We now consider all the cases in between.

    Let X ~ p(x). Observe Y, related to X through p(y|x). From Y calculate the estimate X̂ = g(Y).

    X → Y → X̂ forms a Markov chain (X̂ is conditionally independent of X given Y). The probability of error is defined as Pe = Pr{ X̂ ≠ X }.
INFORMATION THEORY



  • (T15) Fano’s Inequality

    H(X|Y) ≤ H(Pe) + Pe log ( |A| − 1 ),

    and the weaker inequality

    H(X|Y) ≤ 1 + Pe log |A|,

    where |A| is the set size.

    Proof: Define the error event E = 1 if X̂ ≠ X,

    E = 0 if X̂ = X.

    H(E,X|Y) = H(X|Y) + H(E|X,Y)   (*)

    where H(E|X,Y) = 0:

    the first step is the chain rule; there is no uncertainty about E once X (and hence X̂ = g(Y)) is known.
INFORMATION THEORY



  • Alternative Expansion

    H(E,X|Y) = H(E|Y) + H(X|E,Y) ≤ H(Pe) + H(X|E,Y),   (**)

    since conditioning

    reduces entropy: H(E|Y) ≤ H(E) = H(Pe).

    Also H(X|E,Y) = Pr{E=0} H(X|Y, E=0) + Pr{E=1} H(X|Y, E=1) ≤ (1 − Pe)·0 + Pe log ( |A| − 1 ).

    Given E = 1, H(X|Y, E = 1) is bounded by the log of the number of

    remaining outcomes, log ( |A| − 1 )   (T8).

    From (*) and (**) we get Fano’s inequality.
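    A small numerical check of Fano's inequality on an illustrative joint pmf, using the MAP estimate X̂(y) = argmax_x p(x|y) (an assumption for this sketch):

import numpy as np

p_xy = np.array([[0.30, 0.10, 0.05],     # illustrative joint pmf p(x,y), |A| = 3
                 [0.05, 0.20, 0.05],
                 [0.05, 0.05, 0.15]])

def H(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_y = p_xy.sum(axis=0)
H_x_given_y = H(p_xy) - H(p_y)                      # H(X|Y) = H(X,Y) - H(Y)

x_hat = p_xy.argmax(axis=0)                         # MAP estimate of X from Y
Pe = 1.0 - sum(p_xy[x_hat[y], y] for y in range(3)) # probability of error
H_Pe = H(np.array([Pe, 1 - Pe]))

A = p_xy.shape[0]
print(H_x_given_y, H_Pe + Pe * np.log2(A - 1))      # Fano: left-hand side <= right-hand side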

INFORMATION THEORY

