
Quantum Channels and their capacities



Presentation Transcript


  1. Quantum Channels and their capacities Graeme Smith, IBM Research 11th Canadian Summer School on Quantum Information

  2. Information Theory • “A Mathematical Theory of Communication”, C.E. Shannon, 1948 • Lies at the intersection of Electrical Engineering, Mathematics, and Computer Science • Concerns the reliable and efficient storage and transmission of information.

  3. Information Theory: Some Hits • Low-density parity-check codes • Cell phones • Source coding: Lempel-Ziv compression (gunzip, winzip, etc.) • Voyager (Reed-Solomon codes)

  4. Quantum Information Theory When we include quantum mechanics (which was there all along!), things get much more interesting: secure communication, entanglement-enhanced communication, sending quantum information, … Capacity, error correction, compression, entropy, …

  5. Outline • Lecture 1: Classical Information Theory • Lecture 2: Quantum Channels and their many capacities • Lecture 3: Advanced Topics --- Additivity, Partial Transpose, LOCC, Gaussian Noise, etc. • Lecture 4: Advanced Topics --- Additivity, Partial Transpose, LOCC, Gaussian Noise, etc.

  6. Example: Flipping a biased coin Let's say we flip n coins. They're independent and identically distributed (i.i.d.): Pr(X_i = 0) = 1-p, Pr(X_i = 1) = p, Pr(X_i = x_i, X_j = x_j) = Pr(X_i = x_i) Pr(X_j = x_j). Outcome: x_1 x_2 … x_n. Q: How many 1's am I likely to get?

  7. Example: Flipping a biased coin Let's say we flip n coins. They're independent and identically distributed (i.i.d.): Pr(X_i = 0) = 1-p, Pr(X_i = 1) = p, Pr(X_i = x_i, X_j = x_j) = Pr(X_i = x_i) Pr(X_j = x_j). Outcome: x_1 x_2 … x_n. Q: How many 1's am I likely to get? A: Around pn, and with very high probability between (p-δ)n and (p+δ)n.
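To see this concentration numerically, here is a minimal Python sketch (not from the slides; the values p = 0.3, δ = 0.02, and the trial count are arbitrary choices) that flips n biased coins many times and checks how often the number of 1's falls within (p ± δ)n:

```python
# Toy illustration of concentration: the fraction of 1's in n i.i.d. biased
# coin flips lands in (p - delta, p + delta) with probability -> 1 as n grows.
# p, delta, and the number of trials are arbitrary, for illustration only.
import random

def count_ones(n, p, rng):
    """Flip n i.i.d. coins with Pr(X_i = 1) = p and count the 1's."""
    return sum(1 for _ in range(n) if rng.random() < p)

rng = random.Random(0)
p, delta, trials = 0.3, 0.02, 200

for n in (100, 1000, 10000, 100000):
    inside = sum(1 for _ in range(trials)
                 if abs(count_ones(n, p, rng) / n - p) < delta)
    print(f"n = {n:6d}: fraction of trials with #1's in (p±δ)n: {inside / trials:.2f}")
```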

  8. Shannon Entropy Flip n i.i.d. coins, Pr(X_i = 0) = 1-p, Pr(X_i = 1) = p. Outcome: x_1…x_n. W.h.p. we get ≈ pn 1's, but how many different configurations are there? There are C(n, pn) = n! / ((pn)! ((1-p)n)!) such strings. Using Stirling's approximation we get log_2 C(n, pn) ≈ nH(p), where H(p) = -p log p - (1-p) log(1-p). So, now, if I want to transmit x_1…x_n, I can just check which typical sequence it is, and report that! Maps n bits to ≈ nH(p) bits. Similar for a larger alphabet: H(X) = -Σ_x p(x) log p(x).
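As a quick sanity check on this counting argument, the sketch below (not part of the lecture; p = 0.2 is an arbitrary example) compares log_2 of the exact number of n-bit strings with pn ones, C(n, pn), against nH(p):

```python
# Check that log2 C(n, pn) ≈ n·H(p): the number of strings with about pn
# ones is roughly 2^{nH(p)}.  The value p = 0.2 is an arbitrary example.
from math import comb, log2

def H(p):
    """Binary Shannon entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

p = 0.2
for n in (100, 1000, 10000):
    k = round(p * n)                 # typical number of 1's
    print(f"n = {n:5d}: log2 C(n,pn) = {log2(comb(n, k)):9.1f},  nH(p) = {n * H(p):9.1f}")
```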

  9. Shannon Entropy Flip n i.i.d. coins, Pr(X_i = x_i) = p(x_i). The string x_1…x_n has probability P(x_1…x_n) = p(x_1)…p(x_n). Typically, -(1/n) log_2 P(x_1…x_n) ≈ H(X), so typical sequences have P(x_1…x_n) ≈ 2^{-nH(X)}. More or less uniform distribution over the ≈ 2^{nH(X)} typical sequences.
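The statement that typical sequences have probability ≈ 2^{-nH(X)} can also be checked numerically; the following sketch (not from the slides; p = 0.25 is an arbitrary choice) samples an i.i.d. binary string and compares -(1/n) log_2 P(x_1…x_n) with H(X):

```python
# Asymptotic equipartition in action: for an i.i.d. sample, the per-letter
# "surprise" -(1/n) log2 P(x_1...x_n) concentrates around H(X).
import random
from math import log2

p = 0.25                                    # Pr(X_i = 1), arbitrary example
H = -p * log2(p) - (1 - p) * log2(1 - p)    # entropy of one letter
rng = random.Random(1)

for n in (100, 1000, 10000):
    xs = [1 if rng.random() < p else 0 for _ in range(n)]
    log_P = sum(log2(p) if x else log2(1 - p) for x in xs)
    print(f"n = {n:5d}: -(1/n) log2 P = {-log_P / n:.4f},  H(X) = {H:.4f}")
```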

  10. Correlations and Mutual Information [Diagram: X → N → Y] H(X) – bits of information transmitted per letter from ensemble X. Noisy channel: p(y|x), so (X,Y) are correlated. How much does Y tell us about X? After I get Y, how much more do you need to tell me so that I know X? Given y, update expectations to p(x|y): only H(X|Y) bits per letter are needed. Can calculate H(X|Y) = -Σ_{x,y} p(x,y) log p(x|y). Savings: H(X) - H(X|Y) = H(X) + H(Y) - H(X,Y) =: I(X;Y).
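As a concrete illustration of the identity I(X;Y) = H(X) + H(Y) - H(X,Y), here is a small sketch (not from the slides; the uniform input and flip probability 0.1 are arbitrary example choices) that computes the mutual information directly from a joint distribution p(x,y):

```python
# Compute H(X), H(Y), H(X,Y) and I(X;Y) = H(X) + H(Y) - H(X,Y) from a joint
# distribution.  Example: uniform X sent through a BSC with flip prob 0.1.
from math import log2

def entropy(probs):
    """Shannon entropy (bits) of an iterable of probabilities."""
    return -sum(p * log2(p) for p in probs if p > 0)

flip = 0.1   # arbitrary example flip probability
p_xy = {(x, y): 0.5 * (flip if x != y else 1 - flip)
        for x in (0, 1) for y in (0, 1)}
p_x = {x: sum(q for (a, _), q in p_xy.items() if a == x) for x in (0, 1)}
p_y = {y: sum(q for (_, b), q in p_xy.items() if b == y) for y in (0, 1)}

I = entropy(p_x.values()) + entropy(p_y.values()) - entropy(p_xy.values())
print(f"H(X) = {entropy(p_x.values()):.4f}, H(Y) = {entropy(p_y.values()):.4f}, "
      f"I(X;Y) = {I:.4f}")   # for this example, I(X;Y) = 1 - H(0.1) ≈ 0.531
```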

  11. Channel Capacity [Diagram: m → Encoder → N (n uses) → Decoder → m' ≈ m] Given n uses of a channel, encode a message m ∈ {1,…,M} to a codeword x^n = (x_1(m),…, x_n(m)). At the output of the channel, use y^n = (y_1,…,y_n) to make a guess, m'. The rate of the code is (1/n) log M. The capacity of the channel, C(N), is defined as the maximum rate you can get with vanishing error probability as n → ∞.

  12. Binary Symmetric Channel [Diagram: X → N → Y] p(0|0) = 1-p, p(1|0) = p, p(0|1) = p, p(1|1) = 1-p.
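A binary symmetric channel is easy to simulate; the sketch below (not from the slides; p = 0.1 and the block length are arbitrary) flips each transmitted bit independently with probability p and checks the empirical flip rate:

```python
# Simulate a binary symmetric channel: each bit is flipped independently
# with probability p.  The flip probability and block length are arbitrary.
import random

def bsc(bits, p, rng):
    """Send a list of bits through a BSC with flip probability p."""
    return [b ^ (1 if rng.random() < p else 0) for b in bits]

rng = random.Random(2)
p = 0.1
x = [rng.randrange(2) for _ in range(100_000)]
y = bsc(x, p, rng)
flips = sum(a != b for a, b in zip(x, y))
print(f"empirical flip rate {flips / len(x):.4f} vs p = {p}")
```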

  13. Capacity of Binary Symmetric Channel Input string x^n = (x_1,…, x_n); 2^n possible outputs.

  14. Capacity of Binary Symmetric Channel Input string x^n = (x_1,…, x_n); 2^n possible outputs; 2^{nH(p)} typical errors.

  15. Capacity of Binary Symmetric Channel 2^n possible outputs; 2^{nH(p)} typical errors. Input strings x_1^n = (x_{11},…, x_{1n}) and x_2^n = (x_{21},…, x_{2n}), each mapped to ≈ 2^{nH(p)} outputs.

  16. Capacity of Binary Symmetric Channel 2^n possible outputs; 2^{nH(p)} typical errors. Input strings x_1^n = (x_{11},…, x_{1n}), x_2^n = (x_{21},…, x_{2n}), …, x_M^n = (x_{M1},…, x_{Mn}), each mapped to ≈ 2^{nH(p)} outputs.

  17. Capacity of Binary Symmetric Channel 2^n possible outputs; 2^{nH(p)} typical errors. Input strings x_1^n = (x_{11},…, x_{1n}), …, x_M^n = (x_{M1},…, x_{Mn}). Each x_m^n gets mapped to ≈ 2^{nH(p)} different outputs. If these sets overlap for different inputs, the decoder will be confused. So, we need M · 2^{nH(p)} ≤ 2^n, which implies (1/n) log M ≤ 1 - H(p). Upper bound on capacity.
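To put numbers to this sphere-packing bound, here is a short sketch (not from the slides; the listed p values are arbitrary) tabulating the upper bound 1 - H(p) on the rate:

```python
# Sphere packing: M · 2^{nH(p)} ≤ 2^n forces the rate (1/n) log2 M ≤ 1 - H(p).
# Tabulate this upper bound for a few flip probabilities (arbitrary examples).
from math import log2

def H(p):
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

for p in (0.01, 0.05, 0.11, 0.25, 0.5):
    print(f"p = {p:4.2f}: rate upper bound 1 - H(p) = {1 - H(p):.3f} bits per use")
```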

  18. Direct Coding Theorem: Achievability of 1-H(p) • Choose 2^{nR} codewords randomly according to X^n (each X_i a 50/50 variable). • x_m^n → y^n. To decode, look at all strings within n(p + δ) bit-flips of y^n, a set of ≈ 2^{n(H(p)+δ)} strings. If this set contains exactly one codeword, decode to that. Otherwise, report error. The decoding sphere is big enough that w.h.p. the correct codeword x_m^n is in there. So, the only source of error is if two codewords are in there. What are the chances of that?

  19. Direct coding theorem: Achievability of 1-H(p) [Figure: the space of all 2^n output strings, with a random codeword and the region it gets mapped to.]

  20. Direct coding theorem: Achievability of 1-H(p) [Figure: within the 2^n output strings in total, the decoding ball around the received string has size 2^{n(H(p) + δ)}.]

  21. Direct coding theorem: Achievability of 1-H(p) Decoding ball of size 2^{n(H(p) + δ)} out of 2^n strings in total. If the code is chosen randomly, what's the chance of another codeword in this ball?

  22. Direct coding theorem: Achievability of 1-H(p) Decoding ball of size 2^{n(H(p) + δ)} out of 2^n strings in total. If the code is chosen randomly, what's the chance of another codeword in this ball? If I choose one more word, the chance is 2^{n(H(p)+δ)} / 2^n = 2^{-n(1 - H(p) - δ)}.

  23. Direct coding theorem: Achievability of 1-H(p) Decoding ball of size 2^{n(H(p) + δ)} out of 2^n strings in total. If the code is chosen randomly, what's the chance of another codeword in this ball? If I choose one more word, the chance is 2^{-n(1 - H(p) - δ)}. Choose 2^{nR} more, and the chance is ≤ 2^{nR} · 2^{-n(1 - H(p) - δ)} = 2^{-n(1 - H(p) - δ - R)}. If R < 1 - H(p) - δ, this → 0 as n → ∞.

  24. Direct coding theorem: Achievability of 1-H(p) If the code is chosen randomly, the chance of another codeword landing in the decoding ball of size 2^{n(H(p) + δ)} is ≤ 2^{nR} · 2^{-n(1 - H(p) - δ)}, which → 0 as n → ∞ whenever R < 1 - H(p) - δ. So, the average probability of decoding error (averaged over codebook choice and codeword) is small. As a result, there must be some codebook with low probability of error (averaged over codewords).
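The union bound at the heart of this argument is easy to evaluate; the sketch below (not from the slides; p = 0.11, δ = 0.01 and R = 0.40 are arbitrary choices with R < 1 - H(p) - δ) shows how fast the bound 2^{nR} · 2^{-n(1-H(p)-δ)} decays with n:

```python
# Random-coding union bound: Pr(some other codeword lands in the decoding
# ball) ≤ 2^{nR} · 2^{-n(1 - H(p) - delta)}, vanishing when R < 1 - H(p) - delta.
# The parameters below are arbitrary illustrative choices.
from math import log2

def H(p):
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

p, delta, R = 0.11, 0.01, 0.40              # note R < 1 - H(p) - delta ≈ 0.49
for n in (100, 500, 1000, 5000):
    exponent = n * (R - (1 - H(p) - delta))  # log2 of the union bound
    print(f"n = {n:5d}: union bound ≈ 2^({exponent:8.1f})")
```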

  25. Low worst-case probability of error Showed: there's a code with rate R such that the average error probability satisfies (1/2^{nR}) Σ_i P_i ≤ ε. Let N_{2ε} = #{i | P_i > 2ε}. Then 2ε · N_{2ε} < Σ_i P_i ≤ 2^{nR} ε, so that N_{2ε} < 2^{nR - 1}. Throw these codewords away. Gives a code with rate R - 1/n and P_i ≤ 2ε for all i.
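A toy numerical version of this expurgation step (not from the slides; the exponentially distributed "error probabilities" are purely synthetic) shows that fewer than half the codewords can exceed twice the average, so discarding them costs at most one bit of rate:

```python
# Expurgation via Markov's inequality: at most half of the codewords can have
# error probability above twice the average, so dropping them leaves at least
# 2^{nR-1} codewords (rate R - 1/n) with worst-case error ≤ 2·average.
# The per-codeword "error probabilities" here are synthetic (exponential).
import random

rng = random.Random(3)
N = 1024                                      # stand-in for 2^{nR} codewords
P = [rng.expovariate(1 / 0.01) for _ in range(N)]
avg = sum(P) / N
kept = [q for q in P if q <= 2 * avg]
print(f"average error ≈ {avg:.4f}")
print(f"kept {len(kept)} of {N} codewords (always more than N/2)")
print(f"worst-case error after expurgation ≤ {2 * avg:.4f}")
```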

  26. Coding theorem: general case [Diagram: X → N → Y] • ≈ 2^{nH(X)} typical x^n. • For typical y^n, ≈ 2^{n(H(X|Y) + δ)} candidate x^n's in the decoding sphere. If there is a unique candidate codeword, report it. • Each sphere of size 2^{n(H(X|Y)+δ)} contains a fraction ≈ 2^{n(H(X|Y)+δ)} / 2^{nH(X)} = 2^{-n(I(X;Y) - δ)} of the typical inputs. • If we choose 2^{nR} codewords, the probability of accidentally falling into the wrong sphere is ≈ 2^{nR} · 2^{-n(I(X;Y) - δ)}. • Okay as long as R < I(X;Y).

  27. Capacity for general channel • We showed that for any input distribution p(x), given p(y|x), we can approach rate R = I(X;Y). By picking the best X, we can achieve C(N) = max_X I(X;Y). This is called the "direct" part of the capacity theorem. • In fact, you can't do any better. Proving there's no way to beat max_X I(X;Y) is called the "converse".
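As a sanity check on C(N) = max_X I(X;Y), the following brute-force sketch (not from the slides; the grid search and the BSC example with flip probability 0.11 are my own illustrative choices) scans over binary input distributions and confirms that for the BSC the maximum is attained at a uniform input, giving 1 - H(p):

```python
# Brute-force capacity of a binary-input, binary-output channel:
# scan Pr(X = 1) = q over a grid and maximize I(X;Y).
# For the BSC with flip prob 0.11 this recovers C = 1 - H(0.11) ≈ 0.500.
from math import log2

def H(probs):
    return -sum(q * log2(q) for q in probs if q > 0)

def mutual_info(q, channel):
    """I(X;Y) for Pr(X=1) = q, with channel[x][y] = p(y|x)."""
    p_x = [1 - q, q]
    p_xy = [p_x[x] * channel[x][y] for x in range(2) for y in range(2)]
    p_y = [p_xy[0] + p_xy[2], p_xy[1] + p_xy[3]]
    return H(p_x) + H(p_y) - H(p_xy)

flip = 0.11
bsc = [[1 - flip, flip], [flip, 1 - flip]]    # p(y|x) for the BSC
best_q, best_I = max(((q / 1000, mutual_info(q / 1000, bsc)) for q in range(1001)),
                     key=lambda t: t[1])
print(f"capacity ≈ {best_I:.4f} at Pr(X=1) = {best_q:.3f}")
```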

  28. Converse For homework, you'll prove two great results about entropy: • H(X,Y) ≤ H(X) + H(Y) • H(Y_1 Y_2 | X_1 X_2) ≤ H(Y_1|X_1) + H(Y_2|X_2) We're going to use the first one to prove that no good code can transmit at a rate better than max_X I(X;Y).

  29. Converse • Choose some code with 2^{nR} strings of n letters. • Let X^n be the uniform distribution on the codewords (codeword i with prob 1/2^{nR}). • Because the channel uses are independent, we get p(y_1…y_n|x_1…x_n) = p(y_1|x_1)…p(y_n|x_n). • As a result, the conditional entropy satisfies H(Y^n|X^n) = ⟨-log p(y_1…y_n|x_1…x_n)⟩ = Σ_i ⟨-log p(y_i|x_i)⟩ = Σ_i H(Y_i|X_i). • Now, I(Y^n;X^n) = H(Y^n) - H(Y^n|X^n) ≤ Σ_i [H(Y_i) - H(Y_i|X_i)] = Σ_i I(Y_i;X_i) ≤ n max_X I(X;Y). • Furthermore, I(Y^n;X^n) = H(X^n) - H(X^n|Y^n) = nR - H(X^n|Y^n) ≤ n max_X I(X;Y). • Finally, since Y^n must suffice to decode X^n, we must have (1/n) H(X^n|Y^n) → 0, which means R ≤ max_X I(X;Y).

  30. Recap • Information theory is concerned with efficient and reliable transmission and storage of data. • We focused on the problem of communication in the presence of noise, which requires error-correcting codes. • Capacity is the maximum rate of communication for a noisy channel, given by max_X I(X;Y). • Two steps to the capacity proof: (1) direct part: show by a randomized argument that there exist codes with low probability of error (the hard part is the error estimates); (2) converse: show there are no codes that do better, using entropy inequalities.

  31. Coming up: Quantum channels and their various capacities.

  32. Homework is due at the beginning of next class. • Fact II is called Jensen's inequality. • No partial credit.
