Introduction to Quantum Information ProcessingQIC 710 / CS 667 / PH 767 / CO 681 / AM 871

Lecture 16 (2011)

Richard Cleve

DC 2117

[email protected]



Distinguishing between two arbitrary quantum states



Holevo-Helstrom Theorem (1)

Theorem: for any two quantum states $\rho$ and $\sigma$, the optimal measurement procedure for distinguishing between them succeeds with probability $\frac12 + \frac14\|\rho - \sigma\|_{\mathrm{tr}}$ (with equal prior probabilities)

Proof* (the attainability part):

Since $\rho - \sigma$ is Hermitian, its eigenvalues are real

Let $\Pi_+$ be the projector onto the positive eigenspace

Let $\Pi_-$ be the projector onto the non-positive eigenspace

Take the POVM measurement specified by $\Pi_+$ and $\Pi_-$, with the associations $\Pi_+ \Rightarrow \rho$ and $\Pi_- \Rightarrow \sigma$ (i.e., on outcome $\Pi_+$ guess $\rho$; on outcome $\Pi_-$ guess $\sigma$)

* The other direction of the theorem (optimality) is omitted here



Holevo-Helstrom Theorem (2)

Claim: this succeeds with probability $\frac12 + \frac14\|\rho - \sigma\|_{\mathrm{tr}}$

Proof of Claim:

A key observation is $\mathrm{Tr}\big[(\Pi_+ - \Pi_-)(\rho - \sigma)\big] = \|\rho - \sigma\|_{\mathrm{tr}}$

The success probability is $p_s = \frac12\mathrm{Tr}(\Pi_+\rho) + \frac12\mathrm{Tr}(\Pi_-\sigma)$

and the failure probability is $p_f = \frac12\mathrm{Tr}(\Pi_+\sigma) + \frac12\mathrm{Tr}(\Pi_-\rho)$

Therefore, $p_s - p_f = \frac12\mathrm{Tr}\big[(\Pi_+ - \Pi_-)(\rho - \sigma)\big] = \frac12\|\rho - \sigma\|_{\mathrm{tr}}$

Since $p_s + p_f = 1$, the result $p_s = \frac12 + \frac14\|\rho - \sigma\|_{\mathrm{tr}}$ follows
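A minimal numeric sketch of this measurement, assuming numpy (the states $|0\rangle$ and $|+\rangle$ and the helper name `density` are illustrative choices, not from the slides):

```python
import numpy as np

def density(ket):
    """Rank-1 density matrix |psi><psi| from a state vector."""
    ket = ket / np.linalg.norm(ket)
    return np.outer(ket, ket.conj())

rho   = density(np.array([1.0, 0.0]))               # |0>
sigma = density(np.array([1.0, 1.0]) / np.sqrt(2))  # |+>

# Diagonalize rho - sigma; its eigenvalues are real since it is Hermitian.
evals, evecs = np.linalg.eigh(rho - sigma)
P_plus  = evecs[:, evals > 0] @ evecs[:, evals > 0].conj().T    # Pi_+
P_minus = evecs[:, evals <= 0] @ evecs[:, evals <= 0].conj().T  # Pi_-

# Success probability of {Pi_+, Pi_-} with equal priors, versus the
# Holevo-Helstrom value 1/2 + (1/4)||rho - sigma||_tr.
p_success  = 0.5 * np.trace(P_plus @ rho).real + 0.5 * np.trace(P_minus @ sigma).real
trace_norm = np.abs(evals).sum()
print(p_success, 0.5 + 0.25 * trace_norm)   # both ~0.8536 for |0> vs |+>
```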



Purifications & Uhlmann's Theorem

Any density matrix $\rho$ can be obtained by tracing out part of some larger pure state: if $\rho = \sum_j p_j|\varphi_j\rangle\langle\varphi_j|$, then $|\psi\rangle = \sum_j \sqrt{p_j}\,|\varphi_j\rangle|j\rangle$ satisfies $\mathrm{Tr}_2|\psi\rangle\langle\psi| = \rho$

Such a $|\psi\rangle$ is called a purification of $\rho$

Uhlmann's Theorem*: the fidelity between $\rho$ and $\sigma$ is the maximum of $|\langle\varphi|\psi\rangle|$, taken over all purifications $|\psi\rangle$ of $\rho$ and $|\varphi\rangle$ of $\sigma$

* See [Nielsen & Chuang, pp. 410-411] for a proof of this

Recall our previous definition of fidelity as

$F(\rho,\sigma) = \mathrm{Tr}\sqrt{\rho^{1/2}\sigma\,\rho^{1/2}} = \|\rho^{1/2}\sigma^{1/2}\|_{\mathrm{tr}}$



Relationships between fidelity and trace distance

$1 - F(\rho,\sigma) \;\le\; \frac12\|\rho - \sigma\|_{\mathrm{tr}} \;\le\; \sqrt{1 - F(\rho,\sigma)^2}$

See [Nielsen & Chuang, pp. 415-416] for more details
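A small numeric sanity check of these inequalities, assuming numpy and scipy (the two states are arbitrary illustrative choices):

```python
import numpy as np
from scipy.linalg import sqrtm

rho   = np.array([[0.75, 0.25], [0.25, 0.25]])  # a valid 1-qubit density matrix
sigma = np.eye(2) / 2                           # maximally mixed state

# F(rho, sigma) = Tr sqrt(rho^{1/2} sigma rho^{1/2})
r = sqrtm(rho)
fid = np.trace(sqrtm(r @ sigma @ r)).real

# (1/2)||rho - sigma||_tr = half the sum of |eigenvalues| of rho - sigma
half_tr = 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

print(1 - fid, half_tr, np.sqrt(1 - fid**2))  # should be non-decreasing
assert 1 - fid <= half_tr + 1e-9 and half_tr <= np.sqrt(1 - fid**2) + 1e-9
```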



Entropy and compression



Shannon Entropy

Let $p = (p_1,\dots,p_d)$ be a probability distribution on the set $\{1,\dots,d\}$

Then the (Shannon) entropy of $p$ is $H(p_1,\dots,p_d) = \sum_{j=1}^d p_j \log(1/p_j)$

Intuitively, this turns out to be a good measure of “how random” the distribution p is:

[Figure: bar charts comparing example distributions, ranging from uniform, where $H(p) = \log d$, to concentrated on a single outcome, where $H(p) = 0$]

Operationally, H(p) is the number of bits needed to store the outcome (in a sense that will be made formal shortly)
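A minimal sketch of $H(p)$ in code, assuming numpy (the helper name `shannon_entropy` is an illustrative choice):

```python
import numpy as np

def shannon_entropy(p):
    """H(p) = sum_j p_j log2(1/p_j), with the convention 0 log 0 = 0."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())

print(shannon_entropy([0.25] * 4))     # uniform on d = 4: log2(4) = 2 bits
print(shannon_entropy([1, 0, 0, 0]))   # concentrated on one outcome: 0 bits
```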



Von Neumann Entropy

For a density matrix $\rho$, it turns out that $S(\rho) = -\mathrm{Tr}(\rho\log\rho)$ is a good quantum analog of entropy

Note: $S(\rho) = H(p_1,\dots,p_d)$, where $p_1,\dots,p_d$ are the eigenvalues of $\rho$ (with multiplicity)

Operationally, $S(\rho)$ is the number of qubits needed to store $\rho$ (in a sense that will be made formal later on)

  • Both the classical and quantum compression results pertain to the case of large blocks of n independent instances of data (see the sketch after this list):

  • the product distribution $p^n$ in the classical case, and

  • the quantum state $\rho^{\otimes n}$ in the quantum case
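A minimal sketch of $S(\rho)$, assuming numpy and the `shannon_entropy` helper from the previous sketch; the example state reappears later in this lecture:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho) = H(eigenvalues of rho)."""
    evals = np.linalg.eigvalsh(rho)
    return shannon_entropy(evals)   # helper from the previous sketch

# Example used later in this lecture: rho = (1/2)|0><0| + (1/2)|+><+|
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho  = 0.5 * np.diag([1.0, 0.0]) + 0.5 * np.outer(plus, plus)
print(von_neumann_entropy(rho))     # ~0.6009 qubits per copy
```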



Classical compression (1)

Let $p = (p_1,\dots,p_d)$ be a probability distribution on the set $\{1,\dots,d\}$, from which n independent instances are sampled:

$(j_1,\dots,j_n) \in \{1,\dots,d\}^n$ (there are $d^n$ possibilities, so $n\log d$ bits are needed to specify one)

Theorem*: for all $\varepsilon > 0$, for sufficiently large n, there is a scheme that compresses the specification to $n(H(p) + \varepsilon)$ bits while introducing an error with probability at most $\varepsilon$

Intuitively, there is a subset of $\{1,\dots,d\}^n$, called the "typical sequences", that has size $2^{n(H(p)+\varepsilon)}$ and probability $1 - \varepsilon$

A nice way to prove the theorem is based on two cleverly defined random variables …

* A "plain vanilla" version that ignores, for example, the tradeoffs between $n$ and $\varepsilon$



Classical compression (2)

Define the random variable $f:\{1,\dots,d\} \to \mathbb{R}$ as $f(j) = -\log p_j$

Note that $E[f] = \sum_{j=1}^d p_j(-\log p_j)$

Thus $E[f] = H(p)$

Define $g:\{1,\dots,d\}^n \to \mathbb{R}$ as $g(j_1,\dots,j_n) = \frac{1}{n}\sum_{i=1}^n f(j_i)$, so $E[g] = H(p)$

Also, $g(j_1,\dots,j_n) = -\frac{1}{n}\log(p_{j_1}p_{j_2}\cdots p_{j_n})$
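A quick empirical illustration that $g$ concentrates around $H(p)$, assuming numpy (the distribution $p$ is a toy choice):

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.25, 0.25])
H = float(-(p * np.log2(p)).sum())           # H(p) = 1.5 bits exactly

for n in [10, 100, 10_000]:
    js = rng.choice(len(p), size=n, p=p)     # n independent samples (j_1,...,j_n)
    g  = -np.log2(p[js]).sum() / n           # g(j_1,...,j_n) = (1/n) sum_i f(j_i)
    print(n, g)                              # approaches H(p) = 1.5 as n grows
```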



Classical compression (3)

By standard results in statistics, as $n \to \infty$, the observed value of $g(j_1,\dots,j_n)$ approaches its expected value, $H(p)$

More formally, call $(j_1,\dots,j_n) \in \{1,\dots,d\}^n$ $\varepsilon$-typical if $|g(j_1,\dots,j_n) - H(p)| \le \varepsilon$

Then the result is that, for all $\varepsilon > 0$, for sufficiently large n,

$\Pr[(j_1,\dots,j_n) \text{ is } \varepsilon\text{-typical}] \ge 1 - \varepsilon$

  • We can also bound the number of these $\varepsilon$-typical sequences:

  • By definition, each such sequence has probability $\ge 2^{-n(H(p)+\varepsilon)}$

  • Therefore, there can be at most $2^{n(H(p)+\varepsilon)}$ such sequences



Classical compression (4)

In summary, the compression procedure is as follows:

The input data is $(j_1,\dots,j_n) \in \{1,\dots,d\}^n$, with each coordinate independently sampled according to the probability distribution $p = (p_1,\dots,p_d)$

The compression procedure is to leave $(j_1,\dots,j_n)$ intact if it is $\varepsilon$-typical and otherwise change it to some fixed $\varepsilon$-typical sequence, say $(j,\dots,j)$ (which will result in an error)

Since there are at most $2^{n(H(p)+\varepsilon)}$ $\varepsilon$-typical sequences, the data can then be converted into $n(H(p)+\varepsilon)$ bits

The error probability is at most $\varepsilon$, the probability of an atypical input arising
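A toy sketch of the whole procedure, assuming numpy (the distribution, $n$, and $\varepsilon$ are small illustrative choices, so the asymptotic guarantees are only loosely visible here):

```python
import numpy as np
from itertools import product

p = np.array([0.7, 0.2, 0.1])            # toy distribution on {0, 1, 2}
H = float(-(p * np.log2(p)).sum())       # H(p) ~ 1.157 bits
n, eps = 10, 0.3                         # small n; the guarantees are asymptotic

def is_typical(js):
    # eps-typical iff |-(1/n) log2(p_{j1} ... p_{jn}) - H(p)| <= eps
    return abs(-np.log2(p[list(js)]).sum() / n - H) <= eps

typical = [js for js in product(range(3), repeat=n) if is_typical(js)]
prob_typical = sum(float(np.prod(p[list(js)])) for js in typical)

print(len(typical), 2 ** (n * (H + eps)))   # count of typical seqs <= 2^{n(H+eps)}
print(int(np.ceil(np.log2(len(typical)))))  # bits to index them: ~ n(H(p)+eps)
print(1 - prob_typical)                     # error probability (-> 0 for large n)
```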



Quantum compression (1)

The scenario: n independent instances of a d-dimensional state are randomly generated according to some distribution:

φ1 prob.p1

  

φr  prob.pr

0 prob.½

+ prob.½

Example:

Goal: to “compress” this into as few qubits as possible so that the original state can be reconstructed with small error in the following sense …

  • $\varepsilon$-good:

  • No procedure can distinguish between these two states

    • compressing and then uncompressing the data

    • the original data left as is

  • with probability more than $\frac12 + \frac14\varepsilon$



Quantum compression (2)

Theorem: for all $\varepsilon > 0$, for sufficiently large n, there is a scheme that compresses the data to $n(S(\rho) + \varepsilon)$ qubits and is $2\sqrt{\varepsilon}$-good

Express $\rho$ in its eigenbasis as $\rho = \sum_{j=1}^d q_j|\varphi_j\rangle\langle\varphi_j|$

Define $q = (q_1,\dots,q_d)$, so that $S(\rho) = H(q)$

For the aforementioned example, roughly $0.6n$ qubits suffice (since $S(\rho) \approx 0.6009$)

The compression method: with respect to this basis, we will define an $\varepsilon$-typical subspace of dimension $2^{n(S(\rho)+\varepsilon)} = 2^{n(H(q)+\varepsilon)}$



Quantum compression (3)

Define $\Pi_{\mathrm{typ}}$ as the projector onto the $\varepsilon$-typical subspace

The $\varepsilon$-typical subspace is the one spanned by $|\varphi_{j_1}\rangle\otimes\cdots\otimes|\varphi_{j_n}\rangle$, where $(j_1,\dots,j_n)$ is $\varepsilon$-typical with respect to $(q_1,\dots,q_d)$

This is because $\rho$ is the density matrix of the mixture

$|\varphi_1\rangle$ with prob. $q_1$

  ⋮

$|\varphi_d\rangle$ with prob. $q_d$

so $\rho^{\otimes n}$ is the density matrix of n independent such samples

By the same argument as in the classical case, the subspace has dimension at most $2^{n(S(\rho)+\varepsilon)}$ and $\mathrm{Tr}(\Pi_{\mathrm{typ}}\,\rho^{\otimes n}) \ge 1 - \varepsilon$
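A small numeric illustration of these claims for the $|0\rangle/|+\rangle$ example, assuming numpy; $n$ is kept small since $\rho^{\otimes n}$ is $2^n \times 2^n$, so $\mathrm{Tr}(\Pi_{\mathrm{typ}}\,\rho^{\otimes n})$ is still visibly below $1-\varepsilon$ here and only approaches 1 for larger $n$:

```python
import numpy as np
from itertools import product
from functools import reduce

plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho  = 0.5 * np.diag([1.0, 0.0]) + 0.5 * np.outer(plus, plus)
q, V = np.linalg.eigh(rho)               # eigenvalues q_j, eigenvectors |phi_j>
H_q  = float(-(q * np.log2(q)).sum())    # S(rho) = H(q) ~ 0.6009

n, eps = 8, 0.35                         # small n keeps rho^{(x)n} at 2^n x 2^n
rho_n = reduce(np.kron, [rho] * n)

# Pi_typ projects onto span{|phi_j1>...|phi_jn> : (j1,...,jn) eps-typical wrt q}
Pi, dim = np.zeros((2**n, 2**n)), 0
for js in product(range(2), repeat=n):
    if abs(-np.log2(q[list(js)]).sum() / n - H_q) <= eps:
        v = reduce(np.kron, [V[:, j] for j in js])
        Pi += np.outer(v, v)
        dim += 1

# dim <= 2^{n(H(q)+eps)}; Tr(Pi_typ rho^{(x)n}) is below 1 - eps at this small n
# but approaches 1 as n grows, as the theorem requires.
print(dim, 2 ** (n * (H_q + eps)), np.trace(Pi @ rho_n).real)
```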



Quantum compression (4)

Proof that the scheme is $2\sqrt{\varepsilon}$-good:

(This is to be inserted)

