Computational Analogues of Entropy

Boaz Barak, Ronen Shaltiel, Avi Wigderson



Our Objectives:

1. Investigate possible defs for computational Min-Entropy.

2. Check whether computational defs satisfy analogs of statistical properties.

Statistical Min-Entropy

Definition: H(X) ≥ k iff max_x Pr[ X = x ] < 2^(−k)

( X a r.v. over {0,1}^n )

Properties:

  • H(X) ≤ Shannon-Ent(X)

  • H(X) = n iff X ~ U_n

  • H(X,Y) ≥ H(X) (concatenation)

  • If H(X) ≥ k then ∃ (efficient) f s.t. f(X) ~ U_{k/2} (extraction)
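These properties can be checked numerically for an explicit distribution. A minimal sketch (not from the talk; `min_entropy`, `shannon_entropy`, and the toy distribution are my own illustrative names):

```python
import math

def min_entropy(dist):
    """H(X) = -log2 of the heaviest point, so H(X) >= k iff max_x Pr[X=x] <= 2^-k."""
    return -math.log2(max(dist.values()))

def shannon_entropy(dist):
    """Shannon entropy: -sum p * log2 p over the support."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# A toy r.v. over {0,1}^2 with one heavy point
X = {'00': 0.5, '01': 0.25, '10': 0.125, '11': 0.125}
print(min_entropy(X))       # 1.0
print(shannon_entropy(X))   # 1.75 -- min-entropy never exceeds Shannon entropy

# H(X) = n iff X ~ U_n: the uniform distribution attains the maximum
U2 = {s: 0.25 for s in ('00', '01', '10', '11')}
print(min_entropy(U2))      # 2.0
```

The heavy point '00' alone caps the min-entropy at 1 bit even though the Shannon entropy is 1.75 bits, which is why min-entropy is the right measure for extraction.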



Our Contributions

  • Study 3 variants (1 new) of pseudoentropy.

  • Equivalence & separation results for several computational models.

  • Study analogues of IT results.

In this talk:

  • Present the 3 variants.

  • Show 2 results + proof sketches



Review - Pseudorandomness

Def: X is pseudorandom if

max_{D∈C} bias_D(X, U_n) < ε

i.e., X is computationally indistinguishable from U_n

C – class of efficient algorithms (e.g., s-sized circuits)

bias_D(X,Y) = | E[D(X)] − E[D(Y)] |

ε – parameter (in this talk: some constant > 0)
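The bias has a concrete reading for explicitly given distributions. A small sketch (my own toy example; the distinguisher and distributions are illustrative, not from the talk):

```python
def bias(D, X, Y):
    """bias_D(X, Y) = | E[D(X)] - E[D(Y)] |.
    X, Y map strings to probabilities; D is a 0/1 predicate."""
    return abs(sum(p for s, p in X.items() if D(s)) -
               sum(p for s, p in Y.items() if D(s)))

U2 = {s: 0.25 for s in ('00', '01', '10', '11')}
X = {'00': 0.5, '11': 0.5}      # uniform over the two "equal bits" strings
D = lambda s: s[0] == s[1]      # a distinguisher that checks bit equality

print(bias(D, X, U2))           # 0.5: D tells X apart from U_2
```

Here E[D(X)] = 1 and E[D(U_2)] = 1/2, so this X is far from pseudorandom against any class C containing D.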



Defining Pseudoentropy

Def 1 [HILL]: H_HILL(X) ≥ k if

∃Y s.t. H(Y) ≥ k and max_{D∈C} bias_D(X,Y) < ε, i.e.,

min_{H(Y)≥k} max_{D∈C} bias_D(X,Y) < ε

i.e., X is computationally indist. from some Y with ≥ k statistical min-entropy.

Def 2 [Metric]: H_Met(X) ≥ k if

max_{D∈C} min_{H(Y)≥k} bias_D(X,Y) < ε

i.e., ∀ efficient D, X is computationally indist. by D from some Y = Y(D) with ≥ k statistical min-entropy.

Def 3 [Yao]: H_Yao(X) ≥ k if X cannot be efficiently compressed to k−1 bits.

(*Recall: X is pseudorandom if max_{D∈C} bias_D(X, U_n) < ε.)



Defining Pseudoentropy

H_HILL(X) ≥ k if min_{H(Y)≥k} max_{D∈C} bias_D(X,Y) < ε

H_Met(X) ≥ k if max_{D∈C} min_{H(Y)≥k} bias_D(X,Y) < ε

H_Yao(X) ≥ k if X can't be efficiently compressed to k−1 bits.

Claim 1: H(X) ≤ H_HILL(X) ≤ H_Met(X) ≤ H_Yao(X)

Claim 2: For k = n, all 3 defs are equivalent to pseudorandomness.

Claim 3: All 3 defs satisfy the extraction property. [Tre]
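The step H_HILL(X) ≤ H_Met(X) in Claim 1 is just the generic inequality max-min ≤ min-max applied to bias_D(X,Y). A minimal numeric sketch (the table of bias values is made up for illustration):

```python
# Rows: distinguishers D; columns: candidate distributions Y with H(Y) >= k.
# Entry B[d][y] stands for bias_D(X, Y) (made-up values).
B = [[0.9, 0.2, 0.4],
     [0.1, 0.8, 0.3],
     [0.5, 0.6, 0.1]]

# HILL order: the distinguisher D may depend on Y.
min_max = min(max(row[y] for row in B) for y in range(3))
# Metric order: Y may depend on D.
max_min = max(min(row) for row in B)

print(max_min, min_max)   # 0.2 0.4 -- max-min <= min-max always
```

Any ε that beats the min-max quantity also beats the max-min one, so H_HILL(X) ≥ k implies H_Met(X) ≥ k; Thm 1 below shows the converse for poly-sized circuits.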


HILL & Metric Defs are Equivalent

Thm 1: H_HILL(X) = H_Met(X)

(For C = poly-sized circuits, any ε. Tool: the “Min-Max” theorem [vN28].)

Proof: Suppose H_HILL(X) < k, i.e., for every Y with H(Y) ≥ k some D ∈ C achieves bias_D(X,Y) ≥ ε. View this as a zero-sum game: Player 1 picks a distinguisher D, Player 2 picks a distribution Y with H(Y) ≥ k, and the payoff to Player 1 is bias_D(X,Y). By the Min-Max theorem the order of the players can be swapped: a single (mixed) strategy D achieves bias_D(X,Y) ≥ ε against every such Y, hence H_Met(X) < k.

H_HILL(X) ≥ k if min_{H(Y)≥k} max_{D∈C} bias_D(X,Y) < ε

H_Met(X) ≥ k if max_{D∈C} min_{H(Y)≥k} bias_D(X,Y) < ε



Can we do better?

Unpredictability & Entropy

Thm [Yao]: If X is unpredictable with advantage ε, then X is pseudorandom with parameter ε′ = n·ε.

Loss of a factor of n due to the hybrid argument – useless for constant advantage ε.

This loss can be crucial for some applications (e.g., extractors, derandomizing small-space algorithms).
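The factor-n loss is easy to see numerically (a sketch; the bound ε′ = n·ε is from the theorem above, the concrete numbers are made up):

```python
n = 64        # input length
eps = 0.1     # constant prediction advantage

# Hybrid argument: distinguishing bias is bounded only by n * eps.
eps_prime = n * eps
print(eps_prime)   # 6.4 -- vacuous, since any bias is at most 1
```

For constant ε the bound exceeds 1 as soon as n > 1/ε, so the theorem gives no information; this is the gap Thm 2 below addresses.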



Unpredictability & Entropy

IT Fact [TZS]: If X is IT-unpredictable with constant advantage, then H(X) = Ω(n).

We obtain the following imperfect analog:

Thm 2: If X is unpredictable by SAT-gate circuits with constant advantage, then H_Met(X) = Ω(n).

In the paper: a variant of Thm 2 for nonuniform online logspace.


(Figure: {0,1}^n with the accepting set of D containing the support of X.)

Thm 2: If X is unpredictable by SAT-gate circuits with constant advantage, then H_Met(X) = Ω(n).

Proof: Suppose that H_Met(X) < δn. We'll construct a SAT-gate predictor P s.t.

Pr_{i,X}[ P(X_1,…,X_{i−1}) = X_i ] = 1 − δ

We have that max_{D∈C} min_{H(Y)≥δn} bias_D(X,Y) ≥ ε,

i.e., ∃D s.t. ∀Y: if H(Y) ≥ δn then bias_D(X,Y) ≥ ε.

Assume*: 1) |D⁻¹(1)| < 2^(δn)  2) Pr_X[ D(X) = 1 ] = 1



Construct P from D

1) |D⁻¹(1)| < 2^(δn)  2) Pr_X[ D(X) = 1 ] = 1

Define predictor P as follows: P(x_1,…,x_i) = 0 iff Pr[ D(x_1,…,x_i,0,U_{n−i−1}) = 1 ] > ½

Note that P does not depend on X and can be constructed with an NP oracle (approximate counting [JVV]).

Claim: ∀x ∈ D⁻¹(1), P predicts at least (1−δ)n indices of x.
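A brute-force sketch of the construction (my own toy D, not from the talk; the real P uses approximate counting with SAT gates [JVV], and I read the threshold rule as predicting the bit whose branch contains more accepted completions):

```python
from itertools import product

n = 4
# Hypothetical distinguisher: D accepts exactly the strings starting 00,
# so |D^-1(1)| = 4.
accepted = {x for x in product((0, 1), repeat=n) if x[:2] == (0, 0)}

def count_accepted(prefix):
    """Accepted completions of prefix, counted by brute force
    (the talk uses approximate counting with an NP oracle instead)."""
    return sum((prefix + rest) in accepted
               for rest in product((0, 1), repeat=n - len(prefix)))

def P(prefix):
    """Predict the next bit whose subtree holds more accepted strings."""
    return 0 if count_accepted(prefix + (0,)) > count_accepted(prefix + (1,)) else 1

print(P(()), P((0, 0)))   # 0 1: forced first bit vs. a free bit (tie -> 1)

# Claim check: for every x accepted by D, if P misses x on m indices
# then |D^-1(1)| >= 2^m (each miss at least doubles the accepted count).
for x in accepted:
    m = sum(P(x[:i]) != x[i] for i in range(n))
    assert 2 ** m <= len(accepted)
```

The assertion mirrors the counting argument on the next slide: a miss at index i means the sibling subtree holds at least as many accepted strings as x's own, so the accepted count doubles on the way up from the leaf x.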


Claim: ∀x ∈ D⁻¹(1), P predicts at least (1−δ)n indices of x.

Proof: Suppose P fails to predict x in m indices. We'll show that |D⁻¹(1)| > 2^m, obtaining a contradiction: since Pr_X[ D(X) = 1 ] = 1, failing on m > δn indices would force |D⁻¹(1)| > 2^(δn).

(Figure: the binary tree of prefixes of x; the count of accepted strings in a subtree at least doubles at each mispredicted index – 1 at the leaf x, then ≥2, ≥2, ≥4, ≥4, ≥8, …, ≥2^m at the root.)

P(x_1,…,x_i) = 0 iff Pr[ D(x_1,…,x_i,0,U_{n−i−1}) = 1 ] > ½



Open Problems

More results for poly-time computation:

  • Analog of Thm 2 (unpredictability entropy)?

  • Meaningful concatenation property?

  • Separate Yao & Metric pseudoentropy.

Prove that RL=L
