
### Computational Analogues of Entropy

Boaz Barak, Ronen Shaltiel, Avi Wigderson

1. Investigate possible definitions for computational min-entropy.

2. Check whether the computational definitions satisfy analogues of the statistical properties.

Statistical Min-Entropy

Definition: H(X) ≥ k iff max_x Pr[X = x] < 2^{-k}

(X a random variable over {0,1}^n)

Properties:

- H(X) ≤ Shannon-Ent(X)

- H(X) = n iff X ~ U_n

- H(X,Y) ≥ H(X) (concatenation)

- If H(X) ≥ k then ∃ (efficient) f s.t. f(X) ~ U_{k/2} (extraction)
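As a quick sanity check, the first two statistical properties can be computed directly for a small distribution. The sketch below uses a hypothetical four-point distribution over {0,1}^2 (not from the talk) and plain dictionary representations:

```python
import math

def min_entropy(dist):
    """H(X) = -log2(max_x Pr[X = x])."""
    return -math.log2(max(dist.values()))

def shannon_entropy(dist):
    """Shannon-Ent(X) = -sum_x Pr[X = x] * log2(Pr[X = x])."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical biased distribution over {0,1}^2; heaviest point has mass 2^-1.
X = {"00": 0.5, "01": 0.25, "10": 0.125, "11": 0.125}

print(min_entropy(X))      # 1.0  (= -log2 of the heaviest point)
print(shannon_entropy(X))  # 1.75 (min-entropy <= Shannon entropy, as stated above)
```

Note that min-entropy reaches n = 2 only for the uniform distribution, matching the second property.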

Our Contributions

- Study 3 variants (1 new) of pseudoentropy.
- Equivalence & separation results for several computational models.
- Study analogues of IT results.

In this talk:

- Present the 3 variants.
- Show 2 results + proof sketches

Review - Pseudorandomness

Def: X is pseudorandom if

max_{D∈C} bias_D(X, U_n) < ε

i.e., X is computationally indistinguishable from U_n.

C: a class of efficient algorithms (e.g., s-sized circuits)

bias_D(X,Y) = | E[D(X)] − E[D(Y)] |

ε: a parameter (in this talk, some constant ε > 0)
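For small supports the bias of a fixed distinguisher can be computed exactly. The sketch below uses a hypothetical distinguisher D that simply reads the first bit, and shows it achieves constant bias between U_n and a distribution whose first bit is fixed:

```python
import itertools

def bias(D, X, Y):
    """bias_D(X, Y) = | E[D(X)] - E[D(Y)] |, distributions given as {string: prob}."""
    ex = sum(p * D(x) for x, p in X.items())
    ey = sum(p * D(y) for y, p in Y.items())
    return abs(ex - ey)

n = 3
# Uniform distribution U_n over {0,1}^n.
U = {"".join(b): 1 / 2 ** n for b in itertools.product("01", repeat=n)}
# Hypothetical distribution X: uniform over strings whose first bit is 0.
X = {x: (2 / 2 ** n if x[0] == "0" else 0.0) for x in U}

D = lambda s: int(s[0] == "0")  # distinguisher: the test "first bit is 0"
print(bias(D, X, U))  # 0.5 -- D separates X from U_n with constant bias
```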

Defining Pseudoentropy

Def 1 [HILL]: H_HILL(X) ≥ k if ∃Y s.t. H(Y) ≥ k and max_{D∈C} bias_D(X,Y) < ε; equivalently,

min_{H(Y)≥k} max_{D∈C} bias_D(X,Y) < ε

i.e., X is computationally indistinguishable from some Y with ≥ k statistical min-entropy.

Def 2 (Metric): H_Met(X) ≥ k if

max_{D∈C} min_{H(Y)≥k} bias_D(X,Y) < ε

i.e., ∀ efficient D, X is indistinguishable by D from some Y = Y(D) with ≥ k statistical min-entropy.

Def 3 [Yao]: H_Yao(X) ≥ k if X cannot be efficiently compressed to k−1 bits.
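Yao's compression notion can be illustrated concretely: a distribution supported on only 2^k strings of length n compresses to k bits by indexing into its support. The construction below is a hypothetical toy sketch (it ignores efficiency, which is the whole point of Yao's definition):

```python
import itertools

n, k = 6, 3
# Hypothetical support of size 2^k: k free bits padded with zeros to length n.
support = sorted("".join(b) + "0" * (n - k) for b in itertools.product("01", repeat=k))

def compress(x):
    """Map an n-bit string in the support to its k-bit index."""
    return format(support.index(x), f"0{k}b")

def decompress(c):
    """Invert compress: k-bit index back to the n-bit string."""
    return support[int(c, 2)]

# Every string in the support round-trips through k bits, so H_Yao <= k here.
for x in support:
    assert decompress(compress(x)) == x and len(compress(x)) == k
```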


Claim 1: H(X) ≤ H_HILL(X) ≤ H_Met(X) ≤ H_Yao(X)

Claim 2: For k = n, all 3 defs are equivalent to pseudorandomness.

Claim 3: All 3 defs satisfy the extraction property. [Tre]

HILL & Metric Defs are Equivalent (for C = poly-sized circuits, any constant ε)

Thm 1: H_HILL(X) = H_Met(X)

Proof sketch: H_HILL(X) ≤ H_Met(X) is immediate (max-min ≤ min-max). For the converse, use the Min-Max theorem [vN28]. Suppose H_HILL(X) < k, i.e.,

min_{H(Y)≥k} max_{D∈C} bias_D(X,Y) ≥ ε

View this as a zero-sum game: Player 1 picks a distinguisher D ∈ C, Player 2 picks a distribution Y with H(Y) ≥ k, and Player 1's payoff is bias_D(X,Y). The assumption says Player 1 achieves payoff ≥ ε even when Player 2 moves first. By the Min-Max theorem the order of play can be exchanged: there is a single (randomized) distinguisher D, a convex combination of circuits that can be approximated by a poly-sized circuit, achieving bias_D(X,Y) ≥ ε against every Y with H(Y) ≥ k, i.e.,

max_{D∈C} min_{H(Y)≥k} bias_D(X,Y) ≥ ε

so H_Met(X) < k.
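The inequality max-min ≤ min-max, which makes one direction of the equivalence immediate, can be checked on a toy payoff matrix. The numbers below are hypothetical bias values, not from the talk; von Neumann's theorem says the gap closes once mixed strategies are allowed:

```python
# Rows = distinguishers D, columns = distributions Y, entry = bias_D(X, Y).
# Hypothetical 2x2 payoff matrix for the zero-sum game in the proof sketch.
payoff = [
    [0.9, 0.1],
    [0.2, 0.8],
]

# Player 1 (D) commits first: guaranteed payoff is max over rows of the row minimum.
max_min = max(min(row) for row in payoff)
# Player 2 (Y) commits first: best it can limit Player 1 to is min over columns of the column maximum.
min_max = min(max(row[j] for row in payoff) for j in range(2))

print(max_min, min_max)  # 0.2 0.8 -- with pure strategies there can be a gap
assert max_min <= min_max  # weak duality always holds
```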

Unpredictability & Entropy

Thm [Yao]: If X is unpredictable with advantage ε, then X is pseudorandom with parameter ε' = n·ε.

Loss of a factor of n due to the hybrid argument: useless for constant advantage.

This loss can be crucial for some applications (e.g., extractors, derandomizing small-space algs)

Unpredictability & Entropy

IT Fact [TZS]: If X is IT-unpredictable with constant advantage, then H(X) = Ω(n).

We obtain the following imperfect analog:

Thm 2: If X is unpredictable by SAT-gate circuits with constant advantage, then H_Met(X) = Ω(n).

In paper: A variant of Thm 2 for nonuniform online logspace.


Thm 2: If X is unpredictable by SAT-gate circuits with constant advantage, then H_Met(X) = Ω(n).

Proof: Suppose that H_Met(X) < δn. We'll construct a SAT-gate predictor P s.t.

Pr_{i,X}[ P(X_1,…,X_{i−1}) = X_i ] = 1 − ε

We have that max_{D∈C} min_{H(Y)≥δn} bias_D(X,Y) ≥ ε

i.e., ∃D s.t. ∀Y, if H(Y) ≥ δn then bias_D(X,Y) ≥ ε

Assume for simplicity: 1) |D^{-1}(1)| < 2^{δn}  2) Pr_X[ D(X) = 1 ] = 1


Construct P from D, where 1) |D^{-1}(1)| < 2^{δn} and 2) Pr_X[ D(X) = 1 ] = 1.

Define the predictor P as follows: P(x_1,…,x_i) = 0 iff Pr[ D(x_1,…,x_i,0,U_{n−i−1}) = 1 ] > ½

Note that P does not depend on X and can be constructed with an NP oracle (approximate counting [JVV]).
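A counting variant of this predictor (predicting the branch with more accepted completions, rather than the threshold-½ version above) can be run on a hypothetical toy accepting set S = D^{-1}(1), and illustrates the doubling claim that follows:

```python
n = 4
# Hypothetical small accepting set S = D^{-1}(1); assumption 2 (every sample of X
# is accepted) means we only need to look at strings in S.
S = {"0000", "0011", "0101", "0110"}

def count_completions(prefix):
    """Number of strings in S that extend the given prefix."""
    return sum(1 for s in S if s.startswith(prefix))

def P(prefix):
    """Predict the next bit: choose the branch with more accepted completions."""
    return "0" if count_completions(prefix + "0") >= count_completions(prefix + "1") else "1"

# Counting claim: if P fails on x in m indices, then |S| >= 2^m,
# because each failure means the rejected branch was at least as large.
for x in S:
    m = sum(1 for i in range(n) if P(x[:i]) != x[i])
    assert len(S) >= 2 ** m
```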

Claim: ∀x ∈ D^{-1}(1), P predicts at least (1−δ)n indices of x.

Proof: Suppose P fails to predict x in m indices. We'll show that |D^{-1}(1)| ≥ 2^m, so m > δn would contradict assumption 1. For 0 ≤ i ≤ n, let N_i be the number of strings in D^{-1}(1) extending the prefix x_1,…,x_i; thus N_n = 1 (x itself, since D(x) = 1) and N_{i−1} ≥ N_i at every index. Since P predicts the branch more likely to be accepted by D, at each index i where P fails the rejected branch has at least as many accepted extensions as the branch x takes, so N_{i−1} ≥ 2·N_i. Walking back from i = n gives the chain 1, ≥2, ≥4, ≥8, …, and after m failed indices, |D^{-1}(1)| = N_0 ≥ 2^m.

Open Problems

More results for poly-time computation:

- Analog of Thm 2 (unpredictability ⇒ entropy)?
- Meaningful concatenation property?
- Separate Yao & Metric pseudoentropy.

Prove that RL=L
