One-Time Computable Self-Erasing Functions

One-Time Computable Self-Erasing Functions

Stefan Dziembowski

Tomasz Kazana

Daniel Wichs

(accepted to TCC 2011)


Main contribution of this work

We introduce a new model for leakage/tamper resistance.

In our model the adversary is space-bounded.

We present some primitives that are secure in this model.

Applications: password-protected storage, proofs-of-erasure.


How to construct secure digital systems?

[Diagram] CRYPTO: very secure; security is based on well-defined mathematical problems. Its implementation on a MACHINE (PC, smartcard, etc.): not secure!


The problem

[Diagram] CRYPTO is hard to attack; its implementation on a MACHINE (PC, smartcard, etc.) is easy to attack.


Machines cannot be trusted!

1. Information leakage

2. Malicious modifications

(both on the MACHINE: PC, smartcard, etc.)


Relevant scenarios

MACHINES: PCs, specialized hardware, ...

  • malicious software: viruses, trojan horses.

  • side-channel attacks: power consumption, electromagnetic leaks, timing information.


A recent trend in cryptography

Construct protocols that are secure even if they are implemented on machines that are not fully trusted.

[ISW03, GLMMR04, MR04, IPW06, CLW06, Dzi06, DP07, DP08, AGV09, ADW09, Pie09, NS09, SMY09, KV09, FKPR10, DDV10, DPW10, DHLW10, BKKV10, BG10,…]


Main idea of this line of research

To achieve security one assumes that the power of the adversary during the “physical attack” is “limited in some way”.

This assumption should be justified by some physical characteristics of the device.


Examples of assumptions (1/3)

“Probing Attacks” [ISW03]: the adversary can learn the values on up to t wires of a boolean circuit.

Bounded-Retrieval Model / “Memory Attacks” [AGV09]: the adversary learns h(S), where h is a length-shrinking function applied to the secret S.


Examples of assumptions (2/3)

[MR04, DP08, …]: the secret is split into two parts S0 and S1, and the adversary learns h(S0) and h(S1) for length-shrinking functions h.

[FRTV10, DDV10]: the adversary learns h(S) for a length-shrinking, low-complexity function h.


Examples of assumptions (3/3)

[IPSW03]: the adversary can modify up to t wires of a boolean circuit.


One way to look at these efforts:

The trust assumptions on hardware can never be removed completely.

But we can try to reduce them.


General goal

Come up with attack models that are:

  • realistic (i.e., they correspond to real-life adversaries),

  • permissive enough to allow one to construct secure schemes.

(These two requirements are a tradeoff.)

Problem: current models are not strong enough.

Example: in the BRM the adversary is assumed to be passive.


Outline

  • Introduction and motivation

  • Our model

  • One-time computable functions

  • Proofs of erasure

  • Subsequent work and open problems


Our model

We work in the “virus model” (but our techniques may also be used to protect against side-channel attacks).

We assume that the adversary is active: a big external adversary interacts with a small adversary (the “virus”) inside the device, and the virus modifies the internal data.


The model

[Diagram] The big adversary and the small adversary (inside the device) communicate via send/receive; the small adversary has read/write access to the device's memory.


What are the restrictions on interaction?

There is a limit t on the number of bits that the virus can send out to the big adversary.

(This is essentially the assumption used before in the BRM.)


What are the restrictions on malicious modifications?

The virus can modify the contents of the memory arbitrarily. The only restriction is that it is space-bounded.


Outline

  • Introduction and motivation

  • Our model

  • One-time computable functions

  • Proofs of erasure

  • Subsequent work and open problems


Our contribution

In this model we construct a primitive: one-time computable pseudorandom functions

f : keys × messages → ciphertexts

Given a key R and a message M, the output is the ciphertext C = f(R, M).

Informally: “it should be possible to evaluate f(R, M) at most once”.

Normally |R| >> |M| (= |C|).


The “ideal functionality”: given the key R, it answers the first query M with the ciphertext C = f(R, M); any further query M’ is answered with an error.

In our model: the adversary (who can send/receive messages and read/write the memory holding the key R) can only learn one value of f(R, M).


Some more details

The memory contains: A0 (space for the honest scheme), A1 (extra space for the adversary), and the key R.

  • Main idea: design f such that:

  • the computation of f(R, M) twice takes more space than |A0| + |A1| + |R|,

  • but it can be done efficiently once in space |A0| + |R|,

  • hence we can compute f(R, M) exactly once (during this computation we will overwrite R).


Observation

If |A1| ≥ |R|, then the adversary can copy R into A1. Then he can simply run the honest scheme on the “copy of R”. He can obviously do this multiple times.

  • Moral: A1 has to be shorter than R.


A simplifying assumption

In our schemes A0 will be very short. Therefore we can forget about it (and include it in the space for the adversary).

So the memory now consists of A (space for the adversary) and the key R.


An application of this primitive

Password-protected storage:

f – a one-time computable PRF
(Enc, Dec) – a standard symmetric encryption scheme

To encrypt a message M with a password π:

  • select R at random,

  • calculate Z = f(R, π),

  • store (R, C = Enc(Z, M)).

To decrypt, compute Z = f(R, π) and then Dec(Z, C).

Note: computing f(R, π) will overwrite R.
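A minimal Python sketch of the decryption step, assuming some one-time computable PRF `f` (a placeholder callable, not the paper's exact construction) and a toy XOR stream cipher built from SHAKE-256 standing in for (Enc, Dec):

```python
# Toy illustration only: `f` is an assumed one-time computable PRF, and the
# XOR keystream below stands in for a real symmetric scheme (Enc, Dec).
import hashlib

def xor_stream(Z: bytes, data: bytes) -> bytes:
    """XOR `data` with a SHAKE-256 keystream derived from Z (Enc == Dec)."""
    pad = hashlib.shake_256(Z).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, pad))

def decrypt_once(f, R, C: bytes, password: bytes) -> bytes:
    """Recover M from the stored pair (R, C): one attempt, since f consumes R."""
    Z = f(R, password)        # evaluating the PRF overwrites the key R
    return xor_stream(Z, C)   # Dec(Z, C)
```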


Problem

C has to be shorter than R (since the adversary can use part of the space where C is stored as his memory).

A solution: store C on a read-only memory.


Another problem

If an honest user makes a typo then he will not have another try.

We have a solution for this – stay tuned.


Yet another problem

Look again at this procedure:

  • select R at random,

  • calculate Z = f(R, π),

  • store (R, Enc(Z, M)).

Can it be done “locally” on this machine? It looks problematic, since the calculation of Z will destroy R.

  • Solution:

  • select a short seed S at random and store it,

  • set R := PRG(S),

  • calculate Z = f(R, π) (destroying R),

  • recalculate R := PRG(S),

  • erase S,

  • store (R, Enc(Z, M)).
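A minimal Python sketch of this local setup, assuming an arbitrary one-time computable PRF `f` and an encryption function `enc` (both placeholders), with SHAKE-256 playing the role of the PRG; real erasure of S and R cannot be enforced at the Python level, so the erasure here is only symbolic:

```python
import hashlib
import os

def prg(seed: bytes, nbytes: int) -> bytes:
    """Expand a short seed into a long key R (SHAKE-256 as a stand-in PRG)."""
    return hashlib.shake_256(seed).digest(nbytes)

def local_setup(password: bytes, message: bytes, key_len: int, f, enc):
    """Prepare (R, Enc(Z, M)) locally; `f` and `enc` are assumed primitives."""
    S = os.urandom(32)           # select a short seed S at random and store it
    R = prg(S, key_len)          # set R := PRG(S)
    Z = f(R, password)           # calculate Z = f(R, pi)  (this destroys R)
    R = prg(S, key_len)          # recalculate R := PRG(S)
    S = None                     # erase S (symbolic erasure only)
    return R, enc(Z, message)    # store (R, Enc(Z, M))
```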


Can we prove anything in our model?

It seems that we do not have the right tools in complexity theory to prove anything in the plain model.

Our solution: use the random oracle model. Both the small and the big adversary have access to an oracle computing a hash function H.


Using ROM in this context is delicate

Example: let H be a hash function and M a long message.

In the ROM, computing H(M) requires the adversary to store the entire M first. In real life, not necessarily: if H is constructed using Merkle-Damgård, then H(M) can be computed “on the fly”.
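A minimal Python sketch of this point, using hashlib's SHA-256 (a Merkle-Damgård hash); the chunk size and message are purely illustrative:

```python
# With a Merkle-Damgard hash, H(M) can be computed "on the fly": only the
# small internal state is kept in memory, never the whole message M.
import hashlib

def hash_stream(chunks):
    """Hash an iterable of byte chunks without ever storing the full message."""
    h = hashlib.sha256()          # internal state: a constant number of bytes
    for chunk in chunks:          # the long message M arrives piece by piece
        h.update(chunk)           # one compression-function pass per block
    return h.digest()

# Example: a long message streamed in 1 KB pieces.
digest = hash_stream(b"\x00" * 1024 for _ in range(10_000))
```

This is exactly why modeling H as a monolithic random oracle on long inputs would overstate the adversary's memory needs.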


Our solution

Assume that the random oracle works only on messages of small length:

H: {0,1}^cw → {0,1}^w    (for a small c)

Typically c = 2; in this case H is just the compression function. This will be our main building block: it maps two w-bit blocks m, m’ to H(m||m’).


Our functions will always correspond to a graph

[Diagram] Each function f corresponds to a DAG: the input blocks are at the bottom, every internal vertex is labelled with the hash H(m||m’) of its two children m and m’, and the output is at the top.


Our PRF is based on a pyramid graph

[Diagram: a pyramid graph]

the key: R = (R1, R2, R3, R4, R5) and the message M are at the bottom of the pyramid,
every internal vertex is computed with the hash function H(m||m’),
the output (the ciphertext) is the single vertex at the top.
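A minimal Python sketch of a pyramid-graph evaluation; the exact wiring of the message into the graph is an assumption here (the paper's construction may differ in detail), and truncated SHA-256 stands in for the small random oracle H:

```python
# Sketch of f(R, M) over a pyramid graph: the bottom row is derived from the
# key blocks R1..RK and the message M, each higher row hashes adjacent pairs,
# and the single value at the apex is the output. The key is overwritten as
# the computation proceeds (erasure is only symbolic in Python).
import hashlib

W = 32  # block length w in bytes (illustrative choice)

def H(m: bytes, m2: bytes) -> bytes:
    """Building block H: {0,1}^2w -> {0,1}^w, modeled by truncated SHA-256."""
    return hashlib.sha256(m + m2).digest()[:W]

def pyramid_f(R: list, M: bytes) -> bytes:
    row = [H(r, M) for r in R]            # absorb the message into the bottom row
    for i in range(len(R)):
        R[i] = b"\x00" * W                # overwrite the key blocks ("self-erasing")
    while len(row) > 1:                   # climb the pyramid
        row = [H(row[i], row[i + 1]) for i in range(len(row) - 1)]
    return row[0]                         # the ciphertext C = f(R, M)

key = [bytes([i]) * W for i in range(1, 6)]   # R = (R1, ..., R5)
C = pyramid_f(key, b"some message")
```

A space-careful implementation would reuse the storage of R for the successive rows, so that only about |R| memory is ever needed.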


Our theorem (informally)

Setting: the memory contains A (space for the adversary) and the key R; the virus can send out at most t bits and can read/write the memory.

If |A| + t < |R| − ε, then the adversary will never learn both f(R, M) and f(R, M’) for M ≠ M’.


So, how to prove the security?

We use a technique called

graph pebbling

there is a vast literature on this. See e.g.:

John E. Savage. Models of Computation: Exploring the Power of Computing. 1997.

We use techniques introduced in:

Dwork, Naor and Wee, Pebbling and Proofs of Work, CRYPTO 2005.


Graph pebbling

A DAG with “input vertices” at the bottom and “output vertices” at the top.

Intuition: there is a pebble on a vertex v if the corresponding block is in the memory.

In the initial configuration there is a pebble on every input vertex.


The rules of moving the pebbles

  • there are up to B pebbles,

  • if all the children of v carry a pebble, we can put a pebble on v,

  • a pebble can be removed from any vertex.

  • Goal: pebble every output vertex.


Fact [Dwork-Naor-Wee 2005]

Let w be the length of a block.

If the graph corresponding to f cannot be pebbled with T pebbles, then f cannot be computed in memory ≈ wT.


But our model is more complicated…

The adversary can also send data to an external adversary that is not space-bounded.

Our solution: we introduce special red pebbles. The “old” pebbles will be called black.


New rules

  • Rule 1: if there is a black pebble on v, then we can put a red pebble on it.

  • Rule 2: if all the children of v carry a (red or black) pebble, we can put a black pebble on v.

  • Rule 3: if all the children of v carry a red pebble, then we can put a red pebble on it.

  • Rule 4: a black pebble can be removed from any vertex (there is no need to remove the red pebbles).

Definition: a pebble is “heavy” if it is a black pebble, or a red pebble generated by Rule 1.

  • Goal: put a black or red pebble on every output vertex.


The new restriction

We require that at any point of the game the number of heavy pebbles is at most U (where U is some parameter).

Intuition: the only things that cost are:

  • the black pebbles ≈ “memory”,

  • transforming a black pebble into a red one ≈ “communication”.
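A minimal Python sketch of the red/black pebbling game with a heavy-pebble budget; the class, the method names, and the exact accounting of heavy pebbles (each black pebble and each Rule-1 red pebble counted once) are illustrative, not taken verbatim from the paper:

```python
# Red/black pebbling on a DAG given as {vertex: list of children}.
class PebblingGame:
    def __init__(self, children, inputs, U):
        self.children = children          # v -> its children in the DAG
        self.black = set(inputs)          # start: black pebbles on all inputs
        self.red = set()
        self.heavy_red = set()            # red pebbles produced by Rule 1
        self.U = U                        # bound on the number of heavy pebbles

    def _check_budget(self):
        # One plausible accounting: every black pebble and every Rule-1 red pebble is heavy.
        assert len(self.black) + len(self.heavy_red) <= self.U, "too many heavy pebbles"

    def rule1_black_to_red(self, v):      # "send v out": costs like communication
        assert v in self.black
        self.red.add(v)
        self.heavy_red.add(v)
        self._check_budget()

    def rule2_place_black(self, v):       # compute v inside the device
        assert all(c in self.black | self.red for c in self.children[v])
        self.black.add(v)
        self._check_budget()

    def rule3_place_red(self, v):         # free recomputation outside the device
        assert all(c in self.red for c in self.children[v])
        self.red.add(v)

    def rule4_remove_black(self, v):      # forget a block held in memory
        self.black.discard(v)
```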


Fact that we prove

Let w be the length of a block.

If the graph corresponding to f cannot be pebbled with U heavy pebbles, then f cannot be computed when the sum of the memory size and the number of sent bits is ≈ wU.


Now, recall what we want to prove

Recall the setting: the memory contains A (space for the adversary) and the key R; the virus can send out at most t bits and can read/write the memory.

If |A| + t < |R| − ε, then the adversary will never learn both f(R, M) and f(R, M’) for M ≠ M’.

How does this translate into “pebbling”?


The pebbling problem

[Diagram: two pyramid graphs, one for M and one for M’, sharing the input row R1, ..., RK]

It is impossible to pebble both outputs with fewer than 2K − 1 heavy pebbles.


A definition

We say that the output of the graph is input-dependent if, after removing all the pebbles from the input, it is impossible to pebble the output.


Lemma 1

If the output is input-dependent, then the number of heavy pebbles is at least K.

Proof: by induction on K. The base case K = 2 is trivial.

Suppose the hypothesis holds for K − 1; we show it for K. Consider a configuration in which the output is input-dependent and which contains x heavy pebbles. Transform the configuration by putting black pebbles on the second-row vertices that are reachable from the first row, and then removing the pebbles from the first row. Let y denote the number of heavy pebbles in the new configuration.

Observations: the “new” configuration is input-dependent, and y ≤ x − 1. From the induction hypothesis, y ≥ K − 1, hence x ≥ K. QED


Lemma 2

In the first configuration that is input-independent there are at least K − 1 heavy pebbles.

Proof: a configuration can become input-independent only because of a move that places a pebble on a vertex of the second row. Therefore the new configuration has to “depend on the second row”, so it needs to have at least K − 1 heavy pebbles. QED


Now, look again at the graph

Suppose the left graph becomes input-independent first. At that moment there need to be at least K − 1 heavy pebbles in it (by Lemma 2), and at least K heavy pebbles in the rest of the graph, which is still input-dependent (by Lemma 1).

So there are at least 2K − 1 heavy pebbles altogether. QED


In the “password-protected storage”: can we allow more than one trial?

YES!

The construction gets a bit more complicated.

Main idea: the key gets destroyed “gradually”.

The maximal number of trials that we can tolerate is approximately equal to [formula not preserved in the transcript], where:

u – the bound on communication plus storage,
m – the size of the secret key.


Outline

  • Introduction and motivation

  • Our model

  • One-time computable functions

  • Proofs of erasure

  • Subsequent work and open problems


Proof of erasure

[Perito and Tsudik, ESORICS 2010]

Setting: a verifier communicates with a device over a secure link; the device has some memory M.

Goal: the verifier wants to make sure that the device has erased its memory.


The scheme of Perito and Tsudik

The verifier sends a random string R of length |M| to the device. The device overwrites its memory M with R, and then proves knowledge of R to the verifier.
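A minimal Python sketch of this basic scheme; the “proof of knowledge of R” is modeled simply as returning a hash of the stored randomness, which is only an illustration of the message flow, not the actual Perito-Tsudik protocol:

```python
import hashlib
import os

def verifier_pick_r(mem_size: int) -> bytes:
    return os.urandom(mem_size)                 # random R of length |M|

def device_store_and_prove(R: bytes) -> bytes:
    memory = bytearray(R)                       # overwrites the device memory M
    return hashlib.sha256(memory).digest()      # toy "proof of knowledge of R"

def verifier_verify(R: bytes, proof: bytes) -> bool:
    return proof == hashlib.sha256(R).digest()
```

The obvious drawback, addressed by the improved protocol below, is that the verifier has to transmit |M| bits.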


Our idea

We construct a “hash function”

H: {0,1}^a → {0,1}^a    (where a is small)

such that it

  • can be computed in memory of length |M|,

  • but cannot be computed in memory slightly smaller than |M|.


The improved protocol

The verifier sends a random string X of length a to the device. The device computes Y := H(X) (this overwrites M) and sends Y back. The verifier checks that Y = H(X).

Advantage: the communication from the verifier to the device is much shorter.
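A minimal Python sketch of the improved message flow; `H_big` is a hypothetical placeholder that merely allocates a buffer of size |M| before compressing (the real H from the paper is the memory-bound pyramid construction):

```python
import hashlib
import os

def H_big(x: bytes, mem_size: int) -> bytes:
    """Placeholder for H: {0,1}^a -> {0,1}^a that should require ~|M| memory."""
    buf = hashlib.shake_256(x).digest(mem_size)   # fill |M| bytes of memory
    return hashlib.sha256(buf).digest()[:len(x)]  # compress back to a bits

def verifier_challenge(a: int = 32) -> bytes:
    return os.urandom(a)                          # short random string X

def device_respond(X: bytes, mem_size: int) -> bytes:
    return H_big(X, mem_size)                     # Y := H(X), overwriting M

def verifier_check(X: bytes, Y: bytes, mem_size: int) -> bool:
    return Y == H_big(X, mem_size)                # accept iff Y = H(X)
```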


How to construct such an H?

We again use the pyramid graph!

Let w be the length of a block and set K := |M|/w, so K is the number of blocks. The input is X and the output is Y = H(X).

Fact 1: this function can be computed in memory of length |M|.

Fact 2

Using the “pebbling techniques” we can show that:

it is impossible to compute H in memory significantly smaller than |M|.


Outline

  • Introduction and motivation

  • Our model

  • One-time computable functions

  • Proofs of erasure

  • Subsequent work and open problems


Subsequent work

Dziembowski, Kazana and Wichs

Key-Evolution Schemes Resilient to Space-Bounded Leakage

(in submission)

We show key-evolution schemes secure in this model.


Research directions

  • Find new applications.

  • Proofs in the standard model (i.e., without the random oracle)?


