One-Time Computable Self-Erasing Functions
Stefan Dziembowski
Tomasz Kazana
Daniel Wichs
(accepted to TCC 2011)
We introduce a new model for leakage/tamper resistance.
In our model the adversary is space-bounded.
We present some primitives that are secure in this model.
Applications: password-protected storage, proofs-of-erasure.
MACHINE
(PC, smartcard, etc.)
very secure
Security based on well-defined mathematical problems.
implementation
CRYPTO
not secure!
MACHINE
(PC, smartcard, etc.)
easy to attack
implementation
hard to attack
CRYPTO
1. Information leakage
MACHINE
(PC, smartcard, etc.)
2. Malicious modifications
MACHINES
. . .
PCs
specialized hardware
Construct protocols that are secure even if they are implemented on machines that are not fully trusted.
[ISW03, GLMMR04, MR04, IPW06, CLW06, Dzi06, DP07, DP08, AGV09, ADW09, Pie09, NS09, SMY09, KV09, FKPR10, DDV10, DPW10, DHLW10, BKKV10, BG10,…]
To achieve security one assumes that the power of the adversary during the “physical attack” is
“limited in some way”.
this should be justified by some physical characteristics of the device
the adversary can learn the values on up to t wires
length-shrinking
h(S)
boolean circuit
S
“Probing Attacks” [ISW03]
Bounded-Retrieval Model
“Memory Attacks” [AGV09]
h(S0)
h(S1)
h(S)
length-shrinking
low-complexity h
length-shrinking
h
length-shrinking
h
S
S0
S1
[FRTV10,DDV10]
[MR04,DP08…]
the adversary can modify up to t wires
boolean circuit
[IPSW03]
The trust assumptions on hardware can never be removed completely.
But we can try to reduce them.
Come up with attack models that are:
tradeoff
Problem: current models are not strong enough.
Example: in the BRM the adversary is assumed to be passive.
We work in the “virus model”
(but our techniques may also be used to protect against side-channel attacks)
We assume that the adversary is active:
small
(the “virus”)
big
interacts
modifies the internal data
device
small
big
send/receive
read / write
memory
small
(the “virus”)
big
send/receive
There is a limit t on the number of bits that the virus can send out.
(this is essentially the assumption used before in the BRM).
The virus can modify the contents of the memory arbitrarily.
The only restriction is that he is space-bounded.
small
(the “virus”)
read / write
memory
In this model we construct a primitive:
one-time computable pseudorandom functions
f : keys × messages → ciphertexts
message M
key R
ciphertext C = f(R, M)
Informally: “it should be possible to evaluate f(R,M) at most once”.
Normally |R| >> |M| (= |C|)
the “ideal functionality”:
message M
ciphertext C = f(R, M)
key R
message M’
error
send/receive
can only learn one value of f(R, ·)
read / write
key R
A1: extra space for the adversary
A0: for the honest scheme
key R
memory
If A1 ≥ |R| then the adversary can copy R into A1:
copy
A1: extra space for the adversary
A0: for the honest scheme
key R
Then, he can simply run the honest scheme on the “copy of R”.
He can obviously do it multiple times.
In our schemes A0 will be very short.
Therefore we can forget about A0 (and include it in the space for the adversary).
So, the memory now looks like this:
A: space for the adversary
key R
Password-protected storage:
f – a one-time computable PRF
(Enc, Dec) – a standard symmetric encryption scheme
To encrypt a message M with a password π:
Note: this will overwrite R
To decrypt compute:
Z = f(R, π)
and then
Dec(Z, C)
M
key R
C = Enc(Z, M)
π
C has to be shorter than R.
(since the adversary can use part of the space where C is stored as his memory).
A solution: store C on a read-only memory.
If an honest user makes a typo then he will not have another try.
We have a solution for this – stay tuned.
select R at random
calculate Z = f(R, π)
store (R, Enc(Z, M))
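The scheme above can be sketched in Python. Here `one_time_f` and `xor_enc` are toy stand-ins I introduce for illustration: the real f is the hash-graph construction from this talk, and Enc/Dec would be a standard symmetric cipher.

```python
import hashlib
import secrets

def one_time_f(key: bytearray, msg: bytes) -> bytes:
    """Toy stand-in for the one-time computable PRF f(R, M).

    Evaluating it overwrites R in place, so a second honest evaluation
    is impossible.  The real construction enforces this even against a
    space-bounded adversary; here we only model the interface.
    """
    z = hashlib.sha256(bytes(key) + msg).digest()
    key[:] = b"\x00" * len(key)              # self-erase the key
    return z

def xor_enc(z: bytes, m: bytes) -> bytes:
    """Toy Enc/Dec: XOR with a hash-expanded key stream (self-inverse)."""
    stream = hashlib.shake_256(z).digest(len(m))
    return bytes(a ^ b for a, b in zip(m, stream))

# setup: select R, compute Z = f(R, pi), store (R, C = Enc(Z, M))
R = bytearray(secrets.token_bytes(1024))
pi = b"correct horse"
M = b"secret message"
Z = hashlib.sha256(bytes(R) + pi).digest()   # Z = f(R, pi); R still intact here
C = xor_enc(Z, M)

# decryption: recompute Z = f(R, pi) -- this destroys R
Z2 = one_time_f(R, pi)
assert xor_enc(Z2, C) == M                   # message recovered
assert all(b == 0 for b in R)                # ... and the key is gone
```

A second decryption attempt now fails, because R has been overwritten by the single evaluation of f.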
Look again at this procedure:
Can it be done “locally” on this machine?
select a short seed S at random and store it
set R := PRG(S)
calculate Z = f(R, π) (destroying R)
recalculate R := PRG(S)
erase S
store (R, Enc(Z, M))
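These six steps can be sketched directly; `prg` and `f_destroying` are hypothetical placeholders for the real PRG and the one-time computable PRF:

```python
import hashlib
import secrets

KEY_LEN = 1024

def prg(seed: bytes) -> bytes:
    """Toy PRG: expand a short seed into a long key with SHAKE-256."""
    return hashlib.shake_256(seed).digest(KEY_LEN)

def f_destroying(key: bytearray, pw: bytes) -> bytes:
    """Stand-in for the one-time computable PRF: evaluating it wipes the key."""
    z = hashlib.sha256(bytes(key) + pw).digest()
    key[:] = b"\x00" * len(key)
    return z

pi = b"hunter2"
S = bytearray(secrets.token_bytes(32))   # 1. short random seed, stored
R = bytearray(prg(bytes(S)))             # 2. R := PRG(S)
Z = f_destroying(R, pi)                  # 3. Z = f(R, pi), destroying R
R = bytearray(prg(bytes(S)))             # 4. recompute R := PRG(S)
S[:] = b"\x00" * len(S)                  # 5. erase S
# 6. store (R, Enc(Z, M)); a later f(R, pi) on this R reproduces Z
assert f_destroying(bytearray(R), pi) == Z
```

The point of the detour through S is that computing f destroys R, yet R must still be in memory at the end; regenerating it from the short seed makes the whole procedure local.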
It seems like we do not have the right tools in complexity theory to prove anything in the plain model.
Our solution: use the random oracle model
small
big
oracle
with a hash function H
Example:
H – hash function, M – long message
In the ROM, computing H(M) requires the adversary to store the entire M first.
In real life, not necessarily:
If H is constructed using Merkle-Damgård then H(M) can be computed "on the fly":
H(M)
M
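The on-the-fly point can be illustrated in Python: only a w-byte chaining value is ever kept, so H(M) is computed without holding all of M. SHA-256 here is just a placeholder for the compression function.

```python
import hashlib

W = 32  # block width w, in bytes

def compress(chaining: bytes, block: bytes) -> bytes:
    """Toy compression function H: {0,1}^{2w} -> {0,1}^{w}."""
    return hashlib.sha256(chaining + block).digest()

def md_hash(blocks) -> bytes:
    """Merkle-Damgard: hash a long message one block at a time.

    Only the w-byte chaining value is kept in memory, so H(M) can be
    computed "on the fly" even when M arrives as a stream.
    """
    h = b"\x00" * W                 # fixed IV
    for block in blocks:
        h = compress(h, block)
    return h

msg_blocks = [bytes([i % 256]) * W for i in range(1000)]   # a "long" M
digest = md_hash(iter(msg_blocks))   # works on a generator: M is never stored
```

This is exactly why the ROM assumption "the adversary must store all of M" is unrealistic for Merkle-Damgård hashes, and why the model below applies the oracle only to short inputs.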
Assume that Random Oracle works only on messages of small length:
H: {0,1}^{cw} → {0,1}^{w}
(for a small c)
Typically:
c = 2
In this case H is just the compression function.
this will be our main building block
H(m‖m’)
m
m’
f:
output
H(m‖m’)
m
m’
input
output (the ciphertext):
the key:
R =
R1
R2
R3
R4
R5
the message:
M
the hash function:
H(m‖m’)
[Figure: the key blocks R1,…,R5 and the message M wired into a DAG of H-evaluations whose final output is the ciphertext.]
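The idea of the construction can be sketched as a hash chain over the key blocks. The exact wiring of R1,…,R5 in the paper's DAG differs from this straight chain, so treat it purely as an illustration of "compute once, erase the key as you go":

```python
import hashlib

W = 32  # block length w, in bytes

def H(m: bytes, m2: bytes) -> bytes:
    """Compression function H: {0,1}^{2w} -> {0,1}^{w} (SHA-256 as a toy)."""
    return hashlib.sha256(m + m2).digest()

def f(R: list, M: bytes) -> bytes:
    """Illustrative f(R, M): fold H over the key blocks, starting from M.

    Each key block is overwritten as soon as it is consumed, so after one
    evaluation the key is gone and f cannot honestly be computed again.
    """
    state = hashlib.sha256(M).digest()
    for i in range(len(R)):
        state = H(state, R[i])
        R[i] = b"\x00" * W          # self-erase the consumed block
    return state

R = [bytes([i]) * W for i in range(1, 6)]   # key blocks R1..R5
C = f(R, b"message")                        # the ciphertext
assert all(block == b"\x00" * W for block in R)   # key destroyed
```

Evaluating f a second time on the (now zeroed) key gives a useless value; the security argument below shows a space-bounded adversary cannot do better.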
send/receive
t bits
A: for adv.
key R
f
read / write
If A + t < |R| − ε then the adversary will never learn both
f(R, M) and f(R, M’)
for M ≠ M’
M
R
memory
We use a technique called
graph pebbling
there is a vast literature on this. See e.g.:
John E. Savage. Models of Computation: Exploring the Power of Computing. 1997.
We use techniques introduced in:
Dwork, Naor and Wee: Pebbling and Proofs of Work, CRYPTO 2005.
“output vertices”
a DAG:
“input vertices”
Intuition: there is a pebble on a vertex v if the corresponding block is in the memory.
In the initial configuration there is a pebble on every input vertex.
f:
R1
R2
R3
read / write
w – length of the block
the graph corresponding to f cannot be pebbled with T pebbles
R1
R2
R3
memory
implies
f cannot be computed in memory ≈ wT
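A black-pebbling move may place a pebble on a vertex v only if every predecessor of v carries a pebble; pebbles may be removed at any time, and the memory cost of a strategy is its peak pebble count. A minimal checker, run on a hypothetical 3-level pyramid (in the talk's model the input vertices start pebbled; here the strategy places them explicitly):

```python
def max_pebbles_used(preds, moves):
    """Simulate a black-pebbling strategy.

    preds: dict vertex -> list of predecessor vertices ([] for inputs)
    moves: list of ('put', v) / ('remove', v) actions
    Returns the peak number of pebbles, or raises if a move is illegal.
    """
    pebbled, peak = set(), 0
    for action, v in moves:
        if action == 'put':
            if not all(p in pebbled for p in preds[v]):
                raise ValueError(f"illegal: predecessors of {v} not pebbled")
            pebbled.add(v)
            peak = max(peak, len(pebbled))
        else:
            pebbled.discard(v)
    return peak

# 3-level pyramid: inputs a, b, c; middle d = H(a,b), e = H(b,c); output f = H(d,e)
preds = {'a': [], 'b': [], 'c': [], 'd': ['a', 'b'],
         'e': ['b', 'c'], 'f': ['d', 'e']}
moves = [('put', 'a'), ('put', 'b'), ('put', 'd'), ('remove', 'a'),
         ('put', 'c'), ('put', 'e'), ('remove', 'b'), ('remove', 'c'),
         ('put', 'f')]
print(max_pebbles_used(preds, moves))  # prints 4
```

Each pebble corresponds to one w-bit block held in memory, which is where the "memory ≈ wT" translation comes from.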
send/receive
read / write
The adversary can also send data to an external adversary that is not space-bounded.
Our solution: we introduce special red pebbles.
The “old” pebbles will be called black.
memory
Definition: a pebble on a vertex v is a "heavy pebble" if it is a black pebble, or a red pebble generated by Rule 1.
We require that at any point of the game the number of the heavy pebbles is at most U (where U is some parameter).
Intuition:
The only things that cost are:
w – length of the block
the graph corresponding to f cannot be pebbled with U heavy pebbles
implies
f
send/receive
t
bits
A: for adv.
key R
read / write
M
R
If A + t < |R| − ε then the adversary will never learn both
f(R, M) and f(R, M’)
for M ≠ M’
memory
How does it translate into “pebbling”?
R1
R2
RK
M
M’
It is impossible to pebble both outputs with fewer than 2K−1 heavy pebbles.
We say that the output of the graph is input-dependent if after removing all the pebbles from the input it’s impossible to pebble the output:
impossible
possible
If the output is input-dependent then the number of heavy pebbles is at least K.
Proof by induction on K.
Base case K = 2 is trivial:
let y denote the number of heavy pebbles
transform the configuration by:
putting on the second row black pebbles that are reachable from the first row
removing the pebbles from the first row
Suppose the hypothesis holds for K − 1
We show it for K:
observations:
the “new” configuration is inputdependent
y ≤ x − 1
suppose in this configuration:
the output is input-dependent
there are x heavy pebbles
From the induction hypothesis: y ≥ K − 1
hence x ≥ K
QED
In the first configuration that is input-independent there are at least K − 1 heavy pebbles.
Proof
A configuration can become input-independent only because of moves of this type.
Therefore the new configuration has to "depend on the second row".
So, it needs to have at least K − 1 heavy pebbles.
QED
there need to be at least K − 1 heavy pebbles here
R1
R2
RK
M’
M
and at least K heavy pebbles in the rest of the graph.
Suppose the left graph becomes input-independent first.
So, there are at least 2K − 1 pebbles altogether.
QED
YES!
The construction gets a bit more complicated.
Main idea: the key gets destroyed “gradually”.
The maximal number of trials that we can tolerate is approximately equal to
where:
u – the bound on communication plus storage
m – the size of the secret key
[Perito and Tsudik, ESORICS 2010]
device
verifier
secure link
memory
Goal: the verifier wants to make sure that the device has erased its memory.
a random string
of length M
device
verifier
R
memory M
R
the device proves the knowledge of R
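A sketch of this first protocol. The proof of knowledge is simplified here to returning a hash of R, which is weaker than what a real protocol needs (a challenge would prevent precomputation), but it shows the shape:

```python
import hashlib
import secrets

M_LEN = 1 << 20   # size M of the device's memory, in bytes

# verifier -> device: a random string R as long as the whole memory
R = secrets.token_bytes(M_LEN)

# device: stores R, overwriting everything else in memory, then proves
# knowledge of R (here: just a hash of it)
proof = hashlib.sha256(R).digest()

# verifier: checks the proof against its own copy of R
assert proof == hashlib.sha256(R).digest()
```

The drawback is that the verifier must transmit M bits, which motivates the improved protocol on the next slide.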
We construct a “hash function”
H: {0,1}^a → {0,1}^a
(where a is small)
that
a random string
of length a
device
verifier
X
memory M
Y
compute
Y := H(X)
(this overwrites M)
check if
Y = H(X)
advantage: communication from the verifier to the device is much shorter.
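The improved protocol, with a short challenge X. SHA-256 below is only a placeholder for the interface of H; the real H is built from the pyramid graph so that computing it forces the device to overwrite its entire memory:

```python
import hashlib
import secrets

A = 32   # the small length a, in bytes

def H_big(x: bytes) -> bytes:
    """Placeholder for H: {0,1}^a -> {0,1}^a.

    In the construction, computing H_big requires ~M bytes of workspace,
    so the device must overwrite its whole memory to answer.  SHA-256
    only models the input/output behaviour here.
    """
    return hashlib.sha256(x).digest()

X = secrets.token_bytes(A)   # verifier -> device: short random challenge
Y = H_big(X)                 # device: computing Y = H(X) overwrites M
assert Y == H_big(X)         # verifier: recomputes and accepts
```

Only 2a bits cross the wire in total, instead of the M bits of the previous protocol.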
We use again the pyramid graph!
w – the length of the block
set K := M/w
let K be the number of blocks.
Y=H(X)
Fact 1
this function can be computed in memory of length M
X
Using the “pebbling techniques” we can show that:
it is impossible to compute H in memory significantly smaller than M.
Dziembowski, Kazana and Wichs
Key-Evolution Schemes Resilient to Space-Bounded Leakage
(in submission)
We show key-evolution schemes secure in this model.
Thank you!