
One-Time Computable Self-Erasing Functions



Presentation Transcript


  1. One-Time Computable Self-Erasing Functions. Stefan Dziembowski, Tomasz Kazana, Daniel Wichs (accepted to TCC 2011)

  2. Main contribution of this work We introduce a new model for leakage/tamper resistance. In our model the adversary is space-bounded. We present some primitives that are secure in this model. Applications: password-protected storage, proofs-of-erasure.

  3. How to construct secure digital systems? CRYPTO is very secure: its security is based on well-defined mathematical problems. Its implementation on a MACHINE (PC, smartcard, etc.) is not secure!

  4. The problem CRYPTO itself is hard to attack; its implementation on a MACHINE (PC, smartcard, etc.) is easy to attack.

  5. Machines cannot be trusted! A MACHINE (PC, smartcard, etc.) is subject to: 1. information leakage, 2. malicious modifications.

  6. Relevant scenarios MACHINES range from PCs to specialized hardware. Threats: • malicious software: viruses, trojan horses; • side-channel attacks: power consumption, electromagnetic leaks, timing information.

  7. A recent trend in cryptography Construct protocols that are secure even if they are implemented on machines that are not fully trusted. [ISW03, GLMMR04, MR04, IPW06, CLW06, Dzi06, DP07, DP08, AGV09, ADW09, Pie09, NS09, SMY09, KV09, FKPR10, DDV10, DPW10, DHLW10, BKKV10, BG10,…]

  8. Main idea of this line of research To achieve security one assumes that the power of the adversary during the “physical attack” is “limited in some way”; this assumption should be justified by some physical characteristics of the device.

  9. Examples of assumptions (1/3) “Probing attacks” [ISW03]: the adversary can learn the values on up to t wires of a boolean circuit. Bounded-Retrieval Model / “memory attacks” [AGV09]: the adversary learns h(S), where h is a length-shrinking function of the state S.

  10. Examples of assumptions (2/3) Further models: the adversary learns h(S) for a length-shrinking, low-complexity h; or the state is split into parts S0 and S1 and the adversary learns h(S0) and h(S1) for length-shrinking h. [FRTV10, DDV10] [MR04, DP08, …]

  11. Examples of assumptions (3/3) [IPSW03]: the adversary can modify up to t wires of a boolean circuit.

  12. One way to look at these efforts: The trust assumptions on hardware can never be removed completely. But we can try to reduce them.

  13. General goal Come up with attack models that are: • realistic (i.e. they correspond to real-life adversaries), • allow constructing secure schemes. There is a tradeoff between the two. Problem: current models are not strong enough. Example: in the BRM the adversary is assumed to be passive.

  14. Outline • Introduction and motivation • Our model • One-time computable functions • Proofs of erasure • Subsequent work and open problems

  15. Our model We work in the “virus model” (but our techniques may also be used to protect against side-channel attacks). We assume that the adversary is active: a small adversary (the “virus”) sits on the device, modifies the internal data, and interacts with a big external adversary.

  16. The model The device contains a memory; the small adversary can read/write the memory and can send/receive messages to/from the big external adversary.

  17. What are the restrictions on interaction? There is a limit t on the number of bits that the virus can send out to the big adversary. (This is essentially the assumption used before in the BRM.)

  18. What are the restrictions on malicious modifications? The virus can modify the contents of the memory arbitrarily. The only restriction is that he is space-bounded. small (the “virus”) read / write memory

  19. Outline • Introduction and motivation • Our model • One-time computable functions • Proofs of erasure • Subsequent work and open problems

  20. Our contribution In this model we construct a primitive: one-time computable pseudorandom functions f : keys × messages → ciphertexts, mapping a key R and a message M to a ciphertext C = f(R,M). Informally: “it should be possible to evaluate f(R,M) at most once”. Normally |R| >> |M| (= |C|).

  21. The “ideal functionality”: on the first query, a message M, it returns the ciphertext C = f(R,M); on any later query M’ it returns an error. In our model: an adversary with send/receive and read/write access to the memory holding the key R can only learn one value of f(R,·).

  22. Some more details The memory contains: A0 – space for the honest scheme, A1 – extra space for the adversary, and the key R. • Main idea: design f such that: • the computation of f(R,M) twice takes more space than |A0| + |A1| + |R|, • but it can be done efficiently once in space |A0| + |R|, • hence we can compute f(R,M) exactly once (during this computation we will overwrite R).

  23. Observation If |A1| ≥ |R| then the adversary can copy R into A1. Then, he can simply run the honest scheme on the “copy of R”. He can obviously do it multiple times. • Moral: A1 has to be shorter than R.

  24. A simplifying assumption In our schemes A0 will be very short. Therefore we can forget about it (include it into the space for the adversary). So, the memory now looks like this: A – space for the adversary, plus the key R.

  25. An application of this primitive Password-protected storage: f – a one-time computable PRF, (Enc, Dec) – a standard symmetric encryption scheme. To encrypt a message M with a password π: • select R at random, • calculate Z = f(R, π) (note: this will overwrite R), • store (R, C = Enc(Z, M)). To decrypt, compute Z = f(R, π) and then Dec(Z, C).
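The flow above can be sketched in a few lines of Python. This is only an illustration under stated assumptions: the one-time computable PRF f is replaced by a hash-and-wipe placeholder (the real pyramid construction comes on slide 33), (Enc, Dec) is a toy XOR-with-keystream cipher, and the names protect and recover are ours.

```python
import hashlib
import os

def f(R: bytearray, msg: bytes) -> bytes:
    """Placeholder for the one-time computable PRF f(R, M): hash R || M and
    then overwrite R, mimicking the self-erasing behaviour (the real
    construction evaluates a pyramid of hashes, see slide 33)."""
    z = hashlib.sha256(bytes(R) + msg).digest()
    for i in range(len(R)):
        R[i] = 0                       # the key is destroyed by the evaluation
    return z

def enc(key: bytes, m: bytes) -> bytes:
    """Toy stand-in for a standard symmetric scheme Enc(Z, M): XOR with a
    SHA-256-derived keystream (any real cipher would do here)."""
    stream = bytearray()
    ctr = 0
    while len(stream) < len(m):
        stream += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(m, stream))

dec = enc                              # XOR keystream encryption is its own inverse

def protect(password: bytes, m: bytes, key_len: int = 64):
    """Encrypt M under password pi: select R, compute Z = f(R, pi), store (R, Enc(Z, M)).
    We hand f a *copy* of R since (R, C) must be stored together; slide 28
    explains how to avoid ever keeping two copies of R."""
    R = bytearray(os.urandom(key_len))
    Z = f(bytearray(R), password)
    return R, enc(Z, m)

def recover(R: bytearray, C: bytes, password: bytes) -> bytes:
    """Decrypt: recompute Z = f(R, pi) -- this erases R -- and run Dec(Z, C)."""
    Z = f(R, password)
    return dec(Z, C)

R, C = protect(b"correct horse", b"the secret message M")
print(recover(R, C, b"correct horse"))   # works once; R is now all zeros
```

After the single call to recover, R has been overwritten, so a second attempt (even with the correct password) no longer yields M; slides 26-28 discuss the complications this causes.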

  26. Problem C has to be shorter than R (since the adversary can use part of the space where C is stored as his memory). A solution: store C on a read-only memory.

  27. Another problem If an honest user makes a typo then he will not have another try. We have a solution for this – stay tuned.

  28. Yet another problem Look again at this procedure: select R at random, calculate Z = f(R, π), store (R, Enc(Z, M)). Can it be done “locally” on this machine? • Looks problematic, since the calculation of Z will destroy R. • Solution: select a short seed S at random and store it, set R := PRG(S), calculate Z = f(R, π) (destroying R), recalculate R := PRG(S), erase S, store (R, Enc(Z, M)).
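A minimal sketch of this seed-based setup, assuming the same toy placeholders as before (a hash-counter PRG and a hash-and-wipe stand-in for f); the names prg and local_setup are hypothetical.

```python
import hashlib
import os

def prg(seed: bytes, out_len: int) -> bytearray:
    """Toy PRG: expand a short seed S into a long string by hashing
    seed || counter (a stand-in for any cryptographic PRG)."""
    out = bytearray()
    ctr = 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:out_len]

def f(R: bytearray, msg: bytes) -> bytes:
    """Placeholder one-time PRF: hash, then wipe R (as in the earlier sketch)."""
    z = hashlib.sha256(bytes(R) + msg).digest()
    for i in range(len(R)):
        R[i] = 0
    return z

def local_setup(password: bytes, key_len: int = 1024):
    """The procedure from slide 28: R never needs to exist in two copies."""
    S = bytearray(os.urandom(16))      # select a short seed S at random
    R = prg(bytes(S), key_len)         # R := PRG(S)
    Z = f(R, password)                 # destroys R
    R = prg(bytes(S), key_len)         # recalculate R := PRG(S)
    for i in range(len(S)):            # erase S
        S[i] = 0
    return R, Z                        # now store (R, Enc(Z, M)) as on slide 25
```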

  29. Can we prove anything in our model? It seems like we do not have the right tools in complexity theory to prove anything in the plain model. Our solution: use the random oracle model; both the small and the big adversary get access to an oracle with a hash function H.

  30. Using ROM in this context is delicate Example: H – hash function, M – long message. In ROM, computing H(M) requires the adversary to store the entire M first. In real life, not necessarily: if H is constructed using Merkle-Damgård, then H(M) can be computed “on the fly”.
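This can be seen directly with Python's hashlib, whose SHA-256 follows the Merkle-Damgård design: the message is absorbed block by block, so the whole of M never has to be held in memory at once.

```python
import hashlib

def hash_stream(chunks):
    """Compute H(M) incrementally: each chunk is absorbed into the
    compression-function state and can then be discarded."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    return h.digest()

def long_message(n_chunks=64, chunk_size=1 << 20):
    """A 64 MiB message produced on the fly, never materialized at once."""
    for i in range(n_chunks):
        yield bytes([i % 256]) * chunk_size

print(hash_stream(long_message()).hex())
```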

  31. Our solution Assume that the Random Oracle works only on messages of small length: H: {0,1}^{cw} -> {0,1}^w (for a small c). Typically c = 2; in this case H is just the compression function. This H(m||m’) will be our main building block.
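In code, this building block can be modeled by any fixed-input-length hash; below we take w = 256 bits and let SHA-256 play the role of the compression function (an illustrative assumption, not the paper's instantiation).

```python
import hashlib

W = 32   # block length w in bytes, so H maps 2w bits to w bits (c = 2)

def H(m: bytes, m2: bytes) -> bytes:
    """The basic building block H(m || m'): {0,1}^{2w} -> {0,1}^w."""
    assert len(m) == W and len(m2) == W
    return hashlib.sha256(m + m2).digest()
```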

  32. Our functions will always correspond to a graph: the inputs of f sit at the input vertices, the output at the output vertex, and the value at each internal vertex is H(m||m’), where m and m’ are the values of its children.

  33. Our PRF is based on a pyramid graph The bottom row consists of the key blocks R = (R1, R2, R3, R4, R5) together with the message M; the value of every higher vertex is the hash H(m||m’) of the values m, m’ of its two children; the value of the single top vertex is the output (the ciphertext).
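A sketch of the pyramid evaluation in Python. How exactly M enters the bottom row is our simplification (we fold a hash of M into every key block); the point being illustrated is the evaluation pattern: each row is computed from the previous one in place, so a single evaluation fits in roughly |R| memory and destroys R as it goes.

```python
import hashlib
import os

W = 32                                   # block size in bytes

def H(a: bytes, b: bytes) -> bytes:
    """Compression function H(m || m'), modeled by SHA-256."""
    return hashlib.sha256(a + b).digest()

def pyramid_prf(R: list, M: bytes) -> bytes:
    """Evaluate the pyramid graph over key blocks R = [R1, ..., RK] and
    message M, overwriting R row by row (a sketch, not the exact paper
    construction)."""
    K = len(R)
    m = hashlib.sha256(M).digest()
    for i in range(K):                   # bottom row: mix M into the key blocks
        R[i] = H(R[i], m)
    for row_len in range(K - 1, 0, -1):  # each higher row has one vertex fewer
        for i in range(row_len):
            R[i] = H(R[i], R[i + 1])
        del R[row_len:]                  # blocks of the old row are gone for good
    return R[0]                          # the top vertex: the ciphertext C = f(R, M)

R = [os.urandom(W) for _ in range(5)]    # the key R = (R1, ..., R5) as on the slide
C = pyramid_prf(R, b"the message M")
print(len(R), C.hex())                   # R now holds a single overwritten block
```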

  34. Our theorem (informally) If |A| + t < |R| - ε, then the adversary will never learn both f(R,M) and f(R,M’) for M ≠ M’. (Here A is the adversary’s space and t is the bound on the number of bits he can send out.)

  35. So, how to prove the security? We use a technique called graph pebbling; there is a vast literature on this. See e.g.: John E. Savage. Models of Computation: Exploring the Power of Computing. 1997. We use techniques introduced in: Dwork, Naor and Wee, Pebbling and Proofs of Work, CRYPTO 2005.

  36. Graph pebbling We work with a DAG with designated input vertices and output vertices. Intuition: there is a pebble on a vertex v if the corresponding block is in the memory. In the initial configuration there is a pebble on every input vertex.

  37. The rules of moving the pebbles • there are up to B pebbles, • if all the children of v carry a pebble, we can put a pebble on v, • a pebble can be removed from any vertex. • Goal: pebble every output vertex.
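To make the rules concrete, here is a small move checker together with the honest bottom-up strategy for the pyramid DAG; the vertex naming and the budget B = K + 1 are our choices for the sketch, not taken from the paper.

```python
def pyramid_children(K):
    """Children (the two lower neighbours) of each vertex in a K-input
    pyramid; vertex (row, i), with row 0 holding the input vertices."""
    children = {(0, i): [] for i in range(K)}
    for row in range(1, K):
        for i in range(K - row):
            children[(row, i)] = [(row - 1, i), (row - 1, i + 1)]
    return children

def place(pebbles, v, children, B):
    """Rules of slide 37: at most B pebbles, and v may receive a pebble
    only if all its children already carry one."""
    assert len(pebbles) < B, "pebble budget B exhausted"
    assert all(c in pebbles for c in children[v]), "children not pebbled"
    pebbles.add(v)

def pebble_pyramid(K):
    """Honest strategy: sweep each row left to right, freeing a child's
    pebble as soon as its parent is placed; B = K + 1 pebbles suffice."""
    children = pyramid_children(K)
    pebbles = {(0, i) for i in range(K)}      # initial configuration
    for row in range(1, K):
        for i in range(K - row):
            place(pebbles, (row, i), children, B=K + 1)
            pebbles.discard((row - 1, i))     # rule: pebbles may always be removed
        pebbles.discard((row - 1, K - row))
    return (K - 1, 0) in pebbles              # is the output vertex pebbled?

print(pebble_pyramid(5))                      # True
```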

  38. Fact [Dwork-Naor-Wee 2005] Let w be the length of a block. If the graph corresponding to f cannot be pebbled with T pebbles, then f cannot be computed in memory ≈ wT.

  39. But our model is more complicated… The adversary can also send data to an external adversary that is not space-bounded. Our solution: we introduce special red pebbles. The “old” pebbles will be called black.

  40. New rules • If there is a black pebble on v, then we can put a red pebble on it. • If all the children of v carry a (red or black) pebble, we can put a black pebble on v. • If all the children of v carry a red pebble, then we can put a red pebble on it. • A black pebble can be removed from any vertex (there is no need to remove the red pebbles). Definition: a pebble on a vertex v is a “heavy pebble” if it is a black pebble, or a red pebble generated by Rule 1. • Goal: put a black or red pebble on every output vertex.
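A sketch of these rules as a move checker, with our own bookkeeping of heavy pebbles following the definition above (the paper's formalization may differ in details):

```python
class RedBlackPebbling:
    """Move checker for the red-black pebbling game of slides 39-41."""

    def __init__(self, children, inputs, U):
        self.children = children        # dict: vertex -> list of its children
        self.black = set(inputs)        # initial configuration: black pebbles on inputs
        self.red = set()                # vertices known to the external adversary
        self.red_heavy = set()          # red pebbles created from black ones (Rule 1)
        self.U = U                      # bound on heavy pebbles ~ memory + communication

    def heavy(self):
        # a vertex is heavy if it carries a black pebble or a Rule-1 red pebble
        return len(self.black | self.red_heavy)

    def _check(self):
        assert self.heavy() <= self.U, "heavy-pebble bound U exceeded"

    def leak(self, v):                  # Rule 1: black -> red (send the block out)
        assert v in self.black
        self.red.add(v)
        self.red_heavy.add(v)
        self._check()

    def compute_black(self, v):         # Rule 2: all children pebbled -> black on v
        assert all(c in self.black or c in self.red for c in self.children[v])
        self.black.add(v)
        self._check()

    def compute_red(self, v):           # Rule 3: all children red -> red on v
        assert all(c in self.red for c in self.children[v])
        self.red.add(v)                 # costs nothing: the big adversary is unbounded

    def erase(self, v):                 # Rule 4: a black pebble can always be removed
        self.black.discard(v)

# tiny usage: a single edge  in -> out
game = RedBlackPebbling(children={"out": ["in"], "in": []}, inputs=["in"], U=2)
game.compute_black("out")   # 2 heavy pebbles: ok
game.leak("out")            # "out" becomes a Rule-1 red pebble, still 2 heavy
```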

  41. The new restriction We require that at any point of the game the number of heavy pebbles is at most U (where U is some parameter). Intuition: the only things that cost are: • the black pebbles ≈ “memory”, • transforming a black pebble into a red one ≈ “communication”.

  42. Fact that we prove Let w be the length of the block. If the graph corresponding to f cannot be pebbled with U heavy pebbles, then f cannot be computed if the sum of the memory size and the number of sent bits is ≈ wU.

  43. Now, recall what we want to prove: if |A| + t < |R| - ε, then the adversary will never learn both f(R,M) and f(R,M’) for M ≠ M’. How does it translate into “pebbling”?

  44. The pebbling problem The graph consists of two pyramids that share the key inputs R1, R2, …, RK; one additionally takes M as input, the other M’. It is impossible to pebble both outputs with fewer than 2K-1 heavy pebbles.

  45. A definition We say that the output of the graph is input-dependent if, after removing all the pebbles from the input, it is impossible to pebble the output.

  46. Lemma 1 If the output is input-dependent, then the number of heavy pebbles is at least K. Proof by induction on K. The base case K = 2 is trivial.

  47. Suppose the hypothesis holds for K-1; we show it for K. Suppose that in some configuration the output is input-dependent and there are x heavy pebbles. Transform the configuration by: putting black pebbles on the second-row vertices that are reachable from the first row, and removing the pebbles from the first row. Let y denote the number of heavy pebbles in the new configuration. Observations: the “new” configuration is input-dependent, and y ≤ x-1. From the induction hypothesis, y ≥ K-1, hence x ≥ K. QED

  48. Lemma 2 In the first configuration that is input-independent there are at least K-1 heavy pebbles. Proof: a configuration can become input-independent only because of moves of this type. Therefore the new configuration has to “depend on the second row”. So, it needs to have at least K-1 heavy pebbles. QED

  49. Now, look again at the graph. Suppose the left pyramid becomes input-independent first. Then there need to be at least K-1 heavy pebbles on it (Lemma 2), and at least K heavy pebbles in the rest of the graph (Lemma 1). So there are at least 2K-1 heavy pebbles altogether. QED

  50. In the “password-protected storage”: can we allow more than one trial? YES! The construction gets a bit more complicated. Main idea: the key gets destroyed “gradually”. The maximal number of trials that we can tolerate is approximately a function of: u – the bound on communication plus storage, and m – the size of the secret key.
