Presentation Transcript


1. Information Security – Theory vs. Reality 0368-4474-01, Winter 2012-2013. Lecture 8: Integrity on untrusted platforms: Proof-Carrying Data. Eran Tromer

2. Recall our high-level goal. Ensure properties of a distributed computation when parties are mutually untrusting, faulty, leaky & malicious.

3. Approaches • Assurance using validation, verification and certification • Attestation using Trusted Platform Module • Cryptographic protocols • Multiparty computation • Proofs of correctness (“delegation of computation”)

4. Toy example (3-party correctness). Alice holds x and F, computes y ← F(x), and sends y to Bob; Bob holds G, computes z ← G(y), and sends z to Carol; Carol asks: is “z=G(F(x))” true?

5. Trivial solution. Carol can recompute everything: obtain x, F, G, compute z’ ← G(F(x)), and check z’ = z. But this is uselessly expensive, and it requires Carol to fully know x, F, G (we will want to represent these via short hashes/signatures).
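To make the toy example concrete, here is a minimal sketch in Python of the three-party flow and of Carol's trivial re-check; the functions F and G are hypothetical placeholders, not part of the original lecture.

```python
# Minimal sketch of the toy example and the trivial solution.
# F and G are hypothetical placeholders for the parties' computations.

def F(x):            # Alice's computation (placeholder)
    return x * 2

def G(y):            # Bob's computation (placeholder)
    return y + 1

# Alice: holds x and F, sends y to Bob.
x = 21
y = F(x)

# Bob: holds G, sends z to Carol.
z = G(y)

# Carol (trivial solution): recompute everything and compare.
# This is uselessly expensive and requires Carol to know x, F, G in full.
z_prime = G(F(x))
assert z_prime == z, "z != G(F(x)): the computation was not performed correctly"
```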

6. Secure multiparty computation [GMW87][BGW88][CCD88]. Alice (x, F), Bob (G), and Carol run a joint protocol computing y ← F(x) and z ← G(y). Preserves integrity, and even secrecy.

7. Secure multiparty computation [GMW87][BGW88][CCD88]. But: • computational blowup is polynomial in the whole computation, not in the local computation • the computation (F and G) must be chosen in advance • it does not preserve the communication graph: parties must be fixed in advance, otherwise…

8. Secure multiparty computation [GMW87][BGW88][CCD88]. …otherwise Alice and Bob must pre-emptively talk to everyone on the Internet (Carol #1, Carol #2, Carol #3, …)!

9. Computationally-sound (CS) proofs [Micali 94]. Alice sends y ← F(x) to Bob; Bob computes z ← G(y) and a proof πz ← prove(“z=G(F(x))”), and sends both to Carol; Carol verifies πz. • Bob can generate a proof string that is: • tiny (polylogarithmic in his own computation) • efficiently verifiable by Carol. However, now Bob recomputes everything…
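As a rough sketch of how the CS-proof version changes Carol's work, the following Python stubs show the interface only; `prove` and `verify` are hypothetical stand-ins for a real computationally-sound proof system and are deliberately left unimplemented.

```python
# Interface sketch only: `prove` and `verify` stand in for a real
# computationally-sound proof system and are not implemented here.

def prove(statement: str, witness) -> bytes:
    """Produce a short proof string for `statement`; in a CS proof system
    its size is polylogarithmic in the proved computation."""
    raise NotImplementedError("placeholder for a real proving algorithm")

def verify(statement: str, proof: bytes) -> bool:
    """Check a proof string; far cheaper than redoing the computation."""
    raise NotImplementedError("placeholder for a real verification algorithm")

# Bob, after receiving y from Alice, redoes the whole computation and proves it:
#     z = G(y)
#     pi_z = prove("z = G(F(x))", witness=(x, y, z))
# Carol only needs the short check:
#     assert verify("z = G(F(x))", pi_z)
```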

10. Proof-Carrying Data [Chiesa Tromer 09], following Incrementally-Verifiable Computation [Valiant 08]. Alice computes y ← F(x) and sends it to Bob together with a proof πy that “y=F(x)”; Bob verifies πy, computes z ← G(y), and sends z to Carol with a proof πz that “z=G(y), and I got a valid proof that ‘y=F(x)’”; Carol verifies πz. • Each party prepares a proof string for the next one. • Each proof is: • tiny (polylogarithmic in the party’s own computation) • efficiently verifiable by the next party.
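The following sketch contrasts this with the CS-proof flow: each party verifies the incoming proof and proves only its own step. The `pcd_prove` / `pcd_verify` names and the message structure are invented here for illustration; they are not the actual construction.

```python
# Sketch of the proof-carrying data flow for the 3-party example.
# `pcd_prove` / `pcd_verify` and the Msg structure are illustrative only.

from typing import Callable, NamedTuple

class Msg(NamedTuple):
    value: object    # the payload (y or z)
    proof: bytes     # a short proof string attached to the payload

def pcd_prove(statement: str, local_data) -> bytes:
    raise NotImplementedError("placeholder for the real PCD prover")

def pcd_verify(statement: str, proof: bytes) -> bool:
    raise NotImplementedError("placeholder for the real PCD verifier")

def alice(x, F: Callable) -> Msg:
    y = F(x)
    return Msg(y, pcd_prove("y = F(x)", local_data=(x, y)))

def bob(msg_y: Msg, G: Callable) -> Msg:
    # Bob does not redo Alice's work; he only checks her short proof.
    assert pcd_verify("y = F(x)", msg_y.proof)
    z = G(msg_y.value)
    return Msg(z, pcd_prove(
        "z = G(y), and I got a valid proof that y = F(x)",
        local_data=(msg_y, z),
    ))

def carol(msg_z: Msg):
    # Carol verifies only the final short proof.
    assert pcd_verify("z = G(F(x))", msg_z.proof)
    return msg_z.value
```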

11. Generalizing: the Proof-Carrying Data framework

12. Generalizing: distributed computations. Distributed computation: parties exchange messages and perform computation. [Diagram: parties exchanging messages m1–m7, producing a final output mout.]

13. Generalizing: arbitrary interactions. The communication graph over time is any DAG.

14. Generalizing: arbitrary interactions. Computation and graph are determined on the fly by each party’s local inputs: human inputs, randomness, program.

15. Generalizing: arbitrary interactions. Computation and graph are determined on the fly by each party’s local inputs: human inputs, randomness, program. How to define correctness of a dynamic distributed computation?

16. C-compliance. The system designer specifies his notion of correctness via a compliance predicate C(in, code, out) that must be locally fulfilled at every node: C takes the node’s incoming messages (in), its local code (program, human inputs, randomness), and its outgoing message (out), and accepts or rejects. A distributed computation is C-compliant when C accepts at every node.
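A small sketch of this interface in Python; the Node representation and the particular choice of C are invented here to illustrate the idea of a local check at every node.

```python
# Sketch of C-compliance as a local check at every node.
# The Node representation and this particular C are illustrative only.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Node:
    inputs: List[object]   # messages received by this node ("in")
    code: Callable         # local program / human inputs / randomness ("code")
    output: object         # message sent by this node ("out")

def C(inputs: List[object], code: Callable, output: object) -> bool:
    """Example compliance predicate: the output must be exactly what the
    local code computes from the inputs (correct execution)."""
    return output == code(*inputs)

def is_C_compliant(computation: List[Node]) -> bool:
    """A distributed computation is C-compliant iff C accepts at every node."""
    return all(C(node.inputs, node.code, node.output) for node in computation)
```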

17. Examples of C-compliance. Correctness is a compliance predicate C(in, code, out) that must be locally fulfilled at every node. Some examples: C = “the output is the result of correctly computing a prescribed program”; C = “the output is the result of correctly executing some program signed by the sysadmin”; C = “the output is the result of correctly executing some type-safe program” or “… program with a valid formal proof”.
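As a sketch of the second example (“correctly executing some program signed by the sysadmin”), the predicate below checks a signature on the program before checking its execution. An HMAC is used here only as a stand-in for a real signature scheme, and the key and program encoding are invented for illustration.

```python
# Sketch of C = "the output is the result of correctly executing some
# program signed by the sysadmin". HMAC stands in for a real signature
# scheme; the key and the program encoding are illustrative only.

import hashlib
import hmac

SYSADMIN_KEY = b"hypothetical sysadmin key"

def sysadmin_sign(program_source: str) -> bytes:
    return hmac.new(SYSADMIN_KEY, program_source.encode(), hashlib.sha256).digest()

def C_signed_program(inputs, code, output) -> bool:
    program_source, signature = code        # "code" carries program + signature
    if not hmac.compare_digest(sysadmin_sign(program_source), signature):
        return False                        # reject: not signed by the sysadmin
    namespace = {}
    exec(program_source, namespace)         # the program must define run(*inputs)
    return output == namespace["run"](*inputs)

# Usage: a signed doubling program executed correctly.
prog = "def run(x):\n    return 2 * x\n"
assert C_signed_program([21], (prog, sysadmin_sign(prog)), 42)
```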

18. Dynamically augment computation with proof strings. In PCD, messages sent between parties are augmented with concise proof strings attesting to their “compliance”. The distributed computation evolves like before, except that each party also generates on the fly a proof string to attach to each output message.

19. Goals. Ensure C-compliance while respecting the original distributed computation: allow for any interaction between parties; preserve parties’ communication graph (no new channels); allow for dynamic computations (human inputs, nondeterminism, programs); keep the blowup in computation and communication local and polynomial.

20. Application: Fault- and leakage-resilient Information Flow Control

21. Application: Fault- and leakage-resilient Information Flow Control. The computation gets “secret” / “non-secret” inputs; “non-secret” inputs are signed as such. Any output labeled “non-secret” must be independent of secrets. The system perimeter is controlled and all output can be checked (but internal computation can be leaky/faulty). C allows only: • Non-secret inputs: initial inputs must be signed as “non-secret”. • IFC-compliant computation: subsequent computation respects Information Flow Control rules and follows a fixed schedule. A censor at the system’s perimeter inspects all outputs: it verifies the proof on every outgoing message and releases only non-secret data.

22. Application: Fault- and leakage-resilient Information Flow Control (cont.). The controlled perimeter is a big assumption, but otherwise there is no hope for retroactive leakage blocking (by the time you verify, the EM emanations are out of the barn). The approach is applicable when the interface across the perimeter is well-understood (e.g., network packets), and the perimeter itself can be verified using existing assurance methodology.
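A sketch of the censor at the perimeter; `pcd_verify`, the message structure, and the label names are assumptions made here for illustration.

```python
# Sketch of the perimeter censor: verify the attached proof of IFC
# compliance, then release only messages labeled non-secret.
# `pcd_verify`, LabeledMsg, and the label strings are illustrative only.

from typing import NamedTuple, Optional

class LabeledMsg(NamedTuple):
    payload: bytes
    label: str       # "secret" or "non-secret"
    proof: bytes     # PCD proof that the IFC compliance predicate held throughout

def pcd_verify(message: "LabeledMsg") -> bool:
    raise NotImplementedError("placeholder for the real PCD verifier")

def censor(outgoing: LabeledMsg) -> Optional[bytes]:
    """Inspect one outgoing message at the system perimeter."""
    if not pcd_verify(outgoing):
        return None                    # drop: no valid compliance proof
    if outgoing.label != "non-secret":
        return None                    # drop: labeled secret
    return outgoing.payload            # safe to release
```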

23. Application: Simulations and MMOs. Distributed simulation: physical models, virtual worlds (massively multiplayer online virtual reality). How can participants prove they have “obeyed the laws of physics”? (e.g., one cannot reach through a wall into a bank safe). Traditionally centralized; P2P architectures are strongly motivated but insecure [Plummer ’04] [GauthierDickey et al. ’04]. Use C-compliance to enforce the laws of physics.
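As a toy illustration of “enforcing the laws of physics” via C-compliance, the predicate below rejects a game-state update in which a player moved farther than a maximum speed allows in one tick; the state representation and the MAX_SPEED constant are invented for this sketch.

```python
# Toy "law of physics" as a compliance predicate: a player may not move
# more than MAX_SPEED per tick. State representation is illustrative only.

import math

MAX_SPEED = 5.0   # hypothetical per-tick movement limit

def C_physics(inputs, code, output) -> bool:
    (prev_state,) = inputs             # previous (proof-carrying) game state
    new_state = output                 # state the player claims after one tick
    dx = new_state["x"] - prev_state["x"]
    dy = new_state["y"] - prev_state["y"]
    return math.hypot(dx, dy) <= MAX_SPEED

# A legal move passes; "teleporting" (e.g., into the bank safe) is rejected.
prev = {"x": 0.0, "y": 0.0}
assert C_physics([prev], code=None, output={"x": 3.0, "y": 4.0})
assert not C_physics([prev], code=None, output={"x": 100.0, "y": 0.0})
```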

24. Application: Simulations and MMOs – example. Alice and Bob, playing on an airplane, can later rejoin a larger group of players and prove they did not cheat while offline: each exchanged message m carries a proof π, so a player can claim “While on the plane, I won a billion dollars, and here is a proof for that”.

25. More applications. Mentioned: fault isolation and accountability, type safety, multilevel security, simulations. • Many others: • Enforcing rules in financial systems • Proof-carrying code • Distributed dynamic program analysis • Antispam email policies • Security design reduces to “compliance engineering”: write down a suitable compliance predicate C. • Recurring patterns: signatures, censors, verify-code-then-verify-result… • Introduce design patterns (à la software engineering) [GHJV95]

26. Design patterns: • use signatures to designate parties or properties • universal compliance: take as input a (signed) description of a compliance predicate • user inputs into local computation • keep track of counters / graph structure.

27. Does this work? Established: • Formal framework • Explicit construction • “Polynomial time” - not practically feasible (yet) • Requires signature cards. Ongoing and future work: • Full implementation • Practicality • Reduce requirement for signature cards (or prove necessity) • Extensions (e.g., zero knowledge) • Interface with complementary approaches: tie “compliance” into existing methods and a larger science of security • Applications and “compliance engineering” methodology
