
Enforcing Confidentiality in Low-level Programs


Presentation Transcript


  1. Enforcing Confidentiality in Low-level Programs Andrew Myers Cornell University

  2. End-to-end security? • Complex computations, untrusted components • Security mechanisms: access control, firewalls, encryption, digital signing, static verification, mandatory controls • Security guarantees: limited, not compositional

  3. End-to-end policies • End-to-end confidentiality policy: “The information in this file is confidential and should only be released to users U1 and U2” • End-to-end integrity policy: “This information is trusted and should only be affected by users U1 and U2” • Not access control, but information flow control

  4. Noninterference • “High” inputs cannot affect “low” observables • Confidentiality: high = confidential, low = public • Integrity: low = trusted, high = untrusted • Can enforce using a static type system: cheap, relatively accurate [diagram: H and L values flowing through a program; only L values reach L observers]

  5. Static checking • Expressions have an associated static label • Label captures part of the security policy for the labeled data • L1 ⊑ L2 means policy L2 is at least as restrictive as L1 int{L1} x; int{L2} y; x = y; requires L2 ⊑ L1 • Also need a program-counter label (pc) to catch implicit flows
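The label check on slide 5 can be sketched as a tiny lattice test. This is a minimal illustration in Python, not Jif's actual checker; `flows_to` and `check_assign` are names of our own:

```python
# Two-point security lattice L ⊑ H. An assignment x = y is allowed only
# if label(y) ⊑ label(x): the source's policy must be no more
# restrictive than the target's.

ORDER = {("L", "L"), ("L", "H"), ("H", "H")}  # pairs (a, b) with a ⊑ b

def flows_to(src: str, dst: str) -> bool:
    """True iff src ⊑ dst, i.e. data may flow from src to dst."""
    return (src, dst) in ORDER

def check_assign(x_label: str, y_label: str) -> bool:
    """Check `x = y` where x has label x_label and y has label y_label."""
    return flows_to(y_label, x_label)

print(check_assign("H", "L"))  # public data into a secret variable: True
print(check_assign("L", "H"))  # secret data into a public variable: False
```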

  6. Demo: Jif • Java + Information Flow • security-typed language • fine-grained information flow checking • full-featured language: objects, classes, exceptions • genericity with respect to security • declassification • static and dynamic security policies • Implemented using the JLtools extension toolkit • ~6k semicolons (lines of code) • Nate Nystrom, Lantian Zheng, Steve Zdancewic

  7. Challenges • Real computations must leak information • must go beyond noninterference • robust declassification • Enforcement for low-level programs • first-class linear control • Distributed computation/untrusted hosts • Timing channels/multithreaded programs

  8. Declassification • Declassification (downgrading): reduction of the confidentiality level of data • Allows the intentional release of confidential information (relaxes noninterference) [diagram: label lattice with L1 and L2 below L3; declassify moves data down the lattice, from L2 to L1]

  9. Password Example [diagram: a public query and a confidential password enter the Password Checker; declassification inside produces a public result]

  10. Laundering Attack [diagram: the attacker copies a secret into the confidential password; the Password Checker's declassification then releases it as the public result. Secret copied into password is leaked!]
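The laundering attack can be sketched as a toy model in Python (purely illustrative; `check` stands in for the password checker, and the assignment `password = secret` models the attacker's copy):

```python
# The checker declassifies the result of comparing a confidential
# password against a public query. If an attacker can first copy some
# other secret into the password slot, the declassification launders
# that secret into a public observable.

secret = "attack at dawn"   # confidential data the attacker wants
password = "hunter2"        # confidential password

def check(query: str) -> bool:
    # declassify(password == query): the boolean result is made public
    return password == query

# Intended use: each query releases at most one bit about the password.
assert check("hunter2") is True

# Active attack: copy the secret into the password, then probe it.
password = secret
leaked = check("attack at dawn")  # public result now reveals the secret
print(leaked)
```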

  11. Robust declassification • Intuition: a system is robust against attackers if declassification only releases information intentionally • Goal: • Characterize the end-to-end security of a system containing declassification • Define conditions that prevent unintentional leaks

  12. A simple system model • A system S is a pair (S, ↦): S is a set of states s1, s2, … ↦ is a transition relation, ↦ ⊆ S × S

  13. Views • A view ≈ of (S, ↦): an equivalence relation on S • Captures what a viewer (user, attacker) can see directly about a state s • Generalization of “security level”

  14. Example views S = String × Integer (x,i) ≈I (y,j) iff i = j "integer component is visible" ("attack at dawn", 3) ≈I ("retreat", 3) ("attack at dawn", 3) ≉I ("retreat", 4) (x,i) ≈ (y,j) iff (x,i) = (y,j) “complete view (no secrets)”

  15. Trace equivalence A view induces an observation of a trace: t1 = ("x",1)("y",1)("z",2)("z",3) t1 through view ≈I: 1, 1, 2, 3 t2 = ("a",1)("b",2)("z",2)("c",3) t2 through view ≈I: 1, 2, 2, 3 … these traces are equal modulo stuttering
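The view-induced observation and equality modulo stuttering can be sketched as follows (a toy Python model; `observe` and `destutter` are names of our own):

```python
# A view projects each state of a trace to what the viewer can see;
# stuttering (consecutive repeats) is then collapsed before comparing.

def observe(trace, view):
    """Project each state of the trace through the view."""
    return [view(s) for s in trace]

def destutter(obs):
    """Collapse consecutive repeats: [1, 1, 2, 3] -> [1, 2, 3]."""
    out = []
    for o in obs:
        if not out or out[-1] != o:
            out.append(o)
    return out

view_I = lambda s: s[1]  # only the integer component is visible

t1 = [("x", 1), ("y", 1), ("z", 2), ("z", 3)]
t2 = [("a", 1), ("b", 2), ("z", 2), ("c", 3)]

print(destutter(observe(t1, view_I)))  # [1, 2, 3]
print(destutter(observe(t2, view_I)))  # [1, 2, 3]: equal modulo stuttering
```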

  16. Observational Equivalence • Observational equivalence through ≈: S[≈] • s S[≈] s' if the traces from s look the same as the traces from s' through the view ≈.

  17. Observational Equivalence • s S[≈] s' if the traces from s look the same as the traces from s' through the view ≈. [diagram: two states whose observed traces coincide, hence related by S[≈]]

  18. Secure Systems • The view S[≈] captures what can be learned by observing execution of S through view ≈ • S is ≈-secure if ≈ = S[≈] • “Nothing more is learned from observing what the program does” • simple formulation of noninterference • The difference between ≈ and S[≈] captures intentional information release when S contains declassification

  19. Attacks • Passive attack: observation • Active attack: modification • An A-attack is a system A = (S, ↦A) such that ≈A = A[≈A] • ≈A: the attacker's view • ↦A: a set of additional transitions • ≈A = A[≈A]: “the attacker can't construct an attack that uses information he can't observe”

  20. Laundering attack [diagram: the attacked system; attacker transitions ↦A copy secret data into the password, and the checker answers yes/no]

  21. Laundering attack [diagram: states related by S[≈A]; under passive observation the secret and the password data remain indistinguishable]

  22. Effect of laundering attack [diagram: S[≈A] vs. (S∪A)[≈A]; the attacked system's observation distinguishes more states] Observation of the attacked system gives more information: not robust

  23. Robustness • A system is secure against a passive attack A if S[≈A] = ≈A • A system is robust against an active attack A if (S∪A)[≈A] ⊇ S[≈A] “The attacker learns at most as much from observing the attacked system as from passive observation”
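For a finite system, the robustness condition can be checked by brute force: compute observational equivalence from bounded trace sets, then test whether adding the attacker's transitions makes it finer. This is a toy sketch under our own encoding (states are (secret, public) pairs; the `depth` bound is an approximation, not part of the definition):

```python
# Toy robustness check. obs_equiv approximates S[view] by comparing the
# sets of destuttered observations of all traces up to a depth bound.

def traces(trans, start, depth):
    """All traces from `start` of length <= depth, as state tuples."""
    result = {(start,)}
    frontier = {(start,)}
    for _ in range(depth):
        nxt = set()
        for t in frontier:
            for (a, b) in trans:
                if a == t[-1]:
                    nxt.add(t + (b,))
        result |= nxt
        frontier = nxt
    return result

def destutter(obs):
    out = []
    for o in obs:
        if not out or out[-1] != o:
            out.append(o)
    return tuple(out)

def observations(trans, start, view, depth):
    return {destutter(tuple(view(s) for s in t))
            for t in traces(trans, start, depth)}

def obs_equiv(states, trans, view, depth=4):
    """Pairs of states with identical observation sets: S[view]."""
    return {(s, t) for s in states for t in states
            if observations(trans, s, view, depth)
            == observations(trans, t, view, depth)}

states = [(0, 0), (1, 0)]      # (secret, public): same public component
view_A = lambda s: s[1]        # the attacker sees only the public part
base = set()                   # the base system takes no steps

# Attacker transition "copy the secret into public", written uniformly
# from every state (the attacker need not observe the secret to add it).
attack = {(s, (s[0], s[0])) for s in states}

passive = obs_equiv(states, base, view_A)
attacked = obs_equiv(states, base | attack, view_A)

# Robust iff the attacked equivalence is no finer than the passive one.
robust = passive <= attacked
print(robust)  # False: the attack lets the attacker distinguish the secrets
```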

  24. Summary, part 1 • Robustness: a new building block for characterizing end-to-end system security in the presence of declassification and active attack [CSFW’01] • Next step: formulate static typing rules that enforce robustness (as well as noninterference)

  25. Verifying programs [diagram: compilation pipeline: Jif program →(jif)→ Java program + annotations →(javac)→ bytecode + annotations →(JIT)→ machine code + annotations, with “Secure?” checked against the annotations at each stage]

  26. Implicit flows at source level
  boolean{H} b;
  boolean{L} x = false;
  if (b) { x = true; /* not OK */ }
  • Implicit flow: information carried through control structure • Solution: introduce a static approximation to implicit flows (pc) • The type of every expression acquires pc:
  boolean{H} b;
  boolean{L} x = false_L;
  if (b) { x = true_H; }
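The pc rule can be sketched as a two-point check (a Python sketch of the idea, not Jif's type system; `join` and `check_assign` are our own names):

```python
# Inside `if (b)` where b has label H, the program-counter label pc is
# raised to H, so any assignment to an L variable under that branch is
# rejected: the assignment would leak which branch was taken.

def join(a: str, b: str) -> str:
    """Least upper bound in the two-point lattice L ⊑ H."""
    return "H" if "H" in (a, b) else "L"

def check_assign(pc: str, var_label: str, expr_label: str) -> bool:
    """Legal iff join(pc, expr_label) flows to the variable's label."""
    return var_label == "H" or join(pc, expr_label) == "L"

# x = true inside `if (b)` with b: H raises pc to H; rejected for x: L
print(check_assign(pc="H", var_label="L", expr_label="L"))  # False
# the same assignment at top level (pc = L) is fine
print(check_assign(pc="L", var_label="L", expr_label="L"))  # True
```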

  27. Implicit flow in low-level lang. • High-level control structures (if, while, switch, function calls, returns) become indirect and direct jumps • Less ability to reason about implicit flow • Simple rule: pc at the target of a jump is always more secret than at the jump instruction • too restrictive • doesn't handle indirect jumps (return, method call, switch)

  28. Loss of precision
  High-level (safe):
  boolean{H} b;            /* pc = L */
  boolean{L} x = false_L;  /* pc = L */
  if (b) { f(); }          /* pc = H */
  x = true_L;              /* pc = L */
  Low-level (apparently unsafe):
  MOV x, 0    ; pc = L
  CMP b, 0    ; pc = L
  JZ skip     ; pc = H
  CALL f      ; pc = H
  skip: MOV x, 1_H  ; pc = H

  29. Security-typed IL • First low-level typed language with support for dynamic control transfers, static information flow control [ESOP’01] • Continuations in A-normal form: close to assembly code • Linear continuations preserve the precision of high-level source analysis: first-class postdominators • First proof of language-based enforcement of noninterference (for any language with state and higher-order control)
  e ::= let x = prim in e | if v then e1 else e2 | let x = ref_ℓ v in e | set v1 := v2 in e | letlin y = lv in e | goto v1(v2, lv) | lgoto lv1 v

  30. Summary, part 2 • Source language can be compiled to a low-level language without loss of precision • Next step: information flow verification for machine code

  31. Conclusions • New practical and theoretical tools for enforcing end-to-end security • Language-based approaches (type-checking) leverage progress in PL area for systems issues • Next: validation of end-to-end security properties for large systems?
