
Decentralized Robustness


Presentation Transcript


  1. Decentralized Robustness
  • Stephen Chong
  • Andrew C. Myers
  • Cornell University
  • CSFW 19, July 6th 2006

  2. Information flow
  • Strong end-to-end security guarantees
  • Noninterference [Goguen and Meseguer 1982]
  • Enforceable with type systems [Volpano, Smith, and Irvine 1996], among many others
  • But programs declassify information, which breaks noninterference!
  • We need semantic security conditions that hold in the presence of declassification

  3. Robustness
  • Intuition: an attacker should not be able to control the release of information
  • Zdancewic and Myers [CSFW 2001] defined a semantic security condition: an "active" attacker should not learn more information than a "passive" attacker
  • Myers, Sabelfeld, and Zdancewic [CSFW 2004] moved to a language-based setting: neither the data to declassify nor the decision to declassify should be influenced by the attacker
  • Enforced by a type system that tracks integrity, where high-integrity means not influenced by the attacker

  4. Issues with robustness
  [Figure: principals Alice, Bob, Charlie, and Damien]
  • There may be many attackers
  • Different attackers have different powers
  • How do we ensure a system is robust against an unknown attacker?
  • Different users trust different things

  5. Decentralized robustness
  • Define robustness against all attackers: a generalization of robustness, not specialized for a particular attacker
  • Uses the decentralized label model (DLM) [Myers and Liskov 2000] to characterize the power of different attackers
  • Enforce robustness against all attackers, which is complicated by unbounded, unknown attackers
  • Sound type system, implemented in Jif 3.0
  • Decentralized robustness = robustness + DLM

  6. Attackers
  • Language-based setting: an attacker A may view certain memory locations, knows the program text, and can inject code at certain program points (and thus modify memory)
  • The power of attacker A is characterized by security labels R_A and W_A
  • R_A is an upper bound on the information A can read
  • W_A is a lower bound on the information A can write
  • Defn: program c has robustness w.r.t. attacks by A with power R_A and W_A if A's attacks cannot influence the information released to A
  [Figure: security lattice ℒ, with R_A toward the most restrictive end and W_A toward the least restrictive end]

  7. Example
  // Charlie can read pub, but can't read salary[i], totalSalary, or avgSalary:
  //   pub ⊑ R_Charlie,  avgSalary ⋢ R_Charlie
  // Charlie can modify employeeCount:
  //   W_Charlie ⊑ employeeCount
  totalSalary := 0;
  i := 0;
  while (i < employeeCount) {
    totalSalary += salary[i];
    i += 1;
  }
  avgSalary := totalSalary / i;
  pub := declassify(avgSalary, (Alice or Bob) to (Alice or Bob or Charlie));
  employeeCount := 1;

  8. Decentralized label model
  • Allows mutually distrusting principals to independently specify security policies [Myers and Liskov 2000]
  • This work: the full lattice, and integrity policies
  • Security labels are built from reader policies and writer policies
  • Reader policy o→r: owner o allows r (and implicitly o) to read the information
  • Any principal that trusts o adheres to the policy (i.e., allows at most o and r to read)
  • Any principal not trusting o gives the policy no credence
  • Confidentiality policies: close reader policies under conjunction and disjunction

  9. Integrity policies
  • Writer policy o←w: owner o allows w (and implicitly o) to have influenced (written) the information
  • Any principal that trusts o adheres to the policy (i.e., allows at most o and w to have written)
  • Any principal not trusting o gives the policy no credence
  • Integrity policies: close writer policies under conjunction and disjunction

  10. Semantics of policies
  • Confidentiality: readers(p, c) is the set of principals that principal p allows to read, based on confidentiality policy c
  • c is no more restrictive than d (written c ⊑_C d) if for all p, readers(p, c) ⊇ readers(p, d)
  • ⊑_C forms a lattice: meet is disjunction, join is conjunction
  • Integrity: writers(p, c) is the set of principals that principal p allows to have written, based on integrity policy c
  • c is no more restrictive than d (written c ⊑_I d) if for all p, writers(p, c) ⊆ writers(p, d)
  • ⊑_I forms a lattice: meet is conjunction, join is disjunction; dual to confidentiality
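The policy semantics above can be sketched directly for a finite principal set. The following Python fragment is an illustrative model only, not the Jif implementation: the principal set, the TRUSTS relation, and the nested-tuple policy encoding are assumptions made for this sketch.

```python
# Minimal model of DLM policy semantics (slide 10). Assumed, not from the
# paper: the concrete principals, trust relation, and policy encoding.

PRINCIPALS = {"alice", "bob", "charlie"}

# TRUSTS[p] = the owners whose policies p gives credence to
TRUSTS = {"alice": {"alice"}, "bob": {"bob", "alice"}, "charlie": {"charlie"}}

def readers(p, policy):
    """Principals that p allows to read under a confidentiality policy."""
    kind = policy[0]
    if kind == "reader":                    # o→r: o allows r (and o) to read
        _, o, r = policy
        if o in TRUSTS[p]:                  # p trusts o, so p adheres
            return {o, r}
        return set(PRINCIPALS)              # no credence: no restriction
    if kind == "and":                       # conjunction: both restrictions
        return readers(p, policy[1]) & readers(p, policy[2])
    if kind == "or":                        # disjunction: either policy
        return readers(p, policy[1]) | readers(p, policy[2])
    raise ValueError(kind)

def writers(p, policy):
    """Principals that p allows to have written, under an integrity policy."""
    kind = policy[0]
    if kind == "writer":                    # o←w: o allows w (and o) to write
        _, o, w = policy
        if o in TRUSTS[p]:
            return {o, w}
        return set(PRINCIPALS)              # no credence: anyone may have written
    if kind == "and":
        return writers(p, policy[1]) & writers(p, policy[2])
    if kind == "or":
        return writers(p, policy[1]) | writers(p, policy[2])
    raise ValueError(kind)

def leq_C(c, d):
    """c ⊑_C d: for all p, readers(p, c) ⊇ readers(p, d)."""
    return all(readers(p, c) >= readers(p, d) for p in PRINCIPALS)

def leq_I(c, d):
    """c ⊑_I d: for all p, writers(p, c) ⊆ writers(p, d)."""
    return all(writers(p, c) <= writers(p, d) for p in PRINCIPALS)
```

For example, the conjunction of the reader policies alice→bob and bob→charlie is at least as restrictive (under ⊑_C) as alice→bob alone, since conjoining policies can only shrink each principal's reader set.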

  11. Labels
  • A label 〈c, d〉 is a pair of a confidentiality policy c and an integrity policy d
  • 〈c, d〉 ⊑ 〈c′, d′〉 if and only if c ⊑_C c′ and d ⊑_I d′
  • Labels are an expressive and concise language for confidentiality and integrity policies

  12. Attacker power in the DLM
  • For arbitrary principals p and q, we need to describe what p believes is the power of q
  • Define the label R_{p→q} as the least upper bound of the labels that p believes q can read: ℓ ⊑ R_{p→q} if and only if q ∈ readers(p, ℓ)
  • Define the label W_{p←q} as the greatest lower bound of the labels that p believes q can write: W_{p←q} ⊑ ℓ if and only if q ∈ writers(p, ℓ)

  13. Robustness against all attackers
  • Defn: command c has robustness against all attackers if, for all principals p and q, c has robustness with respect to attacks by q with power R_{p→q} and W_{p←q}

  14. Enforcement
  • Enforcing robustness [Myers, Sabelfeld, Zdancewic 2004]: "If declassification gives attacker A info, then A can't influence the data to declassify or the decision to declassify."
  • Enforcing robustness against all attackers: "For all p and q, if p believes declassification gives q info, then p believes q can't influence the data to declassify or the decision to declassify."
  • More formally: for all principals p and q, if ℓ_from ⋢ R_{p→q} and ℓ_to ⊑ R_{p→q}, then W_{p←q} ⋢ pc and W_{p←q} ⋢ ℓ_from
  • We can't use the MSZ type system for all possible attackers: that would require a different type system for each p and q!
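The quantified condition can be illustrated by checking it exhaustively over a finite principal set. In this hedged Python sketch, a label is modeled flatly by the reader and writer sets each principal ascribes to it; the `lbl`, `can_read`, and `can_write` helpers are hypothetical simplifications for the sketch, not part of the paper's formalism.

```python
# Sketch of "robustness against all attackers" for one declassification.
# Assumed: a fixed finite principal set and a flat label model in which
# every principal happens to hold the same view.

PRINCIPALS = {"alice", "bob", "charlie"}

def lbl(readers, writers):
    """label[p] = (readers p allows, writers p allows); same view for all p."""
    return {p: (set(readers), set(writers)) for p in PRINCIPALS}

def can_read(p, q, label):   # q ∈ readers(p, ℓ), i.e. ℓ ⊑ R_{p→q}
    return q in label[p][0]

def can_write(p, q, label):  # q ∈ writers(p, ℓ), i.e. W_{p←q} ⊑ ℓ
    return q in label[p][1]

def robust_declassify(l_from, l_to, pc):
    """For all p, q: if p believes the declassification releases info to q,
    then p must believe q influenced neither the data nor the decision."""
    for p in PRINCIPALS:
        for q in PRINCIPALS:
            releases = (not can_read(p, q, l_from)) and can_read(p, q, l_to)
            if releases and (can_write(p, q, pc) or can_write(p, q, l_from)):
                return False
    return True

# The avgSalary example from slide 7 (views chosen for illustration):
l_from = lbl({"alice", "bob"}, {"alice", "bob"})            # declassified data
l_to = lbl({"alice", "bob", "charlie"}, {"alice", "bob"})   # target label
pc = lbl(PRINCIPALS, {"alice"})                             # decision context
assert robust_declassify(l_from, l_to, pc)
```

If instead the pc were writable by Charlie (as when `employeeCount` guards the loop), the same check fails, matching the intuition that the attacker then controls the decision to declassify.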

  15. A sound, unusable typing rule

  Γ, pc ⊢ e : ℓ_from
  ℓ_to ⊔ pc ⊑ Γ(v)
  ∀p, q. if ℓ_from ⋢ R_{p→q} and ℓ_to ⊑ R_{p→q} then W_{p←q} ⋢ pc
  ∀p, q. if ℓ_from ⋢ R_{p→q} and ℓ_to ⊑ R_{p→q} then W_{p←q} ⋢ ℓ_from
  ──────────────────────────────────────────────────────────────────
  Γ, pc ⊢ v := declassify(e, ℓ_from to ℓ_to)

  ⇒ For all principals p and q, if ℓ_from ⋢ R_{p→q} and ℓ_to ⊑ R_{p→q}, then W_{p←q} ⋢ pc and W_{p←q} ⋢ ℓ_from

  16. Sound typing rule

  Γ, pc ⊢ e : ℓ_from
  ℓ_to ⊔ pc ⊑ Γ(v)
  ℓ_from ⊑ ℓ_to ⊔ writersToReaders(pc)
  ℓ_from ⊑ ℓ_to ⊔ writersToReaders(ℓ_from)
  ────────────────────────────────────────
  Γ, pc ⊢ v := declassify(e, ℓ_from to ℓ_to)

  • writersToReaders conservatively converts the writers of a label into readers; it is used to compare integrity against confidentiality: ∀ℓ. ∀p. writers(p, ℓ) ⊆ readers(p, writersToReaders(ℓ))
  ⇒ For all principals p: readers(p, ℓ_from) ⊇ readers(p, ℓ_to) ∩ writers(p, pc), and readers(p, ℓ_from) ⊇ readers(p, ℓ_to) ∩ writers(p, ℓ_from)
  ⇒ For all principals p and q, if ℓ_from ⋢ R_{p→q} and ℓ_to ⊑ R_{p→q}, then W_{p←q} ⋢ pc and W_{p←q} ⋢ ℓ_from
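The two writersToReaders premises can be sketched in the same flat label model as before (a label is the reader/writer sets each principal ascribes to it). Everything here is an assumption made for illustration, in particular the choice to give writersToReaders(ℓ) a trivial integrity component so only confidentiality is constrained; this is not the paper's or Jif's definition.

```python
# Sketch of the slide-16 typing-rule premises in a flat label model.
# Assumed: finite principals, identical views, and the writers_to_readers
# integrity component chosen so the integrity comparison never bites.

PRINCIPALS = {"alice", "bob", "charlie"}

def lbl(readers, writers):
    return {p: (set(readers), set(writers)) for p in PRINCIPALS}

def join(l1, l2):
    # label join: intersect readers (more confidential), union writers
    return {p: (l1[p][0] & l2[p][0], l1[p][1] | l2[p][1]) for p in PRINCIPALS}

def leq(l1, l2):
    # l1 ⊑ l2: l2 allows no more readers and no fewer writers
    return all(l1[p][0] >= l2[p][0] and l1[p][1] <= l2[p][1]
               for p in PRINCIPALS)

def writers_to_readers(label):
    # readers(p, writersToReaders(ℓ)) = writers(p, ℓ); the integrity part is
    # set to all-writers so it cannot constrain the premise (an assumption)
    return {p: (set(label[p][1]), set(PRINCIPALS)) for p in PRINCIPALS}

def declassify_ok(l_from, l_to, pc):
    """The two writersToReaders premises of the sound typing rule."""
    return (leq(l_from, join(l_to, writers_to_readers(pc))) and
            leq(l_from, join(l_to, writers_to_readers(l_from))))
```

On the avgSalary example, with pc influenced only by Alice the premises hold; letting Charlie write the pc (via `employeeCount`) makes writersToReaders(pc) include Charlie as a reader, so the first premise fails, mirroring the semantic condition on the slide.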

  17. Conclusion
  • Decentralized robustness = robustness + DLM
  • Defined robustness against all attackers: a semantic security condition that generalizes robustness to arbitrary attackers
  • The decentralized label model expresses attackers' powers
  • Sound type system, implemented in Jif 3.0, available at http://www.cs.cornell.edu/jif
  • The paper also considers downgrading integrity: qualified robustness [Myers, Sabelfeld, and Zdancewic 2004] is generalized to qualified robustness against all attackers
