
A Cryptographic Model for Access-Control


Presentation Transcript


  1. A Cryptographic Model for Access-Control • Work in progress • Shai Halevi, Paul Karger, Dalit Naor • Also: Information-flow Aspects of Cryptographic Models • Shai Halevi, Manoj Prabhakaran, Yael Tauman Kalai

  2. A Typical Cryptographic Model

  3. Reality…

  4. This talk • Trying to reconcile the “trust models” of cryptography and of access-control • Cryptographic models • Access-control models • Something in between • Case study: object storage • The issue: Distributed storage servers • The (obvious?) protocol: Capabilities • Delegation: problem and solution • What did we get?

  5. Models

  6. Crypto Models: Probabilistic Games • Participants • Honest players: run the prescribed protocol • Adversarial players: arbitrary (PPT) • In between … • Interfaces • Interaction between participants • Things that can be observed by the environment • Rules of the game • Initial setup, scheduling, timing, randomness, …

  7. Real/Abstract World Paradigm [GMW86 … Ca01 …] • Real-world probabilistic game • The capabilities of an attacker • Abstract-world probabilistic game • The abstraction that we want to realize • The standard spell • ∀ real-world adversary ∃ abstract-world adversary s.t. the observable interfaces look the same • Similar to proving that a program meets its specification • But for the presence of the arbitrary adversaries
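One minimal way to write the “standard spell” as a formula (a sketch; the EXEC notation and the environment Z are assumed here, not given on the slide):

```latex
% For every PPT real-world adversary A there is a PPT abstract-world
% adversary (simulator) S such that no environment Z can distinguish
% the two executions at the observable interfaces.
\[
\forall\, \text{PPT } \mathcal{A}\ \exists\, \text{PPT } \mathcal{S}
\ \text{s.t.}\ \forall\, \mathcal{Z}:\quad
\mathrm{EXEC}^{\mathrm{real}}_{\pi,\mathcal{A}}(\mathcal{Z})
\;\approx\;
\mathrm{EXEC}^{\mathrm{abstract}}_{\mathcal{F},\mathcal{S}}(\mathcal{Z})
\]
```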

  8. An Example: Secure Channels [CK02] • Real world: picture from above • But players share secret keys • Abstraction: a tunnel through the network • But can drop messages, see when messages are sent (and their length) • We would prefer a “perfect tunnel” abstraction, but cannot realize it

  9. Implied Trust Model: Discretionary Access-Control • Secrets/objects/messages belong to users • Secret = something the user knows and the adversary does not • Users have discretion to control access to their objects • I’m willing to send my file to Joe, but not to Dan • I’ll encrypt it so Dan cannot read it • Once a secret got to the adversary, we lost the game • If Joe cooperates with the adversary, oops

  10. Mandatory Access Control • Secrets/objects belong to “the system” • Secret = an object that is marked `secret’ (think `secret’ vs. `unclassified’) • Users have clearance levels • Labels define information-flow limitations • A process of an `unclassified’ user should never get a `secret’ object • That’s called confinement [La73]

  11. The Fallacy of Trusting High Clearance • “To give a secret object to Bill, you must trust that Bill is honest” • That’s a fine sentiment when Bill is a person • But the object is given to a computer process • Running on behalf of a `secret’ user ≠ not being malicious • Example: Bill edits the corporate strategy document (surely a `secret’ object) • Using MS-Word, infected with the latest virus

  12. Access-Control Policy for Confinement [BL73] [Wa74] • Reading `secret’ object requires a process with `secret’ clearance • More generally, process at level x can only read objects at levels x and below • A `secret’ process can only write `secret’ objects • More generally, process at level x can only write objects at level x • Bill’s MS-Word virus cannot leak the secret document by writing it to an `unclassified’ file
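The two checks on this slide are easy to state in code. Below is a minimal sketch of the [BL73] read/write rules; the two-level lattice and the function names are illustrative assumptions, not part of the talk:

```python
# Hedged sketch of the Bell-LaPadula checks from slide 12.
from enum import IntEnum

class Level(IntEnum):
    UNCLASSIFIED = 0
    SECRET = 1

def may_read(process_level: Level, object_level: Level) -> bool:
    # "No read up": a process may only read objects at its level and below.
    return object_level <= process_level

def may_write(process_level: Level, object_level: Level) -> bool:
    # Write only at the process's own level, so a `secret' process
    # (e.g., Bill's infected MS-Word) cannot write an `unclassified' file.
    return object_level == process_level

assert may_read(Level.SECRET, Level.UNCLASSIFIED)       # read down: allowed
assert not may_read(Level.UNCLASSIFIED, Level.SECRET)   # read up: denied
assert not may_write(Level.SECRET, Level.UNCLASSIFIED)  # write down: denied
```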

  13. Enforcing the Policy • We must have “trustworthy components” • Trustworthy = we really really really believe that they are not infected with viruses • Because they are small and simple, and we stared at their code long enough to be convinced that it is not buggy • They have to be at the entry-point to the network • A process with unmitigated Internet access cannot be confined

  14. Trustworthy Components • The OS kernel is not a good candidate • It’s typically quite large and complex • Usually not that hard to infect it with viruses • Neither is an application on top of the OS • It cannot be trusted more than its OS • Maybe special-purpose network cards • You can buy “evaluated” network cards today • Evaluated = someone went through the trouble of convincing a third-party examiner that there are no bugs • May even include proofs about the code

  15. The Modified Real-World Model • Angel-in-a-box: small, simple, trustworthy

  16. Achieving Confinement • The angel encrypts outgoing communication, decrypts incoming communication • E.g., using IPSec • A `secret’ host’s angel encrypts using the key of `secret’ • An `unclassified’ host’s angel is not given the key of `secret’ • Note: the `secret’ and `unclassified’ processes can still communicate using timing / traffic-analysis • The current model does not deal with those • Sometimes they can be dealt with • Sometimes the application can live with them
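A minimal sketch of the per-level keying idea, using the Python cryptography library’s Fernet for authenticated encryption; the Angel class and the key table are illustrative assumptions (the slide’s actual mechanism is IPSec):

```python
# Hedged sketch: each angel holds only the key of its host's level, so an
# `unclassified' host cannot decrypt `secret' traffic. Key distribution
# and the network layer are abstracted away.
from cryptography.fernet import Fernet, InvalidToken

LEVEL_KEYS = {"secret": Fernet.generate_key(),
              "unclassified": Fernet.generate_key()}

class Angel:
    def __init__(self, level: str):
        # The angel is only ever given the key of its host's level.
        self.cipher = Fernet(LEVEL_KEYS[level])

    def send(self, plaintext: bytes) -> bytes:
        return self.cipher.encrypt(plaintext)   # outgoing traffic

    def receive(self, ciphertext: bytes) -> bytes:
        return self.cipher.decrypt(ciphertext)  # raises on wrong-level traffic

secret_a, secret_b = Angel("secret"), Angel("secret")
msg = secret_a.send(b"corporate strategy")
assert secret_b.receive(msg) == b"corporate strategy"
try:
    Angel("unclassified").receive(msg)
except InvalidToken:
    pass  # the `unclassified' angel lacks the key: confinement holds
```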

  17. The Abstract-World Model • Similar changes; the angels have secure channels between them • Some subtleties • Treat traffic analysis as a resource • So the abstract-world adversary cannot do “more traffic analysis” than in the real world • More interesting questions arise when dealing with more involved models • E.g., generic secure function evaluation

  18. Object Storage

  19. In The Beginning • There was local storage … • … and then file servers • But the server was always too busy • being in the critical path of every I/O

  20. Storage Area Networks (SANs) • Many clients, many disks • Server may still keep track of what file goes where • + allocation tables, free space, etc. • But it is not on the critical path for I/O • No capacity for access control • Dumb disks: obey every command • A misbehaving client cannot be stopped

  21. Object Storage [CMU96…Go99] • Smarter disks: • Understand files (objects) • Know how to say no • But don’t have global view • Need “meta-data server” • knows what object goes where • decides on access-control • But not in the critical I/O path • Disks should enforce server’s decisions

  22. Capabilities [DvH66] • Client gets a “signed note” from the server • aka a capability • Disk verifies the capability before serving a command • Capabilities are sent over secure channels, so the adversary cannot get them • This (essentially) was the T10 proposed standard for object stores • [Slide picture: a note from the good ol’ server reading “The holder of this note is hereby granted permission to read object #13”]
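A minimal sketch of the “signed note”: the server MACs the permission under a key it shares with the disk, and the disk re-verifies before serving. The message format and the choice of HMAC over public-key signatures are assumptions here, not the T10 wire format:

```python
# Hedged capability sketch; all field names are illustrative.
import hmac, hashlib

SERVER_DISK_KEY = b"key provisioned between server and disk"

def make_capability(op: str, object_id: int) -> tuple[bytes, bytes]:
    note = f"{op}:object#{object_id}".encode()
    tag = hmac.new(SERVER_DISK_KEY, note, hashlib.sha256).digest()
    return note, tag   # handed to the client over a secure channel

def disk_serve(note: bytes, tag: bytes, op: str, object_id: int) -> bool:
    # The disk re-computes the MAC, then checks the note matches the command.
    expected = hmac.new(SERVER_DISK_KEY, note, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False   # forged or altered capability
    return note == f"{op}:object#{object_id}".encode()

cap = make_capability("read", 13)
assert disk_serve(*cap, "read", 13)        # capability honored
assert not disk_serve(*cap, "write", 13)   # wrong operation: refused
```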

  23. Capabilities Cannot Enforce Confinement [Bo84,KH84] • Imagine a `secret’ process S that wants to leak secrets to an `unclassified’ process U • U gets a write capability to unclassified object #7 • U copies the capability itself into object #7 • S gets a read capability to object #7 • S reads the write capability off object #7 • S uses the write capability to copy secrets into object #7 • U gets a read capability to object #7 • U reads the secrets off object #7 • The problem: unrestricted delegation
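The same leak, replayed as a self-contained toy: capabilities are modeled as opaque byte strings and the disk as a dict, so it is visible that the disk has no way to tell who presents a capability. Every name here is illustrative:

```python
# Hedged walkthrough of the slide's attack.
objects: dict[int, bytes] = {7: b""}             # unclassified object #7

write_cap_7 = b"CAP|write|object#7"              # stands in for a signed note
read_cap_7 = b"CAP|read|object#7"

def write(cap: bytes, oid: int, data: bytes):    # the disk checks the
    assert cap == b"CAP|write|object#%d" % oid   # capability, not the presenter
    objects[oid] = data

def read(cap: bytes, oid: int) -> bytes:
    assert cap == b"CAP|read|object#%d" % oid
    return objects[oid]

write(write_cap_7, 7, write_cap_7)   # U writes its own capability into #7
leaked_cap = read(read_cap_7, 7)     # S reads the write capability off #7
write(leaked_cap, 7, b"the secret")  # S uses it to copy secrets into #7
assert read(read_cap_7, 7) == b"the secret"  # U reads them: confinement broken
```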

  24. Restricting Delegation • Tie the capability to the client’s name (and authenticate clients) • [Slide picture: a note from the good ol’ server reading “Client #176 is hereby granted permission to read object #13”] • What else is there to say? • Controlled delegation is still possible • E.g., you can give a name to a group of clients
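Extending the earlier sketch, the note now names its holder and the disk serves only that holder; client authentication itself is abstracted into a trusted client_id argument, which is an assumption of this sketch:

```python
# Hedged sketch of a name-bound capability.
import hmac, hashlib

SERVER_DISK_KEY = b"key provisioned between server and disk"

def make_capability(client_id: int, op: str, object_id: int) -> tuple[bytes, bytes]:
    note = f"client#{client_id}:{op}:object#{object_id}".encode()
    return note, hmac.new(SERVER_DISK_KEY, note, hashlib.sha256).digest()

def disk_serve(client_id: int, note: bytes, tag: bytes,
               op: str, object_id: int) -> bool:
    expected = hmac.new(SERVER_DISK_KEY, note, hashlib.sha256).digest()
    # The note must name the authenticated presenter, not just the operation.
    return (hmac.compare_digest(tag, expected) and
            note == f"client#{client_id}:{op}:object#{object_id}".encode())

note, tag = make_capability(176, "read", 13)
assert disk_serve(176, note, tag, "read", 13)     # rightful holder: served
assert not disk_serve(42, note, tag, "read", 13)  # leaked capability: useless
```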

  25. Security Proof • We want to prove security • Must formally specify the real world and the abstraction • The real-world model is straightforward

  26. The Abstraction • Roughly, we realized the file-server abstraction • But not quite… • The adversary knows about the different disks • It can do traffic analysis between server and disks • It can block messages between server and disks • So access revocation does not work the same way

  27. Recap • Incorporated information-flow restrictions into cryptographic models • A new dimension • Somewhat related to collusion-prevention [LMS ’04] • Many open questions • E.g., what functions can be computed “this way” • Application to the object-store protocol • The proposed standard did not address this issue • We modified the protocol to get confinement

  28. Morals • We should strive to align cryptography with realistic trust models • E.g., try to keep secret keys in processes that can be trusted to keep them secret • When physical assumptions are needed, try to use reasonable ones • trusted network cards may be reasonable, “secure blobs” probably are not
