
EE515/IS523 Think Like an Adversary Lecture 5 Access Control in a Nutshell

This lecture provides an overview of access control mechanisms, including challenge-response authentication and key establishment. It also discusses the differences between Kerberos, PKI, and IBE, and explores the concepts of public key certificates and ID-based cryptography. Additionally, the lecture covers OS security and the importance of user authentication, access control, protection, and isolation.

Presentation Transcript


  1. EE515/IS523 Think Like an Adversary, Lecture 5: Access Control in a Nutshell. Yongdae Kim

  2. Recap • http://security101.kr • E-mail policy • Include [ee515] or [is523] in the subject of your e-mail • Student Survey • http://bit.ly/SiK9M3 • Student Presentation • Send me email. • Preproposal deadline: This Wednesday 9:00 AM

  3. Challenge-response authentication • Alice is identified by a secret she possesses • Bob needs to know that Alice does indeed possess this secret • Alice provides a response to a time-variant challenge • Response depends on both the secret and the challenge • Using • Symmetric encryption • One-way functions

  4. Challenge-Response using SKE • Alice and Bob share a key K • Taxonomy • Unilateral authentication using timestamps • Unilateral authentication using random numbers • Mutual authentication using random numbers • Unilateral authentication using timestamps • Alice → Bob: EK(tA, B) • Bob decrypts and verifies that the timestamp is OK • Parameter B prevents replay of the same message in the B → A direction
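
A minimal Python sketch of the timestamp variant above, assuming the shared key K is modeled with Fernet authenticated encryption from the third-party cryptography package; the JSON message layout and the 30-second freshness window are illustrative choices, not part of the lecture.

# Unilateral authentication with timestamps: A -> B: EK(tA, B)
import json, time
from cryptography.fernet import Fernet

K = Fernet.generate_key()           # shared key K, distributed out of band
alice, bob = Fernet(K), Fernet(K)   # both ends hold the same key
MAX_SKEW = 30                       # seconds: acceptance window for tA (assumed)

def alice_send(receiver_id: str) -> bytes:
    # A -> B: EK(tA, B)
    return alice.encrypt(json.dumps({"tA": time.time(), "to": receiver_id}).encode())

def bob_verify(token: bytes, my_id: str = "B") -> bool:
    msg = json.loads(bob.decrypt(token))
    fresh = abs(time.time() - msg["tA"]) <= MAX_SKEW   # timestamp is OK
    addressed_to_me = msg["to"] == my_id               # "B" prevents replay in the B -> A direction
    return fresh and addressed_to_me

print(bob_verify(alice_send("B")))   # True for a fresh, correctly addressed message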

  5. Challenge-Response using SKE • Unilateral authentication using random numbers • Bob → Alice: rb • Alice → Bob: EK(rb, B) • Bob checks to see if rb is the one it sent out • Also checks “B” - prevents reflection attack • rb must be non-repeating • Mutual authentication using random numbers • Bob → Alice: rb • Alice → Bob: EK(ra, rb, B) • Bob → Alice: EK(ra, rb) • Alice checks that ra, rb are the ones used earlier
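
The mutual variant with random numbers can be sketched the same way; again the shared key is modeled with Fernet from the cryptography package, and the nonce lengths and field names are illustrative.

# Mutual authentication using random numbers (three messages)
import json, secrets
from cryptography.fernet import Fernet

K = Fernet.generate_key()
alice, bob = Fernet(K), Fernet(K)

r_b = secrets.token_hex(16)                       # (1) Bob -> Alice: rb
r_a = secrets.token_hex(16)
msg2 = alice.encrypt(json.dumps(                  # (2) Alice -> Bob: EK(ra, rb, B)
    {"ra": r_a, "rb": r_b, "to": "B"}).encode())

m2 = json.loads(bob.decrypt(msg2))
assert m2["rb"] == r_b and m2["to"] == "B"        # Bob: rb is the one he sent; "B" stops reflection

msg3 = bob.encrypt(json.dumps(                    # (3) Bob -> Alice: EK(ra, rb)
    {"ra": m2["ra"], "rb": m2["rb"]}).encode())

m3 = json.loads(alice.decrypt(msg3))
assert m3["ra"] == r_a and m3["rb"] == r_b        # Alice: both nonces are the ones used earlier
print("mutual authentication complete")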

  6. Challenge-response using OWF • Instead of encryption, use a keyed MAC hK • Check: compute the MAC from known quantities, and compare with the received message • SKID3 • Bob → Alice: rb • Alice → Bob: ra, hK(ra, rb, B) • Bob → Alice: hK(ra, rb, A)
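
SKID3 needs no encryption at all, only the keyed MAC hK; a sketch with hK instantiated as HMAC-SHA256 from the Python standard library follows (the field separator and nonce sizes are assumptions of the example).

import hmac, hashlib, secrets

K = secrets.token_bytes(32)                        # shared MAC key

def h_K(*fields: str) -> str:
    # keyed one-way function hK over an ordered list of fields
    return hmac.new(K, "|".join(fields).encode(), hashlib.sha256).hexdigest()

r_b = secrets.token_hex(16)                        # (1) Bob -> Alice: rb
r_a = secrets.token_hex(16)                        # (2) Alice -> Bob: ra, hK(ra, rb, B)
tag_a = h_K(r_a, r_b, "B")
# Bob recomputes the MAC from known quantities and compares it with the message
assert hmac.compare_digest(tag_a, h_K(r_a, r_b, "B"))
tag_b = h_K(r_a, r_b, "A")                         # (3) Bob -> Alice: hK(ra, rb, A)
assert hmac.compare_digest(tag_b, h_K(r_a, r_b, "A"))
print("SKID3 run verified")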

  7. Key Establishment, Management • Key establishment • Process whereby a shared secret key becomes available to two or more parties • Subdivided into key agreement and key transport • Key management • The set of processes and mechanisms which support key establishment • The maintenance of ongoing keying relationships between parties

  8. Access Control in a Nutshell Yongdae Kim

  9. Kerberos vs. PKI vs. IBE • Still debating • Let's look at them one by one!

  10. Kerberos (cont.) • Parties: A (client), B (server), T (trusted third party / key distribution center) • A → T: A, B, NA • T → A: EKBT(k, A, L), EKAT(k, NA, L, B) • A → B: EKBT(k, A, L), Ek(A, TA, Asubkey) • B → A: Ek(TA, Bsubkey) • EKBT(k, A, L): Token for B • EKAT(k, NA, L, B): Token for A • L: Lifetime • NA: nonce chosen by A • Ek(A, TA, Asubkey): To prove to B that A knows k • TA: Timestamp • Ek(TA, Bsubkey): To prove to A that B knows k
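
The four messages above can be walked through in Python. This is a toy sketch, not real Kerberos: every key (KAT, KBT and the session key k) is modeled as a Fernet key from the cryptography package, the subkeys are omitted, and the field names and one-hour lifetime are assumptions of the example.

import json, time, secrets
from cryptography.fernet import Fernet

K_AT, K_BT = Fernet.generate_key(), Fernet.generate_key()   # long-term keys shared with T

N_A = secrets.token_hex(8)                                   # (1) A -> T: A, B, NA

k = Fernet.generate_key()                                    # T picks a fresh session key k
L = time.time() + 3600                                       # lifetime L (1 hour, illustrative)
ticket_B = Fernet(K_BT).encrypt(json.dumps(                  # token for B: EKBT(k, A, L)
    {"k": k.decode(), "client": "A", "L": L}).encode())
token_A = Fernet(K_AT).encrypt(json.dumps(                   # token for A: EKAT(k, NA, L, B)
    {"k": k.decode(), "NA": N_A, "L": L, "server": "B"}).encode())

tA = json.loads(Fernet(K_AT).decrypt(token_A))               # A opens its token
assert tA["NA"] == N_A and tA["server"] == "B"               # NA fresh, intended server is B
k_at_A = Fernet(tA["k"].encode())

authenticator = k_at_A.encrypt(json.dumps(                   # (3) A -> B: ticket_B, Ek(A, TA)
    {"from": "A", "TA": time.time()}).encode())

tB = json.loads(Fernet(K_BT).decrypt(ticket_B))              # B opens the ticket with KBT
assert time.time() < tB["L"]                                 # still inside lifetime L
k_at_B = Fernet(tB["k"].encode())
auth = json.loads(k_at_B.decrypt(authenticator))
assert auth["from"] == tB["client"]                          # A has proved knowledge of k

reply = k_at_B.encrypt(json.dumps({"TA": auth["TA"]}).encode())   # (4) B -> A: Ek(TA)
assert json.loads(k_at_A.decrypt(reply))["TA"] == auth["TA"]      # B has proved knowledge of k
print("Kerberos exchange (simplified) complete")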

  11. Kerberos (Scalable) • Parties: A (client), T (Authentication Server, AS), G (Ticket-Granting Server, TGS), B (server) • A → T: A, G, NA • T → A: EKGT(kAG, A, L), EKAT(kAG, NA, L, G) • A → G: EKGT(kAG, A, L), EkAG(A, TA), B, NA’ • G → A: EKGB(kAB, A, L, NA’), EkAG(kAB, NA’, L, B) • A → B: EKGB(kAB, A, L, NA’), EkAB(A, TA’, Asubkey) • B → A: EkAB(TA’, Bsubkey)

  12. Public Key Certificate • Public-key certificates are a vehicle by which public keys may be stored, distributed or forwarded over unsecured media • The objective • Make one entity’s public key available to others such that its authenticity and validity are verifiable • A public-key certificate is a data structure with two parts • Data part • Cleartext data including a public key and a string identifying the party (subject entity) to be associated with it • Signature part • The digital signature of a certification authority over the data part • Binding the subject entity’s identity to the specified public key
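
The two-part structure (data part plus CA signature) can be illustrated directly; the sketch below uses Ed25519 from the cryptography package and a JSON data part, which is purely illustrative (real certificates use X.509 encoding).

import json
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

ca_key = ed25519.Ed25519PrivateKey.generate()        # CA's own signature key pair
subject_key = ed25519.Ed25519PrivateKey.generate()   # subject entity's key pair
subject_pub = subject_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)

# Data part: cleartext public key plus a string identifying the subject entity
data_part = json.dumps({"subject": "CN=Alice", "public_key": subject_pub.hex()}).encode()

# Signature part: the CA's digital signature over the data part
signature_part = ca_key.sign(data_part)

# Anyone holding the CA's authentic public key can verify the binding;
# verify() raises InvalidSignature if either part has been altered.
ca_key.public_key().verify(signature_part, data_part)
print("subject identity bound to public key, vouched for by the CA")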

  13. CA • A trusted third party whose signature on the certificate vouches for the authenticity of the public key bound to the subject entity • The significance of this binding must be provided by additional means, such as an attribute certificate or policy statement • The subject entity must have a unique name within the system (distinguished name) • The CA requires its own signature key pair; its public key must be distributed authentically • The CA can be off-line!

  14. ID-based Cryptography • No separate public key: the public key is the ID itself (email, name, etc.) • PKG • Private key generation center • SKID = PKGS(ID) • The PKG’s public key is public • The PKG distributes the private key associated with each ID • Encryption: C = EID(M) • Decryption: DSK(C) = M
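
The roles in the ID-based setting can be summarized as an interface; the sketch below shows only who holds what (it is not a working scheme; a concrete construction such as Boneh-Franklin would fill in these methods with pairing-based cryptography).

class PKG:
    """Private key generation center: holds the master secret."""
    def __init__(self):
        self.master_secret = object()   # known only to the PKG
        self.public_params = object()   # published; plays the role of the PKG public key

    def extract(self, identity: str):
        """SKID = PKGS(ID): derive and hand out the private key for an ID."""
        raise NotImplementedError("instantiate with a concrete IBE scheme")

def encrypt(public_params, identity: str, message: bytes) -> bytes:
    """C = EID(M): anyone can encrypt to an ID using only the public parameters."""
    raise NotImplementedError

def decrypt(sk_id, ciphertext: bytes) -> bytes:
    """DSK(C) = M: only the holder of SKID, obtained from the PKG, can decrypt."""
    raise NotImplementedError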

  15. Discussion (PKI vs. Kerberos vs. IBE) • On-line vs. off-line TTP • Implication? • Non-repudiation? • Revocation? • Scalability? • Trust issues?

  16. OS Security • OS security is essentially concerned with four problems: • User authentication links users to processes • Access control is about deciding whether a process may access a resource • Protection is the task of enforcing these decisions: ensuring a process does not access resources improperly • Isolation is the separation of one process’s resources from those of other processes

  17. Access Control • The OS mediates access requests between subjects and objects • This mediation should (ideally) be impossible to avoid or circumvent • [Diagram: a subject’s access request passes through the reference monitor, which decides whether it reaches the object]

  18. Definitions • Subjects make access requests on objects • Subjects are the ones doing things in the system, like users, processes, and programs • Objects are system resources, like memory, data structures, instructions, code, programs, files, sockets, devices, etc. • The type of access determines what is done to the object, for example execute, read, write, allocate, insert, append, list, lock, administer, delete, or transfer

  19. Access Control • Discretionary Access Control: • Access to objects (files, directories, devices, etc.) is permitted based on user identity • Each object is owned by a user. • Owners can specify freely (at their discretion) how they want to share their objects with other users, • by specifying which other users can have which form of access to their objects. • Discretionary access control is implemented on any multi-user OS (Unix, Windows NT, etc.). • Mandatory Access Control: • Access to objects is controlled by a system-wide policy • for example to prevent certain flows of information. • In some forms, the system maintains security labels for both objects and subjects • based on which access is granted or denied. • Labels can change as the result of an access • Security policies are enforced without the cooperation of users or application programs. • Mandatory access control for Linux: http://www.nsa.gov/research/selinux/
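
A minimal sketch of a label-based mandatory policy of the kind described in the MAC bullets above: every subject and object carries a system-assigned level and a central check is applied regardless of the owner's wishes. The "no read up / no write down" rule, the level names, and the paths are assumptions of the example (Bell-LaPadula style), not anything SELinux-specific.

LEVELS = {"public": 0, "confidential": 1, "secret": 2}

subject_label = {"alice_proc": "confidential"}                 # label of a running process
object_label = {"/srv/public/readme": "public",
                "/srv/secret/plans": "secret"}                 # labels of two files

def mac_allows(subject: str, obj: str, access: str) -> bool:
    s, o = LEVELS[subject_label[subject]], LEVELS[object_label[obj]]
    if access == "read":
        return s >= o          # no read up: may only read at or below own level
    if access == "write":
        return s <= o          # no write down: may not leak to lower levels
    return False

print(mac_allows("alice_proc", "/srv/secret/plans", "read"))    # False (read up denied)
print(mac_allows("alice_proc", "/srv/public/readme", "write"))  # False (write down denied)
print(mac_allows("alice_proc", "/srv/public/readme", "read"))   # True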

  20. Access Control Matrix • [Figure: a matrix with one row per subject and one column per object; each cell lists the access rights that subject holds on that object]

  21. Representations • An access control matrix can be represented internally in different ways: • Access Control Lists (ACLs) store the columns with the objects • Capability lists store the rows with the subjects • Role-based systems group rights according to the “role” of a subject.
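
The same matrix stored the two main ways described above can be sketched in a few lines; the subjects, objects, and rights are made up for the example.

# A tiny access control matrix: (subject, object) -> set of rights
matrix = {
    ("alice", "file1"): {"read", "write"},
    ("bob",   "file1"): {"read"},
    ("bob",   "sock1"): {"read", "write"},
}

# ACLs: store the matrix columns with the objects
acl = {}
for (subj, obj), rights in matrix.items():
    acl.setdefault(obj, {})[subj] = rights
# To check a request, look in the object's ACL
print("read" in acl["file1"].get("bob", set()))    # True

# Capability lists: store the matrix rows with the subjects
caps = {}
for (subj, obj), rights in matrix.items():
    caps.setdefault(subj, {})[obj] = rights
# The subject presents its "ticket" for the object
print("write" in caps["bob"].get("sock1", set()))  # True
print("write" in caps["bob"].get("file1", set()))  # False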

  22. Access Control Lists • The ACL for an object lists the access rights of each subject (usually users). • To check a request, look in the object’s ACL. • ACLs are used by most OSes and network file systems, e.g. NT, Unix, and AFS.

  23. ACL Problems • To be secure, the OS must authenticate that the user is who (s)he claims to be. • To revoke a user’s access, we must check every object in the system. • There is often no good way to restrict a process to a subset of the user’s rights.

  24. Capabilities • Capabilities store the allowed list of object accesses with each subject. • When the subject requests access to object O, it must provide a “ticket” granting access to O. • These tickets are stored in an OS-protected table associated to each process. • No widely-used OS uses pure capabilities. • Some systems have “capability-like” features: e.g. Kerberos, NT, OLPC, Android

  25. ACL vs. Capabilities • Capabilities do not require authentication: the OS just checks each ticket on access requests • Capabilities can be passed, or delegated, from one process to another • We can limit the privileges of a process by removing unnecessary tickets from its table

  26. Roles • [Diagram: on the left, subjects S1, S2, S3, …, Sm are each linked directly to objects O1, O2, …, On; on the right, the same subjects are assigned to roles R1 and R2, and the roles hold the rights on O1, O2, …, On]
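
A sketch of the role indirection pictured above: rights are attached to roles, and subjects are granted roles rather than per-object rights. All names (S1, R1, O1, the specific rights) are illustrative.

role_rights = {
    "R1": {("O1", "read"), ("O2", "read")},
    "R2": {("O1", "read"), ("O1", "write"), ("On", "append")},
}
subject_roles = {"S1": {"R1"}, "S2": {"R1", "R2"}, "S3": {"R2"}}

def allowed(subject: str, obj: str, right: str) -> bool:
    # A request is granted if any of the subject's roles carries the right
    return any((obj, right) in role_rights[r] for r in subject_roles.get(subject, ()))

print(allowed("S1", "O1", "write"))   # False: role R1 grants only read
print(allowed("S2", "O1", "write"))   # True: granted via role R2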

  27. Unix/POSIX Access Control
  kyd@dio (~) % id
  uid=3259(kyd) gid=717(faculty) groups=717(faculty),1686(mess),1847(S07C8271),1910(F07C5471),2038(S08C8271)
  kyd@dio (~) % ls -l News_and_Recent_Events.zip
  -rw-rw-rw- 1 kyd faculty 714904 Feb 22 10:00 News_and_Recent_Events.zip
  kyd@dio (/web/classes02/Spring-2011/csci5471) % ls -al
  drwxrwsr-x 4 kyd S11C5471 512 Jan 19 10:23 ./
  drwxr-xr-x 46 root daemon 1024 Feb 17 23:04 ../
  drwxrwsr-x 3 kyd S11C5471 512 Feb 16 00:36 Assignment/
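
The same POSIX metadata can be read programmatically with the Python standard library on a Unix system; the path below is illustrative ("." always exists), and stat.filemode() renders mode strings like the drwxrwsr-x above, where the "s" marks the setgid bit.

import os, stat, pwd, grp

path = "."                                     # any file or directory
st = os.stat(path)
print(stat.filemode(st.st_mode))               # e.g. drwxr-xr-x
print(pwd.getpwuid(st.st_uid).pw_name,         # owning user (cf. kyd above)
      grp.getgrgid(st.st_gid).gr_name)         # owning group (cf. faculty, S11C5471)
print(bool(st.st_mode & stat.S_ISGID))         # True if the setgid bit ("s") is set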

  28. Mandatory Access Control policies • Restrictions on allowed information flows are not decided at the user’s discretion (as with Unix chmod), but instead enforced by system policies • Mandatory access control mechanisms are aimed in particular at preventing policy violations by untrusted application software, which typically runs with at least the access privileges of the invoking user

  29. Data Pump/Data Diode • Like “air gap” security, but with a one-way communication link that allows users to transfer data from the low-confidentiality to the high-confidentiality environment, but not vice versa • Examples: • Workstations with highly confidential material are configured to have read-only access to low-confidentiality file servers

  30. The covert channel problem • Reference monitors see only intentional communication channels, such as files, sockets, and memory • However, there are many more “covert channels”, which were neither designed nor intended to transfer information at all • A malicious high-level program can use these to transmit high-level data to a low-level receiving process, which can then leak it to the outside world • Examples of covert channels: • Resource conflicts – If a high-level process has already created a file F, a low-level process will fail when trying to create a file of the same name → 1 bit of information • Timing channels – Processes can use the system clock to monitor their own progress and infer the current load, into which other processes can modulate information • Resource state – High-level processes can leave shared resources (disk head position, cache memory content, etc.) in states that influence the service response times for the next process • Hidden information in downgraded documents – Steganographic embedding techniques can be used to get confidential information past a human downgrader (least-significant bits in digital photos, variations of punctuation/spelling/whitespace in plaintext, etc.)
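
The first example (a resource conflict) is simple enough to demonstrate end to end; the toy sketch below leaks one bit per round through nothing but the existence of an agreed-upon file name in a shared directory. The directory, file name, and framing are assumptions of the example.

import os, tempfile

shared = tempfile.mkdtemp()              # stands in for any directory both processes can write
flag = os.path.join(shared, "F")

def high_send(bit: int) -> None:
    # High-level sender: create or remove the agreed-upon file F
    if bit:
        open(flag, "w").close()
    elif os.path.exists(flag):
        os.remove(flag)

def low_receive() -> int:
    # Low-level receiver: an exclusive create fails iff F already exists -> 1 bit
    try:
        fd = os.open(flag, os.O_CREAT | os.O_EXCL)
        os.close(fd)
        os.remove(flag)                  # no signal was present; clean up our probe
        return 0
    except FileExistsError:
        return 1

received = []
for b in (1, 0, 1, 1):
    high_send(b)
    received.append(low_receive())
print(received)                          # [1, 0, 1, 1]: the high-level data leaked out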
