
COEN 250



  1. COEN 250 Authorization

  2. Fundamental Mechanisms: Access Matrix • Subjects • Objects (Subjects can be objects, too.) • Access Rights • Example: • OS • Subjects = Processes • Objects = System Resources • Access Rights: read, write, execute

  3. Fundamental Mechanisms: Access Matrix • Example: • DBMS • Subjects = Users • Objects = Relations • Access Rights: retrieve, update, insert, delete

  4. Fundamental Mechanisms: Access Matrix • Access Matrix: • Row for each object • Column for each subject • Entry is a set of access rights. • Later Security Models: • Allow for administrative operations that change the access matrix. • Example: Owner of file can give permissions to others.
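
To make the structure concrete, here is a minimal Python sketch of an access matrix; the subjects, objects, and rights are illustrative names, and the nested-dictionary layout is just one possible representation.

```python
# Minimal sketch of an access matrix as a nested dictionary.
# Following the slide, rows are objects and columns are subjects;
# each entry is a set of access rights. All names are illustrative.

access_matrix = {
    "file1": {"bob": {"own", "read", "write"}, "alice": {"read"}},
    "file2": {"alice": {"own", "read", "write"}, "bob": {"read", "write"}},
}

def check(subject, right, obj):
    """An access is allowed if the entry for (object, subject) contains the right."""
    return right in access_matrix.get(obj, {}).get(subject, set())

print(check("bob", "write", "file1"))    # True
print(check("alice", "write", "file1"))  # False
```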

  5. Fundamental Mechanisms: Access Matrix • Access Control Lists • ACL for each object. • Lists all the subjects and their rights. • Capabilities • Capability list for each subject. • Contains all the objects and the rights of the subject.
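
The same authorization state can be stored per object (ACLs) or per subject (capability lists). A sketch, reusing the illustrative names from above; `to_capabilities` is just a helper for this example.

```python
# ACLs: one entry per object, listing subjects and their rights.
acls = {
    "file1": {"bob": {"own", "read", "write"}, "alice": {"read"}},
    "file2": {"alice": {"own", "read", "write"}, "bob": {"read", "write"}},
}

def to_capabilities(acls):
    """Capability lists: one entry per subject, listing objects and the subject's rights."""
    caps = {}
    for obj, entries in acls.items():
        for subject, rights in entries.items():
            caps.setdefault(subject, {})[obj] = set(rights)
    return caps

print(to_capabilities(acls)["alice"])
# e.g. {'file1': {'read'}, 'file2': {'own', 'read', 'write'}}
```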

  6. Fundamental Mechanisms: Access Matrix • Authorization Relation • Database table with fields subject, access mode, object:

    Subject   Access Mode   Object
    Bob       Owner         File 1
    Bob       Read          File 1
    Bob       Write         File 1
    Alice     Read          File 1
    Alice     Owner         File 2
    Alice     Read          File 2
    Alice     Write         File 2
    Bob       Read          File 2
    Bob       Write         File 2
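
The relation can be queried directly. A small sketch, with the rows taken from the table above:

```python
# The authorization relation as a list of (subject, access mode, object) tuples.
authorization = [
    ("Bob", "Owner", "File 1"), ("Bob", "Read", "File 1"), ("Bob", "Write", "File 1"),
    ("Alice", "Read", "File 1"),
    ("Alice", "Owner", "File 2"), ("Alice", "Read", "File 2"), ("Alice", "Write", "File 2"),
    ("Bob", "Read", "File 2"), ("Bob", "Write", "File 2"),
]

def allowed(subject, mode, obj):
    """An access is allowed only if the corresponding row exists in the relation."""
    return (subject, mode, obj) in authorization

print(allowed("Alice", "Write", "File 1"))  # False: no such row
```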

  7. Fundamental Mechanisms: Intermediate Controls • Access matrix too storage intensive • Access matrices make it hard to change policies. • Mechanism 1: Groups • Ideally, all access privileges mediated through group membership. • Negative permissions implement exceptions

  8. Fundamental Mechanisms: Intermediate Controls • Protection Rings • Example: • Group processes and system resources into four categories • Operating System Kernel • Operating System • Utilities • User Processes • Access to an object is only granted to a subject at the same or a lower (more privileged) level. • Unix only has two levels. • Sometimes protection rings have hardware support.

  9. Fundamental Mechanisms: Security Classes • Each object has a Security Class (Security Label) • Denning: • An information flow policy consists of • Security Classes • “Can flow” relationship • Join operation • The join A ⊕ B combines the rights and restrictions of both. • US DoD Security Levels • Top Secret • Secret • Confidential • Unclassified

  10. Fundamental Mechanisms: Access Control Policies • Discretionary Access Control (DAC) • Specifies authorization solely based on object and subject identity. • Flexible and simple. • Difficult to control information flow. • (Classical) Mandatory Access Control (MAC) • Each user and object has a security level. • Security level reflects trust that user will not pass information to users with lower level clearance. • Access to an object based on security level.

  11. Fundamental Mechanisms: Access Control Policies • (Refined) Mandatory Access Control (MAC) • Security Levels and Compartments. • Example: • CRYPTO for cryptographic algorithms. • COMSEC for communication security. • Possible to have top secret clearance in CRYPTO and unclassified clearance in COMSEC • Discretionary policies typical in low security (academic) environments. • Mandatory policies typical in high security (military) environments. • Neither policy adequate for commercial systems.

  12. Fundamental Mechanisms: Access Control Policies • Role Based Access Control (RBAC) • Regulate users’ access to information based on the activities the users execute in the system. • A “role” is a set of actions and responsibilities associated with a particular working activity. • Access based on role, not identity of user.

  13. Fundamental Mechanisms: Access Control Policies • Role Based Access Control (RBAC) • User authorization is broken into two tasks: • Granting roles to users • Granting rights to roles • Roles can be hierarchical • Engineers inherit employee rights. • Users can log in with the least privilege needed for a particular set of tasks. • Roles make it easier to enforce separation of duties: “No single user can subvert the system by herself/himself.”
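
A minimal RBAC sketch in Python. The roles, permissions, and the role hierarchy are illustrative, not taken from any standard.

```python
# Minimal RBAC sketch with a role hierarchy. All names are illustrative.

role_permissions = {
    "employee": {"read_timesheet", "submit_timesheet"},
    "engineer": {"commit_code"},          # engineers also inherit employee rights
    "manager":  {"approve_timesheet"},
}

# Each role inherits the permissions of its parent roles.
role_parents = {"engineer": {"employee"}, "manager": {"employee"}, "employee": set()}

user_roles = {"alice": {"engineer"}, "bob": {"manager"}}

def effective_permissions(role):
    """Permissions granted to a role, including everything inherited from parents."""
    perms = set(role_permissions.get(role, set()))
    for parent in role_parents.get(role, set()):
        perms |= effective_permissions(parent)
    return perms

def user_may(user, permission):
    """Access is decided by the user's roles, not by the user's identity."""
    return any(permission in effective_permissions(r) for r in user_roles.get(user, set()))

print(user_may("alice", "submit_timesheet"))  # True, inherited via employee
print(user_may("bob", "commit_code"))         # False
```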

  14. Covert Channels • A mechanism to circumvent automatic confinement within a security perimeter. • Example: • A person with TOP SECRET clearance inadvertently runs a Trojan horse. • The Trojan horse has free access to files in the compartment. • The Trojan horse cannot write down to an unclassified file. • But: the Trojan horse can do things that are visible from the outside and thus send contents of TOP SECRET files through a covert channel. • T.H. either runs or waits; the system load will vary. A small-bandwidth channel. • T.H. can or cannot use shared resources: to send a 1 bit, T.H. fills up the printer queue; to send a 0 bit, it empties it.
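
For illustration only, here is a load-based covert channel simulated with two threads inside a single Python process. The slot length and threshold are assumptions that would need calibration; a real channel across a security boundary is much noisier and slower.

```python
# Illustration only: a load-based covert channel, simulated with two threads in
# one process. The sender modulates CPU load per time slot; the receiver infers
# each bit from how much work it manages to do in the same slot.
import threading, time

SLOT = 0.2                     # seconds per transmitted bit (assumed)
message = [1, 0, 1, 1, 0]

def sender():
    for bit in message:
        end = time.time() + SLOT
        if bit:                # bit 1: generate load for the whole slot
            while time.time() < end:
                pass
        else:                  # bit 0: stay idle
            time.sleep(SLOT)

def receiver(result):
    samples = []
    for _ in message:
        end, count = time.time() + SLOT, 0
        while time.time() < end:
            count += 1         # less work done => the sender was busy
        samples.append(count)
    threshold = (max(samples) + min(samples)) / 2
    result.extend(1 if c < threshold else 0 for c in samples)

received = []
t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver, args=(received,))
t1.start(); t2.start(); t1.join(); t2.join()
print(received)                # ideally equals message; the channel is noisy
```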

  15. UNIX Woes: SUID programs • Executables can have the set-user-ID (SUID) bit set; programs can also change identity with the setuid system call. • A SUID executable runs with the privileges of the file’s owner, not of the user who starts it. • Sendmail uses SUID root to implement email delivery. • Users can therefore cause programs to run as root while feeding them input they control. • Favorite targets of buffer overflow attacks.
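
A hedged sketch of the standard defense: a program that starts with root privileges (for instance via the SUID bit) performs its one privileged operation and then drops privileges before touching untrusted input. The UID/GID values below are assumptions; the calls only have an effect on a Unix-like system when started as root.

```python
# Sketch: permanently dropping root privileges before handling untrusted input.
# Requires a Unix-like OS; uid/gid 1000 is an assumed unprivileged account.
import os

def drop_privileges(uid, gid):
    """Give up root for good: clear groups, set the group, then the user ID."""
    os.setgroups([])   # drop supplementary groups (requires root)
    os.setgid(gid)     # must happen while we are still root
    os.setuid(uid)     # after this, root privileges are gone

if os.geteuid() == 0:                    # started with root privileges
    # ... do the one privileged operation here (e.g. bind a low port) ...
    drop_privileges(uid=1000, gid=1000)

# From here on, untrusted input is processed without root privileges.
print("effective uid:", os.geteuid())
```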

  16. Access Control: Details • Static access control matrix: • Easy to evaluate • Directly reflects the security policy • Can be implemented in a number of ways: • Access Control List • List of Rights • Database • Matrix • Useless in practice because subjects and objects are constantly created and destroyed. • Therefore: • Need an updatable access control matrix

  17. Access Control: Details • Transformation Procedures update the Access Control Matrix • Harrison, Ruzzo, Ullman, CACM 1976 • Create subject s • Create object o • Enter right into ACM[s,o] • Delete right from ACM[s,o] • Destroy subject s • Destroy object o

  18. Access Control: Details • Transformation Procedures update the Access Control Matrix • Harrison, Ruzzo, Ullman, CACM 1976 • The system uses these primitives to update the ACM • But not directly: it uses commands • Some commands are mono-operational • They involve only a single primitive • Most are more complex • Conditional commands
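
A minimal sketch of the six primitives and one conditional command built from them. The dictionary keyed by (subject, object) and the grant_read command are illustrative choices, not part of the HRU formalism itself.

```python
# Sketch of the HRU primitives over an access control matrix stored as a
# dictionary keyed by (subject, object). All names are illustrative.

acm, subjects, objects = {}, set(), set()

# The six primitive operations:
def create_subject(s): subjects.add(s)
def create_object(o): objects.add(o)
def destroy_subject(s): subjects.discard(s)
def destroy_object(o): objects.discard(o)
def enter_right(r, s, o): acm.setdefault((s, o), set()).add(r)
def delete_right(r, s, o): acm.get((s, o), set()).discard(r)

# A conditional, mono-operational command built from the primitives:
# the owner of o may grant another subject read access to o.
def grant_read(owner, s, o):
    if "own" in acm.get((owner, o), set()):   # condition
        enter_right("read", s, o)             # body: a single primitive

create_subject("alice"); create_subject("bob"); create_object("file1")
enter_right("own", "alice", "file1")
grant_read("alice", "bob", "file1")
print(acm[("bob", "file1")])   # {'read'}
```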

  19. Access Control: Details • Harrison, Ruzzo, Ullman, CACM 1976 • Two special rights: • Copy right / Grant right • Allows the possessor to grant rights to others, but only rights that they also possess • “Change Permissions” right in Windows • Own right • Allows the possessor to grant rights over an object to others • The UNIX chown command changes the owner of an object; chmod changes the permissions that others have over it.

  20. Access Control: Details • Principle of Attenuation of Privilege • A subject may not grant rights it does not possess to another subject

  21. Access Control: Details • General Question: • Given a system, how can we determine that it is secure? • Define secure:

  22. Access Control: Details • Definition (Leaking): • When ACM transformations can add a right to an entry of the ACM that does not already contain it, we say that the right has been leaked.

  23. Access Control: Details • ACM is in a given state. • Transformations alter the state. • Definition: • If a system in initial state S0 can never leak the right r, then it is called safe with respect to the right r. Otherwise, it is called unsafe.

  24. Access Control: Details • Results (Harrison, Ruzzo, Ullman) • There exists an algorithm that will determine whether a given mono-operational protection system with initial state S0 is safe with respect to a generic right r. • It is undecidable whether a given state of a given system is safe for a given generic right.

  25. Confidentiality Policies • Confidentiality policy, a.k.a. information flow policy • Prevents unauthorized disclosure of information

  26. Bell-LaPadula Model • Combines mandatory and discretionary access controls. • Mandatory access control supersedes discretionary access control. • Only models reads and writes.

  27. Bell-LaPadula Model I • Hierarchical Levels for Objects and Subjects: • Unclassified (UC) – Confidential (C) – Secret (S) – Top Secret (TS) • S can read O if and only if • level(O) ≤ level(S) and • S has discretionary read access to O. • [* property] S can write O if and only if • level(O) ≥ level(S) and • S has discretionary write access to O
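
A minimal Python sketch of these two rules with totally ordered levels; the subjects, objects, and the discretionary matrix are illustrative.

```python
# Sketch of the Bell-LaPadula Model I checks. All names are illustrative.

LEVELS = {"UC": 0, "C": 1, "S": 2, "TS": 3}

clearance = {"alice": "TS", "bob": "C"}          # subject levels
classification = {"report": "S"}                 # object levels
dac = {("alice", "report"): {"read", "write"},   # discretionary access matrix
       ("bob", "report"): {"read", "write"}}

def may_read(s, o):   # simple security property: no read-up
    return (LEVELS[classification[o]] <= LEVELS[clearance[s]]
            and "read" in dac.get((s, o), set()))

def may_write(s, o):  # * property: no write-down
    return (LEVELS[classification[o]] >= LEVELS[clearance[s]]
            and "write" in dac.get((s, o), set()))

print(may_read("alice", "report"), may_write("alice", "report"))  # True False
print(may_read("bob", "report"), may_write("bob", "report"))      # False True
```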

  28. Bell-LaPadula Model I • Example: • To read a secret file, you need to have top secret or secret clearance. • To write to a secret file, you cannot have top secret clearance. • Rationale: Someone with secret clearance is not allowed to write a file that will be given unclassified classification.

  29. Bell-LaPadula Model II • Expand model by introducing categories • Categories reflect “Need to know” • Example: ComSec, InfoSec

  30. Excursus: Lattices • Security levels do not need to be arranged in a total ordering • Lattices: a mathematical structure, based on a partial ordering, that is rich enough for this purpose

  31. Excursus: Lattices • Figure: a totally ordered set (left) vs. a lattice (right)

  32. Excursus: Lattices • A partial ordering ≤ on a set S is reflexive, transitive, and antisymmetric. • (S, ≤) is a total order if for any two elements a, b ∈ S we have • a ≤ b or b ≤ a. • A least upper bound u for a, b in a partially ordered set S has the properties • a ≤ u • b ≤ u • ∀ v ∈ S: [a ≤ v and b ≤ v] ⇒ u ≤ v.

  33. Excursus: Lattices • A greatest lower bound g for a, b in a partially ordered set S has the properties • g ≤ a • g ≤ b • ∀ v ∈ S: [v ≤ a and v ≤ b] ⇒ v ≤ g.

  34. Excursus: Lattices • A set with a partial ordering is a lattice if any two elements have a least upper bound (join) and a greatest lower bound (meet).
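
The security lattice used below combines a totally ordered set of levels with sets of compartments. A sketch, with illustrative compartment names: the ordering is componentwise, the least upper bound takes the maximum level and the union of compartments, the greatest lower bound the minimum level and the intersection.

```python
# Sketch of the standard security lattice over labels (level, compartments).
# Level and compartment names are examples only.

LEVELS = {"UC": 0, "C": 1, "S": 2, "TS": 3}

def leq(a, b):
    """Label a is dominated by label b (a <= b in the lattice)."""
    (la, ca), (lb, cb) = a, b
    return LEVELS[la] <= LEVELS[lb] and set(ca) <= set(cb)

def lub(a, b):
    """Least upper bound: maximum of the levels, union of the compartments."""
    (la, ca), (lb, cb) = a, b
    return (la if LEVELS[la] >= LEVELS[lb] else lb, set(ca) | set(cb))

def glb(a, b):
    """Greatest lower bound: minimum of the levels, intersection of the compartments."""
    (la, ca), (lb, cb) = a, b
    return (la if LEVELS[la] <= LEVELS[lb] else lb, set(ca) & set(cb))

x = ("S", {"CRYPTO"})
y = ("C", {"COMSEC"})
print(leq(x, y))   # False: the labels are incomparable
print(lub(x, y))   # ('S', {'CRYPTO', 'COMSEC'})
print(glb(x, y))   # ('C', set())
```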

  35. Bell-LaPadula Model II • Model consists of • Set of subjects S • Set of objects O • Set of access operations A = {read, execute, append, write} • Lattice of security levels • Set of security level assignments F.

  36. Bell-LaPadula Model II • An element of F is a triple of assignments: • the maximum security level of each subject • the current security level of each subject • the classification of each object • The current security level is smaller than or equal to the maximum security level.

  37. Bell-LaPadula Model II • Simple Security Property: • No read-up: a subject may read an object only if the subject’s security level dominates the object’s. • * Property: • For writes / appends: • The current security level of the writer must be no higher than the security level of the object • No write-down
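
A sketch of how these checks might look with labels of the form (level, compartments). The clearances and the memo object are made up; note that lowering the current level at the end is exactly the "downgrade the writer" workaround discussed on the next slide.

```python
# Sketch of the Model II checks with labels (level, set of compartments).
# All clearances, compartments, and objects below are illustrative.

LEVELS = {"UC": 0, "C": 1, "S": 2, "TS": 3}

def dominates(a, b):
    """Label a dominates label b: level is at least as high, compartments are a superset."""
    return LEVELS[a[0]] >= LEVELS[b[0]] and set(a[1]) >= set(b[1])

maximum = {"alice": ("TS", {"CRYPTO", "COMSEC"})}   # maximum security level (clearance)
current = {"alice": ("TS", {"CRYPTO", "COMSEC"})}   # current security level
label   = {"memo":  ("S",  {"CRYPTO"})}             # object classification

def may_read(s, o):   # simple security property: no read-up
    return dominates(maximum[s], label[o])

def may_write(s, o):  # * property: the object must dominate the writer's current level
    return dominates(label[o], current[s])

print(may_read("alice", "memo"))   # True
print(may_write("alice", "memo"))  # False: that would be a write-down

current["alice"] = ("S", {"CRYPTO"})  # downgrade the current level...
print(may_write("alice", "memo"))     # ...and the write becomes legal
```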

  38. Bell-LaPadula Model II • The definition does not allow high-level subjects to send information to low-level subjects. In this case, either: • Temporarily downgrade the writer (lower its current security level), or • Identify a set of subjects (a.k.a. trusted subjects) that are permitted to violate the * property.

  39. Bell-LaPadula Model II • Discretionary Security Policy • An access is only allowed if it is allowed by the discretionary access matrix. • Basic Security Theorem: • If all state transitions in a system are secure and if the initial state is secure, then all states of the system are secure.

  40. Bell-LaPadula Model II • Limitations: • BLP can become meaningless if there are state transitions that allow changes of access rights. • BLP only deals with confidentiality • BLP does not address management of access control. • (See Harrison-Ruzzo-Ullman model) • BLP does not prevent covert channels.

  41. Chinese Wall • Chinese Wall model (Brewer & Nash) • Models access rules in a consultancy business • Analysts should not have conflicts of interest: • Alice first helps Client 1, gaining inside knowledge about a market. • Alice then helps Client 2, a competitor, using the knowledge gained from helping Client 1.

  42. Chinese Wall • The set of subjects S are the consultants • The set of companies is C • The set of objects O consists of items of information, each concerning a single company • Conflict of interest classes indicate which companies are in competition • The security label of an object lists the competitors of the company it concerns

  43. Chinese Wall • Sanitizing • Remove from an object all information that could be used to identify the company it concerns.

  44. Chinese Wall • Chinese Wall rules: • Read access is granted only if: • The object belongs to a company dataset already held by the user, • or the object belongs to an entirely different conflict of interest class. • Write access is granted only if: • No other object can be read which is in a different company dataset and contains unsanitized information.
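
A minimal sketch of the read rule only. The company names, conflict classes, and the history structure (which company datasets a consultant has already accessed) are illustrative; the write rule is omitted here.

```python
# Sketch of the Chinese Wall (Brewer-Nash) read rule. All names are illustrative.

conflict_class = {"OilA": "oil", "OilB": "oil", "BankX": "banking"}
history = {"alice": {"OilA"}}          # Alice already holds OilA's dataset

def may_read(consultant, company):
    """Allowed if the dataset is already held, or no held dataset is in the same class."""
    held = history.get(consultant, set())
    same_dataset = company in held
    different_class = all(conflict_class[company] != conflict_class[c] for c in held)
    return same_dataset or different_class

def read(consultant, company):
    if may_read(consultant, company):
        history.setdefault(consultant, set()).add(company)
        return True
    return False

print(read("alice", "OilB"))   # False: conflicts with OilA
print(read("alice", "BankX"))  # True: different conflict-of-interest class
```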

  45. Security Kernel • Orange Book • Trusted Computer System Evaluation Criteria (TCSEC) • A yardstick for users to assess the degree of trust that can be placed in a computer system • Guidance for manufacturers of computer security systems • A basis for specifying security requirements when acquiring a computer security system

  46. Security Kernel • Orange Book Security Divisions: • D – Minimal protection • C1 – Discretionary Security Protection • C2 – Controlled Access Protection • B1 – Labeled Security Protection • B2 – Structured Protection • B3 – Security Domains • A1 – Verified Design

  47. Security Kernel • Computer systems are designed in layers. • A security mechanism at one layer can be subverted by an attack at a lower layer. • Implementing security mechanisms at lower layers can lead to less performance overhead.

  48. Security Kernel • Orange Book definitions: • REFERENCE MONITOR: An access control concept that refers to an abstract machine that mediates all accesses to objects by subjects. • SECURITY KERNEL: The hardware, firmware, and software elements of a trusted computing base that implement the reference monitor concept. • TRUSTED COMPUTING BASE: The totality of protection mechanisms within a computer system.

  49. Security Kernel • Users must not be able to modify the operating system. • Users should be able to invoke the OS. • Users should not be able to misuse the OS. • Tools: • status information • controlled invocation = restricted privilege

  50. Security Kernel • The OS needs to distinguish between operations on behalf of the OS and operations on behalf of a user. • Motorola 68000: One status bit distinguishes between user mode and kernel mode. • Intel 80386: Two status bits give 4 modes (privilege rings). • Example: How to allow processes to switch between root and user level? • SUID, …
