
Security Models and Designing a Trusted Operating System


Presentation Transcript


  1. Security Models and Designing a Trusted Operating System ManikandaN subbu

  2. Trusted Operating System • Functional correctness. • Enforcement of integrity. • Limited privilege. • Appropriate confidence level.

  3. Secure vs. Trusted OS

  4. Security Policies • A security policy is a statement of the security we expect the system to enforce.

  5. Military Security Policy • Based on protecting classified information. • Information access is limited by the need-to-know rule. • Each piece of classified information may be associated with one or more projects, called compartments, describing the subject matter of the information. • In this model, each piece of information is labeled with the pair <rank, compartments>.

  6. Military Security Policy - 2 • Introduce a relation ≤, called dominance, on the sets of sensitive objects and subjects. For a subject s and an object o, • s ≤ o if and only if rank(s) ≤ rank(o) and compartment(s) is a subset of compartment(o). • A subject can read an object only if • the clearance level of the subject is at least as high as that of the information, and • the subject has a need to know about all compartments for which the information is classified.
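
A minimal sketch of the dominance check in Python, assuming a simple <rank, compartments> label representation (the Label class, rank names, and example labels are illustrative, not part of the original model):

```python
# Illustrative labels: a numeric rank plus a set of compartments (names assumed).
RANKS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

class Label:
    """A <rank, compartments> sensitivity label."""
    def __init__(self, rank, compartments):
        self.rank = RANKS[rank]
        self.compartments = frozenset(compartments)

def dominated_by(s, o):
    """s <= o: rank(s) <= rank(o) and compartment(s) is a subset of compartment(o)."""
    return s.rank <= o.rank and s.compartments <= o.compartments

# A subject may read an object only if the object's label is dominated by
# the subject's clearance, i.e. label(object) <= clearance(subject).
clearance = Label("secret", {"crypto", "nuclear"})
document  = Label("confidential", {"crypto"})
print(dominated_by(document, clearance))   # True: the subject may read the document
```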

  7. Commercial Security Policy • Data is classified as public, proprietary, or internal. • Not all employees need to know about new products! • No security clearances or dominance functions.

  8. Models of Security • Multilevel Security • We want to build a model that represents a range of sensitivities and reflects the need to separate subjects rigorously from objects to which they should not have access. • What is a lattice? • A lattice is a mathematical structure of elements organized by an ordering relation ≤ in which every pair of elements has a least upper bound and a greatest lower bound. • The dominance relation ≤ defined in the military model is the ordering relation for the lattice. • The relation ≤ is transitive and antisymmetric. • Transitive: if a ≤ b and b ≤ c, then a ≤ c. • Antisymmetric: if a ≤ b and b ≤ a, then a = b.
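
A sketch of why the military labels form a lattice (the (rank, compartments) tuple representation is assumed for illustration): any two labels have a least upper bound, which is also the natural label for data derived from both sources, and a greatest lower bound.

```python
# Labels as (rank, compartments) pairs; ranks ordered 0 (lowest) .. 3 (highest).
def join(a, b):
    """Least upper bound: the lowest label that dominates both a and b."""
    rank_a, comps_a = a
    rank_b, comps_b = b
    return (max(rank_a, rank_b), comps_a | comps_b)

def meet(a, b):
    """Greatest lower bound: the highest label dominated by both a and b."""
    rank_a, comps_a = a
    rank_b, comps_b = b
    return (min(rank_a, rank_b), comps_a & comps_b)

print(join((2, frozenset({"crypto"})), (1, frozenset({"nuclear"}))))
# -> (2, frozenset({'crypto', 'nuclear'}))
```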

  9. Bell-LaPadula Confidentiality Model • Consider a security system with the following properties. • The system covers a set of subjects S and a set of objects O. • Each subject s in S and each object o in O has a fixed security class C(s) and C(o) (denoting clearance and classification level). • The security classes are ordered by a relation ≤. • Simple Security Property. A subject s may have read access to an object o only if C(o) ≤ C(s). • *-Property (called the "star property"). A subject s who has read access to an object o may have write access to an object p only if C(o) ≤ C(p).
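
A minimal sketch of the two Bell-LaPadula rules, assuming numeric security classes where a larger number means a more sensitive class (the function names and class values are illustrative):

```python
# Bell-LaPadula checks over numeric security classes C (larger = more sensitive).
def may_read(subject_class, object_class):
    """Simple security property ("no read up"): allow only if C(o) <= C(s)."""
    return object_class <= subject_class

def may_write(read_class, target_class):
    """*-property ("no write down"): data read at C(o) may flow only to p with C(o) <= C(p)."""
    return read_class <= target_class

SECRET, CONFIDENTIAL = 2, 1
print(may_read(SECRET, CONFIDENTIAL))    # True: a Secret subject may read Confidential data
print(may_write(SECRET, CONFIDENTIAL))   # False: data read at Secret must not be written down
```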

  10. Bell-LaPadula Confidentiality Model - 2 • The flow of information is generally horizontal (to and from the same level) and upward (from lower levels to higher). • A downward flow is acceptable only if the highly cleared subject does not pass any high-sensitivity data to the lower-sensitivity object. • For computing systems, downward flow of information is difficult because a computer program cannot readily distinguish between having read a piece of information and having read a piece of information that influenced what was later written.

  11. Biba Integrity Model • Biba defines "integrity levels," which are analogous to the sensitivity levels of the Bell-LaPadula model. • Subjects and objects are ordered by an integrity classification scheme, denoted I(s) and I(o). • Simple Integrity Property. • Subject s can modify (have write access to) object o only if I(s) ≥ I(o). • Integrity *-Property. • If subject s has read access to object o with integrity level I(o), s can have write access to object p only if I(o) ≥ I(p).
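
The Biba rules are the dual of the Bell-LaPadula rules; a sketch with illustrative numeric integrity levels I (larger = more trustworthy):

```python
# Biba checks over numeric integrity levels I (larger = higher integrity).
def may_modify(subject_level, object_level):
    """Simple integrity property: allow a write only if I(s) >= I(o)."""
    return subject_level >= object_level

def may_propagate(read_level, target_level):
    """Integrity *-property: data read at I(o) may be written to p only if I(o) >= I(p)."""
    return read_level >= target_level

SYSTEM, USER = 2, 1
print(may_modify(USER, SYSTEM))      # False: a low-integrity subject cannot modify system data
print(may_propagate(USER, SYSTEM))   # False: low-integrity input must not flow into system files
```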

  12. Design Elements • First, an operating system controls the interaction between subjects and objects, so security must be considered in every aspect of its design. • Second, because security appears in every part of an operating system, its design and implementation cannot be left fuzzy or vague until the rest of the system is working and being tested. • Least privilege. Each user and each program should operate using the fewest privileges possible. • Economy of mechanism. The design of the protection system should be small, simple, and straightforward. • Open design. An open design is available for extensive public scrutiny, thereby providing independent confirmation of the design's security.

  13. Design Elements • Complete mediation. Every access attempt must be checked. • Permission based. The default condition should be denial of access. A conservative designer identifies the items that should be accessible, rather than those that should not. • Separation of privilege. Ideally, access to objects should depend on more than one condition, such as user authentication plus a cryptographic key. In this way, someone who defeats one protection system will not have complete access. • Least common mechanism. Shared objects provide potential channels for information flow. Systems employing physical or logical separation reduce the risk from sharing. • Ease of use. If a protection mechanism is easy to use, it is unlikely to be avoided.

  14. Security features of Ordinary OS

  15. Security features of Ordinary OS – 2 • User authentication. • Memory protection. • File and I/O device access control. • Allocation and access control to general objects. • Enforced sharing. • Guaranteed fair service. • Interprocess communication and synchronization. • Protected operating system protection data.

  16. Security features of Trusted OS

  17. Security features of Trusted OS - 2 • Identification and Authentication. Trusted operating systems require secure identification of individuals, and each individual must be uniquely identified. • Mandatory and Discretionary Access Control. Mandatory access control (MAC) means that access control policy decisions are made beyond the control of the individual owner of an object. Discretionary access control (DAC) leaves a certain amount of access control to the discretion of the object's owner or to anyone else who is authorized to control the object's access. • Object Reuse Protection. To prevent object reuse leakage, operating systems clear (that is, overwrite) all space to be reassigned before allowing the next user to have access to it. • Complete Mediation. All accesses must be controlled. • Trusted Path. Users want an unmistakable communication channel, called a trusted path, to ensure that they are supplying protected information only to a legitimate receiver.
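
As an illustration of the object reuse protection point above, here is a minimal sketch (not any particular operating system's implementation) of an allocator that zeroes a storage block before reassigning it, so residual data cannot leak to the next user:

```python
# Sketch: clear (overwrite) storage before reassigning it to the next user.
class BlockPool:
    def __init__(self, block_count, block_size):
        self.blocks = [bytearray(block_size) for _ in range(block_count)]
        self.free = list(range(block_count))

    def allocate(self):
        index = self.free.pop()
        self.blocks[index][:] = bytes(len(self.blocks[index]))  # zero before reuse
        return index

    def release(self, index):
        self.free.append(index)

pool = BlockPool(block_count=4, block_size=16)
i = pool.allocate()
pool.blocks[i][:6] = b"secret"
pool.release(i)
j = pool.allocate()              # gets the same block back, but cleared
print(bytes(pool.blocks[j]))     # 16 zero bytes -- no leftover 'secret'
```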

  18. Security features of Trusted OS - 3 • Accountability and Audit • Accountability usually entails maintaining a log of security-relevant events that have occurred, listing each event and the person responsible for the addition, deletion, or change. This audit log must obviously be protected from outsiders, and every security-relevant event must be recorded. • Audit Log Reduction • Intrusion Detection • Intrusion detection software builds patterns of normal system usage, triggering an alarm any time the usage seems abnormal.
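
A toy sketch of the anomaly-detection idea behind such intrusion detection (the metric, history, and threshold are purely illustrative): learn a baseline of normal usage, then raise an alarm when current activity deviates far from it.

```python
# Toy anomaly detector: alarm when usage falls far outside the learned baseline.
from statistics import mean, stdev

def build_baseline(samples):
    """Learn normal behaviour from a history of, e.g., failed logins per hour."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Alarm if the observation is more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * max(sigma, 1e-9)

history = [2, 1, 3, 2, 2, 4, 1, 3]        # failed logins per hour under normal use
baseline = build_baseline(history)
print(is_anomalous(2, baseline))           # False: within normal range
print(is_anomalous(40, baseline))          # True: trigger an alarm
```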

  19. Kernelized Design • The security kernel provides the security interfaces among the hardware, operating system, and other parts of the computing system. • Typically, the operating system is designed so that the security kernel is contained within the operating system kernel • Coverage. Every access to a protected object must pass through the security kernel. • Separation. Isolating security mechanisms both from the rest of the operating system and from the user space makes it easier to protect those mechanisms from penetration by the operating system or the users. • Unity. All security functions are performed by a single set of code, so it is easier to trace the cause of any problems that arise with these functions.

  20. Kernelized Design • Modifiability. Changes to the security mechanisms are easier to make and easier to test. • Compactness. Because it performs only security functions, the security kernel is likely to be relatively small. • Verifiability. Being relatively small, the security kernel can be analyzed rigorously.

  21. Reference Monitor

  22. Reference Monitor - 2 • Tamperproof. Impossible to weaken or disable • Unbypassable. Always invoked when access to any object is required • Analyzable. Small enough to be subjected to analysis and testing, the completeness of which can be ensured
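
A sketch of the reference monitor idea (the policy format, names, and audit log are illustrative, not a real kernel interface): a single small mediation function through which every access request passes, which makes it feasible to analyze and to log every decision.

```python
# Illustrative reference monitor: one small choke point mediating every access request.
POLICY = {
    ("alice", "payroll.db", "read"),
    ("alice", "payroll.db", "write"),
    ("bob",   "payroll.db", "read"),
}
AUDIT_LOG = []   # every decision is recorded for accountability

def reference_monitor(subject, obj, access):
    """Allow only requests explicitly granted by the policy (default is denial)."""
    allowed = (subject, obj, access) in POLICY
    AUDIT_LOG.append((subject, obj, access, allowed))
    return allowed

print(reference_monitor("bob", "payroll.db", "write"))   # False, and the attempt is logged
print(AUDIT_LOG[-1])                                     # ('bob', 'payroll.db', 'write', False)
```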

  23. Trusted Computing Base (TCB) • The trusted computing base, or TCB, is the name we give to everything in the trusted operating system necessary to enforce the security policy. • Process activation. • Execution domain switching. Processes running in one domain often invoke processes in other domains to obtain more sensitive data or services. • Memory protection. Because each domain includes code and data stored in memory, the TCB must monitor memory references to ensure secrecy and integrity for each domain. • I/O operation.

  24. Separation/Isolation • Physical separation: two different processes use two different hardware facilities. • Temporal separation occurs when different processes are run at different times. • Encryption is used for cryptographic separation. • Logical separation, also called isolation, is provided when a process such as a reference monitor separates one user's objects from those of another user. (Figure: memory divided into the OS space and separate spaces for User 1, User 2, …, User n.)

  25. Virtualization • The operating system emulates or simulates a collection of a computer system's resources. • A virtual machine is a collection of real or simulated hardware facilities

  26. Layered Design

  27. Secure File System • Data must be kept secret • Data integrity must be preserved • Data must be kept available

  28. Finally !!! Testing

  29. Any Questions ???

  30. Thank You!!!
