
Computer Security Security models – an overview

Presentation Transcript


  1. Computer Security: Security models – an overview

  2. State Machine Models Automata (= state machines) are a popular way of modeling many aspects of computing systems. The essential features of these are the concepts of: • State • State transition
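The state/transition view can be made concrete in a few lines of Python. This is a generic sketch; the machine, its states and its inputs are illustrative, not taken from the slides.

```python
# Generic automaton sketch: a current state plus a transition function
# delta mapping (state, input) -> next state.

def make_machine(initial, delta):
    return {"state": initial, "delta": delta}

def step(machine, symbol):
    # one state transition on the given input symbol
    machine["state"] = machine["delta"][(machine["state"], symbol)]
    return machine["state"]

# Toy example: a door modeled entirely by its state.
door = make_machine("closed", {
    ("closed", "open"): "open",
    ("open", "close"): "closed",
})
step(door, "open")   # state transition: closed -> open
step(door, "close")  # state transition: open -> closed
```

Security models such as BLP reuse exactly this structure, with richer states and with transitions restricted by security properties.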

  3. Bell-LaPadula (BLP) Model BLP Structure The BLP model combines: • access permission matrices, for access control, • a security lattice, for security levels, • an automaton, for access operations. Security policies are reduced to relations in the BLP structure.

  4. BLP Model • A set of subjects S • A set of objects O • A set of access operations A = {execute, read, append, write} • A set L of security levels, with a partial ordering ≤

  5. BLP Model We want to use the state of the system to check its security. The state set is B × M × F, where: • B = P(S × O × A) is the set of current access sets b, • M is the set of permission matrices M, • F ⊆ L_S × L_S × L_O is the set of security level assignments (L_S denotes the functions S → L, and L_O the functions O → L), with elements f = (f_S, f_C, f_O), where f_S gives the maximal security level of each subject, f_C the current security level of each subject, and f_O the classification of each object

  6. BLP Model Security policies: a state (b, M, f) must satisfy 1. Simple security property (ss-property): for each access (s, o, a) in b, with access operation a = read or write, the security level of s must dominate the classification of o, i.e. f_S(s) ≥ f_O(o). This is a no read/write-up security policy

  7. BLP Model 2. Star property (*-property): for each access (s, o, a) in b, with access operation a = append or write, the current security level of s is dominated by the classification of the object o, i.e. f_C(s) ≤ f_O(o). This is a no append/write-down policy. Also, if there is an (s, o, a) in b with a = append or write, then f_O(o') ≤ f_O(o) must hold for every access (s, o', a') in b with a' = read or write

  8. BLP Model 3. Discretionary security property (ds-property): for each access (s, o, a) in b, we must have a ∈ M_{s,o}, i.e. the access must be permitted by the current permission matrix M.

  9. BLP Model The *-property implies that it is not possible to send messages to low-level subjects. There are two ways to remedy this. • Temporarily downgrade a high-level subject; this is why we introduced the current security level f_C. • Identify a set of subjects that are permitted to violate the *-property. These are called trusted subjects.

  10. BLP Security • A state v = (b,M,f) is called secure if all three security properties are satisfied. • A transition from state v1 = (b1,M1,f1) to state v2 = (b2,M2,f2) is secure if v2 is secure whenever v1 is.
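As a concreteness check, the three properties and the secure-state definition can be sketched in Python. The encoding here is my own, not from the slides: security levels are integers with ≥ as dominance, f is a triple of dicts (f_S, f_C, f_O), b is a set of (subject, object, operation) triples, and M is a dict mapping (subject, object) to a set of permitted operations.

```python
OBSERVE = {"read", "write"}     # operations that observe an object
ALTER = {"append", "write"}     # operations that alter an object

def ss_property(b, f):
    # no read-up: subject's maximal level dominates the object's classification
    fS, fC, fO = f
    return all(fS[s] >= fO[o] for (s, o, a) in b if a in OBSERVE)

def star_property(b, f):
    # no write-down: current level below the altered object's classification,
    # and everything the subject observes sits at or below that object
    fS, fC, fO = f
    for (s, o, a) in b:
        if a in ALTER:
            if fC[s] > fO[o]:
                return False
            if any(fO[o2] > fO[o] for (s2, o2, a2) in b
                   if s2 == s and a2 in OBSERVE):
                return False
    return True

def ds_property(b, M):
    # every current access is permitted by the permission matrix
    return all(a in M.get((s, o), set()) for (s, o, a) in b)

def is_secure(b, M, f):
    return ss_property(b, f) and star_property(b, f) and ds_property(b, M)
```

`is_secure(b, M, f)` then decides whether a state satisfies all three properties, which is exactly what a secure transition must preserve.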

  11. BLP Security Theorem If all state transitions are secure and if the initial state is secure then every subsequent state is secure, no matter which inputs occur.

  12. BLP Security Proof – informal A formal proof would proceed by induction on the length of the input sequences. It would build on the fact that security is preserved by state transitions. Remark This theorem means that to check security you only need to check that state transitions preserve security.

  13. BLP Security Proof, the ss-property A state transition from (b1, M1, f1) to (b2, M2, f2) preserves the ss-property if and only if: • each access (s, o, a) in b2 \ b1 satisfies the ss-property with respect to f2, and • each access (s, o, a) in b1 that does not satisfy the ss-property with respect to f2 is not in b2.

  14. BLP Security Proof, the * & ds-property Preservation of the *-property and the ds-property can be described in a similar way.

  15. BLP Security McLean defined a BLP system (System Z) which • downgrades all subjects to the lowest level, • downgrades all objects to the lowest level, • enters all access rights in all positions of the access control matrix M. Every state of this system is secure in the BLP sense, yet the system offers no real protection.

  16. BLP Security A BLP system is only as good as its state transitions. When a security system is designed within the framework of a model, it is important that the implementation of the primitives of the model correctly captures the security requirements of the system.

  17. Limitations of BLP The BLP model, • only deals with confidentiality, not integrity, • does not address management of access control, • contains covert channels.

  18. Limitations of BLP • These are features of BLP, and should not be regarded as flaws. • Limiting the goals of a model makes it easier to deal with security issues. • BLP has no policies for the modification of access rights. BLP was originally intended for systems with no changes in the security levels.

  19. Limitations of BLP Covert channels are information channels that are not controlled by the security mechanism of the system. Information can flow (leak) from a high security level to a low security level as follows: 1. A low-level subject creates an object dummy.obj at its own level. 2. Its high-level accomplice either upgrades it to a high level, or does not. 3. Later the low-level subject tries to read dummy.obj. If it can, the covert bit is 1; otherwise it is 0.
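The three steps can be simulated directly. The sketch below is a toy model of the storage channel, not a real file system; the function name and the level encoding are mine.

```python
# One covert bit per round: the high-level accomplice upgrades dummy.obj
# to signal 0 and leaves it at the low level to signal 1; the low-level
# subject decodes the bit from the success of its own read attempt.

def run_covert_channel(secret_bits):
    received = []
    for bit in secret_bits:
        level = "low"          # 1. low subject creates dummy.obj at its level
        if bit == 0:
            level = "high"     # 2. accomplice upgrades it (or not)
        can_read = (level == "low")   # 3. low subject tries to read
        received.append(1 if can_read else 0)
    return received

run_covert_channel([1, 0, 1, 1, 0])   # -> [1, 0, 1, 1, 0]
```

The point is that no BLP rule is violated at any step, yet one bit per round leaks from high to low.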

  20. Limitations of BLP Telling a subject that a certain operation is or is not permitted constitutes information flow.

  21. The Harrison-Ruzzo-Ullman (HRU) Model The BLP model is not dynamic: it does not allow for the creation or deletion of subjects and objects, or for changing access rights. The HRU model defines authorization systems.

  22. HRU Model • A set of subjects S • A set of objects O • A set of access rights R • An access matrix M = (M_{s,o}), with one row per subject s, one column per object o, and entries M_{s,o} ⊆ R

  23. HRU Model We also have six primitive operations: • enter r into M_{s,o} • delete r from M_{s,o} • create subject s • delete subject s • create object o • delete object o

  24. HRU Model Commands in HRU are of the type: command c(x1, …, xk) if r1 ∈ M_{s1,o1} and … and rm ∈ M_{sm,om} then op1; …; opn end, where the parameters xi are subjects or objects and the opj are primitive operations.

  25. HRU Model Basic operations are of the type illustrated by the following command, in which the owner s of file f grants read access to s': command grant_read(s, s', f) if own ∈ M_{s,f} then enter read into M_{s',f} end
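In Python, with the access matrix encoded as a dict from (subject, object) pairs to sets of rights (an encoding choice of mine), the grant_read command looks like:

```python
# Two of the six HRU primitive operations on the access matrix M.
def enter(M, r, s, o):
    M.setdefault((s, o), set()).add(r)

def delete(M, r, s, o):
    M.get((s, o), set()).discard(r)

# command grant_read(s, s', f):
#   if own in M[s, f] then enter read into M[s', f] end
def grant_read(M, s, s2, f):
    if "own" in M.get((s, f), set()):
        enter(M, "read", s2, f)

M = {("alice", "report"): {"own"}}
grant_read(M, "alice", "bob", "report")   # alice owns report, so bob gets read
```

Note the shape of the command: a conjunction of right-membership conditions guarding a sequence of primitive operations — exactly the structure the safety results below reason about.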

  26. HRU Model – security Definitions • An access matrix M leaks the right r if there is a command c: M → M' that adds the right r in a position of M that did not previously contain r, that is, r ∉ M_{s,o} but r ∈ M'_{s,o} for some s and o. • An access matrix M is safe with respect to the right r if no sequence of commands can transform M into a state that leaks r. So, verifying HRU security reduces to verifying safety properties.

  27. HRU Model – security Theorems 1. Given an access matrix M and a right r, verifying the safety of M with respect to the right r is an undecidable problem. 2. Given a mono-operational authorization system, an access matrix M and a right r, verifying the safety of M with respect to the right r is decidable. Even if only two operations per command are allowed, we get undecidability. 3. The safety problem for authorization systems is decidable if the number of subjects is bounded.

  28. The Chinese Wall Model This models a consultancy business where analysts have to make sure that no conflicts arise when dealing with different clients (companies). Informally, a conflict arises when clients are direct competitors in the same market, or because of the ownership of companies.

  29. The Chinese Wall Model • A set of subjects S • A set of companies C • A set of objects O; the objects concerning the same company are called a company dataset. • The function y : O → C gives the company dataset for each object. • The function x : O → P(C) gives the conflict of interest class for each object.

  30. The Chinese Wall Model Conflicts of interest may also arise from objects that have been accessed in the past. Let N_{s,o} = true if subject s has had access to object o, and N_{s,o} = false if subject s has never had access to object o.

  31. The Chinese Wall Model ss-property: subject s is granted access to object o only if, for all objects o' with N_{s,o'} = true, either y(o) = y(o') or y(o) ∉ x(o'). That is, access is granted only if the object requested belongs to: • a company dataset already held by the subject (the analyst), or • an entirely different conflict of interest class.

  32. The Chinese Wall Model *-property: subject s is granted write access to object o only if s has no read access to any object o' with y(o) ≠ y(o') and o' unsanitised. That is, write access to an object is only granted if no other object can be read which is in a different company dataset and contains unsanitised information.
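The history function N and the ss-property check translate directly into code. In the sketch below, y maps objects to companies, x maps objects to their conflict-of-interest class, and N records past accesses per subject; the company and object names are invented for illustration.

```python
def cw_read_ok(s, o, y, x, N):
    """ss-property check: o must be in a dataset s already holds, or in
    a conflict-of-interest class disjoint from everything s has seen."""
    for o2 in N.get(s, set()):           # objects s has accessed in the past
        if y[o] != y[o2] and y[o] in x[o2]:
            return False                 # competitor of a company s already knows
    return True

y = {"a1": "BankA", "a2": "BankA", "b1": "BankB", "o1": "OilCo"}
conflict = {"BankA", "BankB"}            # the two banks compete
x = {o: (conflict if y[o] in conflict else {y[o]}) for o in y}
N = {"analyst": {"a1"}}                  # analyst already holds BankA data

cw_read_ok("analyst", "a2", y, x, N)     # True: same company dataset
cw_read_ok("analyst", "b1", y, x, N)     # False: competing bank
cw_read_ok("analyst", "o1", y, x, N)     # True: different conflict class
```

After each granted access, N would be updated — the wall around an analyst grows with the history of their accesses.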

  33. The Biba Model This model addresses integrity by using a state machine model in a similar way to BLP. Unlike BLP, there is no single high-level integrity policy.

  34. The Biba Model • The integrity policies guarantee that information only flows downwards. • In particular, “clean” high level entities cannot be corrupted by “dirty” low level entities.

  35. The Biba Model – static integrity • Simple integrity property: subject s can modify object o only if the integrity level of s dominates that of o, i.e. f_S(s) ≥ f_O(o). • Integrity *-property: if subject s can read object o, then s can modify object o' only if f_O(o') ≤ f_O(o). These properties prevent clean subjects and objects from being contaminated by dirty information.

  36. The Biba Model – dynamic integrity This uses an approach similar to the Chinese Wall model, in which the integrity of a subject is adjusted if the subject comes into contact with low-level information.

  37. The Biba Model – dynamic integrity • Subject low watermark property: subject s can read an object o at any integrity level; the new integrity level of s is inf(f_S(s), f_O(o)). • Object low watermark property: subject s can modify an object o at any integrity level; the new integrity level of o is inf(f_S(s), f_O(o)).
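With integrity levels encoded as integers (so inf becomes min — an encoding choice of mine, not from the slides), the two watermark rules become one-line updates:

```python
def subject_low_watermark(fS, fO, s, o):
    # s may read o at any level; s's integrity drops to the lower of the two
    fS[s] = min(fS[s], fO[o])

def object_low_watermark(fS, fO, s, o):
    # s may modify o at any level; o's integrity drops to the lower of the two
    fO[o] = min(fS[s], fO[o])

fS, fO = {"s": 3}, {"clean": 3, "dirty": 1}
subject_low_watermark(fS, fO, "s", "dirty")   # s read dirty data: fS["s"] 3 -> 1
object_low_watermark(fS, fO, "s", "clean")    # s then writes: fO["clean"] 3 -> 1
```

The example shows the contamination tracking at work: once a subject has touched low-integrity data, everything it subsequently writes is marked low as well.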

  38. The Clark-Wilson Model This model addresses the security requirements of commercial applications, where the emphasis is on data integrity. Integrity requirements are divided into: • internal consistency: properties of the internal state that can be enforced by the computer system, • external consistency: the relation of the internal state to the real world, enforced by means outside the system, e.g. auditing.

  39. The CW Model Integrity is enforced by: • well-formed transactions: data items can be manipulated only by a specific set of programs; users have access to programs rather than data items. • separation of duties: users have to collaborate to manipulate data, so they would have to collude to penetrate the system.

  40. The CW Model In the Clark-Wilson model, • Subjects must be identified and authenticated, • Objects can be manipulated only by a restricted set of programs, • Subjects can execute only a restricted set of programs, • A proper audit log has to be maintained, • The system must be certified to work properly.

  41. The CW Model In the Clark-Wilson model, • Data items governed by the security policy are called Constrained Data Items (CDIs), • Inputs are captured as Unconstrained Data Items (UDIs), • Conversion of UDIs to CDIs is a critical part of the system, which cannot be controlled solely by the security mechanisms in the system, • CDIs can be manipulated only by Transformation Procedures (TPs), • The integrity of a state is checked by Integrity Verification Procedures (IVPs).

  42. The CW Model Security procedures are defined by 5 certification rules: 1. IVPs must ensure that all CDIs are in a valid state when the IVP is run. 2. TPs must be certified to be valid, i.e. valid CDIs must always be transformed into valid CDIs. 3. The access rules must satisfy any separation of duties requirements. 4. All TPs must write to an append-only log. 5. Any TP that takes a UDI as input must either convert it into a CDI or reject it.

  43. The CW Model Security procedures are enforced by 4 enforcement rules: 1. The system must maintain and protect the list of entries (TPi: CDIa, CDIb, …) giving the CDIs that the TP is certified to access. 2. The system must maintain and protect the list of entries (UserID, TPi: CDIa, CDIb, …) specifying the TPs that users can execute. 3. The system must authenticate each user requesting to execute a TP. 4. Only a subject that may certify an access rule for a TP may modify the respective entry in the list. This subject must not have execute rights on that TP.
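The first three enforcement rules amount to two lookups before a TP is allowed to run. A toy sketch follows; the TP, CDI and user names are invented, and the two protected lists are plain Python structures standing in for system-maintained state.

```python
# Clark-Wilson enforcement sketch: a certified (TP -> CDIs) relation and
# an allowed (user, TP) relation; a TP runs only if both lists permit it.

certified = {"post_payment": {"ledger", "account"}}   # TP -> CDIs it may touch
allowed = {("clerk", "post_payment")}                 # (user, TP) entries

def run_tp(user, tp, cdis):
    if (user, tp) not in allowed:                # rule 2/3: user authorised?
        return "denied: user not authorised for TP"
    if not set(cdis) <= certified.get(tp, set()):  # rule 1: TP certified for CDIs?
        return "denied: TP not certified for these CDIs"
    return "executed"

run_tp("clerk", "post_payment", ["ledger"])     # "executed"
run_tp("intern", "post_payment", ["ledger"])    # denied: no (user, TP) entry
```

Rule 4 is what keeps this scheme honest: whoever certifies the lists must not be able to execute the TPs they certify.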

  44. Information-Flow Models In the BLP model information can flow from a high security level to a low security level through access operations. Informally, a state transition causes information flow from one object x to another object y if we learn more about x by observing y.

  45. Information-Flow Models If you already know x then no information can flow from x. Otherwise we have: • Explicit information flow: observing y after the assignment y := x tells you the value of x. • Implicit information flow: observing y after the conditional statement if x = 0 then y := 1 may tell you something about x even if y := 1 has not been executed (e.g. if y = 2 then x is not 0).

  46. Information-Flow Models A precise quantitative definition for information flow can be given in terms of Information Theory. • The information flow from x to y is measured by the equivocation (conditional entropy) H(x | y) of x, given y.
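The equivocation can be estimated from joint samples of (x, y). A small sketch using empirical probabilities (the sampling-based estimation is my framing; the definition of H(x | y) is standard):

```python
from collections import Counter
from math import log2

def conditional_entropy(pairs):
    # H(x | y) = -sum over (x, y) of p(x, y) * log2 p(x | y),
    # with all probabilities estimated from the sample counts.
    n = len(pairs)
    joint = Counter(pairs)
    marg_y = Counter(y for _, y in pairs)
    return -sum(c / n * log2((c / n) / (marg_y[y] / n))
                for (x, y), c in joint.items())

conditional_entropy([(0, 0), (1, 1), (0, 0), (1, 1)])   # 0 bits: y := x reveals x
conditional_entropy([(0, 0), (0, 1), (1, 0), (1, 1)])   # 1 bit: y independent of x
```

H(x | y) = 0 means observing y determines x completely (maximal flow), while H(x | y) = H(x) means observing y tells you nothing about x (no flow).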

  47. Information-Flow Models The components of the information flow model are: • A lattice (L, ≤) of security labels • A set of objects, each labeled with an element of L

  48. Information-Flow Models An IF system is secure if there is no illegal information flow. • Advantages: it covers all kinds of information flow. • Disadvantages: it is far more difficult to design such systems; e.g. checking whether a given system in the IF model is secure is an undecidable problem.

  49. Information-Flow Models One must also distinguish between • static enforcement and • dynamic enforcement of the information flow policies.

  50. Information-Flow Models An alternative to information flow models is given by • non-interference models. These provide a different formalism to describe the knowledge of subjects regarding the state of the system.
