CS363 - Week 8, Wednesday

Security presentation: Yuki Gage
Last time
  • What did we talk about last time?
  • Authentication
    • Challenge response
    • Biometrics
  • Started Bell-La Padula model
Bell-La Padula overview
  • Confidentiality access control system
  • Military-style classifications
  • Uses a linear clearance hierarchy
  • All information is on a need-to-know basis
  • It uses clearance (or sensitivity) levels as well as project-specific compartments
Security clearances
  • Both subjects (users) and objects (files) have security clearances
  • The clearances are arranged in a linear hierarchy (for example, Top Secret > Secret > Confidential > Unclassified)
Simple security condition
  • Let level(O) be the clearance level of object O
  • Let level(S) be the clearance level of subject S
  • The simple security condition states that S can read O if and only if level(O) ≤ level(S) and S has discretionary read access to O
  • In short, you can only read down
  • Example?
  • In a few slides, we will expand the simple security condition by refining the concept of a level
  • The *-property states that S can write O if and only if level(S) ≤ level(O) and S has discretionary write access to O
  • In short, you can only write up
  • Example?
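The two rules can be sketched in a few lines of Python (a minimal sketch: the integer levels and function names are illustrative, not part of any real Bell-La Padula implementation):

```python
# Model clearance levels as integers: a higher number means a higher clearance.
UNCLASSIFIED, CONFIDENTIAL, SECRET, TOP_SECRET = 0, 1, 2, 3

def can_read(level_s, level_o, has_discretionary_read=True):
    """Simple security condition: S reads O iff level(O) <= level(S)."""
    return has_discretionary_read and level_o <= level_s

def can_write(level_s, level_o, has_discretionary_write=True):
    """*-property: S writes O iff level(S) <= level(O)."""
    return has_discretionary_write and level_s <= level_o

# A SECRET subject can read down to CONFIDENTIAL but not up to TOP SECRET:
print(can_read(SECRET, CONFIDENTIAL))   # True
print(can_read(SECRET, TOP_SECRET))     # False
# ...and can write up to TOP SECRET but not down to CONFIDENTIAL:
print(can_write(SECRET, TOP_SECRET))    # True
print(can_write(SECRET, CONFIDENTIAL))  # False
```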
Basic security theorem
  • Assume your system starts in a secure initial state
  • Let T be all the possible state transformations
  • If every element in T preserves the simple security condition and the *-property, every reachable state is secure
  • This is sort of a stupid theorem, because we define “secure” to mean a system that preserves the security condition and the *-property
Adding compartments
  • We add compartments such as NUC = Non-Union Countries, EUR = Europe, and US = United States
  • The possible sets of compartments are:
    • {NUC}
    • {EUR}
    • {US}
    • {NUC, EUR}
    • {NUC, US}
    • {EUR, US}
    • {NUC, EUR, US}
  • Put a clearance level with a compartment set and you get a security level
  • The literature does not always agree on terminology
Romaine lattice
  • The subset relationship induces a lattice on the compartment sets, ordered by ⊆, with {NUC, EUR, US} at the top
Updated properties
  • Let L be a clearance level and C be a set of categories
  • Instead of talking about level(O) ≤ level(S), we say that security level (L, C) dominates security level (L', C') if and only if L' ≤ L and C' ⊆ C
  • Simple security now requires (LS, CS) to dominate (LO, CO) and S to have discretionary read access
  • The *-property now requires (LO, CO) to dominate (LS, CS) and S to have discretionary write access
  • Problems?
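The dominance relation is easy to model with sets (a sketch: the NUC/EUR/US compartments follow the earlier slide, the numeric levels are made up for illustration):

```python
# A security level is a (clearance, category-set) pair.
# (L, C) dominates (L', C') iff L' <= L and C' is a subset of C.

def dominates(a, b):
    (L, C), (Lp, Cp) = a, b
    return Lp <= L and Cp <= C  # <= on frozensets is the subset test

def can_read(subject_lvl, object_lvl):
    # Simple security: the subject's level must dominate the object's
    return dominates(subject_lvl, object_lvl)

def can_write(subject_lvl, object_lvl):
    # *-property: the object's level must dominate the subject's
    return dominates(object_lvl, subject_lvl)

s = (2, frozenset({"NUC", "EUR"}))
o1 = (1, frozenset({"NUC"}))
o2 = (1, frozenset({"US"}))
print(can_read(s, o1))  # True: lower level, compartments are a subset
print(can_read(s, o2))  # False: {US} is not a subset of {NUC, EUR}
```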
Clark-Wilson model
  • Commercial model that focuses on transactions
  • Just like a bank, we want certain conditions to hold before a transaction and the same conditions to hold after
  • If conditions hold in both cases, we call the system consistent
  • Example:
    • D is the amount of money deposited today
    • W is the amount of money withdrawn today
    • YB is the amount of money in accounts at the end of business yesterday
    • TB is the amount of money currently in all accounts
    • Thus,

D + YB – W = TB

Clark-Wilson definitions
  • Data that must follow integrity controls are called constrained data items, or CDIs
  • The rest of the data items are unconstrained data items, or UDIs
  • Integrity constraints (like the bank transaction rule) constrain the values of the CDIs
  • Two kinds of procedures:
    • Integrity verification procedures (IVPs) test that the CDIs conform to the integrity constraints
    • Transformation procedures (TPs) change the data in the system from one valid state to another
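The bank example can be sketched as code (illustrative names only; a real Clark-Wilson system would also enforce the certification rules and user/TP bindings described below):

```python
# The CDIs are the bank's daily totals; the IVP checks D + YB - W == TB;
# TPs move the state from one valid state to another.

state = {"D": 0, "W": 0, "YB": 100, "TB": 100}  # constrained data items

def ivp(s):
    """Integrity verification procedure: test the integrity constraint."""
    return s["D"] + s["YB"] - s["W"] == s["TB"]

def tp_deposit(s, amount):
    """Transformation procedure: takes a valid state to a valid state."""
    s["D"] += amount
    s["TB"] += amount

def tp_withdraw(s, amount):
    s["W"] += amount
    s["TB"] -= amount

assert ivp(state)          # start in a valid state
tp_deposit(state, 50)
tp_withdraw(state, 20)
print(ivp(state))          # True: the TPs preserved consistency
```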
Clark-Wilson rules
  • Clark-Wilson has a system of 9 rules designed to protect the integrity of the system
  • There are five certification rules that test to see if the system is in a valid state
  • There are four enforcement rules that give requirements for the system
Certification Rules 1 and 2
  • CR1: When any IVP is run, it must ensure that all CDIs are in a valid state
  • CR2: For some associated set of CDIs, a TP must transform those CDIs in a valid state into a (possibly different) valid state
    • By inference, a TP is only certified to work on a particular set of CDIs
Enforcement Rules 1 and 2
  • ER1: The system must maintain the certified relations, and must ensure that only TPs certified to run on a CDI manipulate that CDI
  • ER2: The system must associate a user with each TP and set of CDIs. The TP may access those CDIs on behalf of the associated user. If the user is not associated with a particular TP and CDI, then the TP cannot access that CDI on behalf of that user.
    • Thus, a user is only allowed to use certain TPs on certain CDIs
Certification Rule 3 and Enforcement Rule 3
  • CR3: The allowed relations must meet the requirements imposed by the principle of separation of duty
  • ER3: The system must authenticate each user attempting to execute a TP
    • In theory, this means that users don't necessarily have to log on if they are not going to interact with CDIs
Certification Rules 4 and 5
  • CR4: All TPs must append enough information to reconstruct the operation to an append-only CDI
    • Logging operations
  • CR5: Any TP that takes input as a UDI may perform only valid transformations, or no transformations, for all possible values of the UDI. The transformation either rejects the UDI or transforms it into a CDI
    • Gives a rule for bringing new information into the integrity system
Enforcement Rule 4
  • ER4: Only the certifier of a TP may change the list of entities associated with that TP. No certifier of a TP, or of any entity associated with that TP, may ever have execute permission with respect to that entity.
    • Separation of duties
Clark-Wilson summary
  • Designed close to real commercial situations
    • No rigid multilevel scheme
    • Enforces separation of duty
  • Certification and enforcement are separated
  • Enforcement in a system depends simply on following given rules
  • Certification of a system is difficult to determine
Chinese Wall overview
  • The Chinese Wall model respects both confidentiality and integrity
  • It's very important in business situations where there are conflict of interest issues
  • Real systems, including British law, have policies similar to the Chinese Wall model
  • Most discussions around the Chinese Wall model are couched in business terms
Chinese Wall definitions
  • We can imagine the Chinese Wall model as a policy controlling access in a database
  • The objects of the database are items of information relating to a company
  • A company dataset (CD) contains objects related to a single company
  • A conflict of interest (COI) class contains the datasets of companies in competition
  • Let COI(O) be the COI class containing object O
  • Let CD(O) be the CD that contains object O
  • We assume that each object belongs to exactly one COI
COI Examples

  • Bank COI class: Bank of America, Bank of the West
  • Gasoline Company COI class: Shell Oil, Standard Oil, Union '76
CW-Simple Security Condition
  • Let PR(S) be the set of objects that S has read
  • Subject S can read O if and only if any of the following is true:
    • There is an object O' such that S has accessed O' and CD(O') = CD(O)
    • For all objects O', O' ∈ PR(S) implies COI(O') ≠ COI(O)
    • O is a sanitized object
  • Give examples of objects that can and cannot be read
CW-*-Property
  • Subject S may write to an object O if and only if both of the following conditions hold:
    • The CW-simple security condition permits S to read O
    • For all unsanitized objects O', S can read O' implies CD(O') = CD(O)
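The CW-simple security condition can be sketched with the COI classes from the earlier example (a minimal sketch: objects are simplified down to their company datasets, and all names are just the slide's examples):

```python
# Conflict-of-interest class for each company dataset (from the COI example).
COI = {
    "BankOfAmerica": "Banks", "BankOfTheWest": "Banks",
    "ShellOil": "Gas", "StandardOil": "Gas", "Union76": "Gas",
}

def can_read(pr, cd, sanitized=False):
    """S may read from dataset cd iff the object is sanitized, or cd is a
    dataset S has already read from, or S has read nothing in cd's COI class."""
    if sanitized or cd in pr:
        return True
    return all(COI[seen] != COI[cd] for seen in pr)

pr = {"ShellOil"}                      # S has already read Shell Oil data
print(can_read(pr, "ShellOil"))        # True: same company dataset
print(can_read(pr, "StandardOil"))     # False: competing gasoline company
print(can_read(pr, "BankOfAmerica"))   # True: different COI class
```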
Biba overview
  • Integrity-based access control system
  • Uses integrity levels, similar to the clearance levels of Bell-LaPadula
  • Precisely the dual of the Bell-LaPadula Model
  • That is, we can only read up and write down
  • Note that integrity levels are intended only to indicate integrity, not confidentiality
  • Actually a measure of accuracy or reliability
Formal rules
  • S is the set of subjects and O is the set of objects
  • Integrity levels are ordered
  • i(s) and i(o) give the integrity levels of s and o, respectively
  • Rules:
    • s ∈ S can read o ∈ O if and only if i(s) ≤ i(o)
    • s ∈ S can write to o ∈ O if and only if i(o) ≤ i(s)
    • s1 ∈ S can execute s2 ∈ S if and only if i(s2) ≤ i(s1)
  • Rules 1 and 2 imply that, if both read and write are allowed, i(s) = i(o)
  • By adding the idea of integrity compartments and domination, we can get the full dual of the Bell-La Padula lattice framework
  • Real systems (for example the LOCUS operating system) usually have a command like run-untrusted
  • That way, users have to acknowledge that a risk is being taken
  • What if you used the same levels for integrity AND security, could you implement both Biba and Bell-La Padula on the same system?
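Biba's three rules, sketched with integer integrity levels (illustrative values; a higher number means more trustworthy). Note how each check is the dual of the corresponding Bell-La Padula rule:

```python
def can_read(i_s, i_o):
    return i_s <= i_o     # read up only: don't consume less reliable data

def can_write(i_s, i_o):
    return i_o <= i_s     # write down only: don't contaminate higher integrity

def can_execute(i_s1, i_s2):
    return i_s2 <= i_s1   # only invoke subjects at your level or below

HIGH, LOW = 2, 1
print(can_read(LOW, HIGH))   # True: reading up is allowed
print(can_write(LOW, HIGH))  # False: a low subject cannot write up
```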
Determining security
  • How do we know if something is secure?
  • We define our security policy using our access control matrix
  • We say that a right is leaked if it is added to an element of the access control matrix that doesn’t already have it
  • A system is secure if there is no way rights can be leaked
  • Is there an algorithm to determine if a system is secure?
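The leak test can be sketched against a toy access control matrix (the subjects, objects, and rights here are invented purely for illustration):

```python
# Access control matrix as a dict of dicts: acm[subject][object] = set of rights.
# A right "leaks" when a command adds it to a cell that did not already have it.
acm = {"alice": {"file1": {"r", "w"}}, "bob": {"file1": {"r"}}}

def enter(right, s, o):
    """Primitive command: enter right into a[s, o]; report whether it leaked."""
    leaked = right not in acm[s][o]
    acm[s][o].add(right)
    return leaked

print(enter("w", "bob", "file1"))  # True: "w" leaked into a[bob, file1]
print(enter("r", "bob", "file1"))  # False: bob already held "r"
```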
Mono-operational systems
  • In a mono-operational system, each command consists of a single primitive command:
    • Create subject s
    • Create object o
    • Enter r into a[s,o]
    • Delete r from a[s,o]
    • Destroy subject s
    • Destroy object o
  • In this system, we can check whether a right is leaked by examining command sequences of at most k commands
  • Delete and Destroy commands can be ignored
  • No more than one Create command is needed (in the case that there are no subjects)
  • Entering rights is the trouble
  • We start with set S0 of subjects and O0 of objects
  • With n generic rights, we might add all n rights to everything before we leak a right
  • Thus, the maximum length of the command sequence that leaks a right is k ≤ n(|S0|+1)(|O0|+1) + 1
  • If there are m different commands, how many different command sequences are possible?
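A quick back-of-the-envelope with the bound answers why this is still expensive (the specific counts of rights, subjects, and objects below are made up for illustration):

```python
# Bound from the slide: a leak in a mono-operational system, if one exists,
# occurs within k <= n(|S0| + 1)(|O0| + 1) + 1 commands. With m distinct
# commands there are up to m**k sequences, so brute force is exponential.

def max_sequence_length(n_rights, n_subjects, n_objects):
    return n_rights * (n_subjects + 1) * (n_objects + 1) + 1

k = max_sequence_length(n_rights=2, n_subjects=3, n_objects=4)
m = 6  # the six primitive commands listed above
print(k)       # 41, i.e., 2 * 4 * 5 + 1
print(m ** k)  # number of candidate sequences: astronomically large
```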
Turing machine
  • A Turing machine is a mathematical model for computation
  • It consists of a head, an infinitely long tape, a set of possible states, and an alphabet of characters that can be written on the tape
  • A list of rules says what the machine should write, whether the head should move left or right, and which state to enter next, given the current symbol and state
Turing machine example
  • 3 state, 2 symbol “busy beaver” Turing machine:
  • Starting state A
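The standard 3-state, 2-symbol busy beaver can be simulated directly; this machine halts after 14 steps, leaving six 1s on the tape:

```python
# (state, symbol) -> (symbol to write, head move, next state); "H" = halt.
RULES = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, +1, "H"),
    ("B", 0): (0, +1, "C"), ("B", 1): (1, +1, "B"),
    ("C", 0): (1, -1, "C"), ("C", 1): (1, -1, "A"),
}

tape, pos, state, steps = {}, 0, "A", 0  # dict models the infinite tape
while state != "H":
    sym = tape.get(pos, 0)               # blank cells read as 0
    write, move, state = RULES[(state, sym)]
    tape[pos] = write
    pos += move
    steps += 1

print(steps)               # 14
print(sum(tape.values()))  # 6 ones written
```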
Church-Turing thesis
  • If an algorithm exists, a Turing machine can perform that algorithm
  • In essence, a Turing machine is the most powerful model we have of computation
  • Power, in this sense, means the ability to compute some function, not the speed associated with its computation
Halting problem
  • Given a Turing machine and input x, does it reach the halt state?
  • It turns out that this problem is undecidable
  • That means there is no algorithm that can determine whether an arbitrary Turing machine will go into an infinite loop
  • Consequently, there is no algorithm that can take any program and check whether it goes into an infinite loop
Simulate a Turing machine
  • We can simulate a Turing machine using an access control matrix
  • We map the symbols, states and tape for the Turing machine onto the rights and cells of an access control matrix
  • Discovering whether or not the right leaks is equivalent to the Turing machine halting with a 1 or a 0
The bad news
  • Without heavy restrictions on the rules of an access control system, it is impossible to construct an algorithm that will determine whether a right leaks
  • Even for a mono-operational system, the problem might take an infeasible amount of time
  • But, we don’t give up!
    • There are still lots of ways to model security
    • Some of them offer more practical results
Next time…
  • Finish theoretical limitations
  • Trusted system design elements
  • Common OS features and flaws
  • OS assurance and evaluation
  • Taylor Ryan presents
  • Read Sections 5.4 and 5.5
  • Keep working on Project 2
  • Finish Assignment 3