Threat/Attack Model and Policy Specifications Discussion DARPA IA&S Joint PI Meeting



Presentation Transcript


  1. Threat/Attack Model and Policy Specifications Discussion
     DARPA IA&S Joint PI Meeting, 19 July 2000
     Carl Landwehr, Senior Fellow, Mitretek Systems, Inc.
     MS Z285, 7525 Colshire Dr., McLean VA 22102
     (703) 610-1576, Carl.Landwehr@mitretek.org

  2. Goals of discussion
     • Better understanding of the threat/attack models different projects are working with
     • Better understanding of what protection the current ITS projects will provide
       • individually
       • collectively
     • What is your attack model?
     • What policies can you enforce?

  3. How can we understand how our systems will behave under attack?
     • Build it and try it out
       • penetrate and patch
       • red teams
     • Abstract the properties we desire of the system and the kinds of attacks we think likely
     • Build models to represent system designs and see whether they have the desired properties
     • Build systems to match the abstractions
     [Diagram: system states S → S' under transition function f, abstracted to model
      states s → s' under f. Show: p(s) => p(f(s)). Want: P(S) => P(f(S)).]

  4. Unfortunately ...
     • The process isn't so tidy
     • We have to use pre-existing components to build the system
     • Those components may have properties we don't want
     • Or properties we don't know about (e.g., buffer overflows)

  5. Standard approach for an attacker: attack the assumptions of the system at some level of abstraction
     • Thompson compiler attack: attacks the assumption that reviewing source code is sufficient
     • Race condition / TOCTTOU: attacks the assumption that execution is linear
     • Find hidden software interfaces or hardware instructions: attacks the assumption that the full transition function is known
     • Replace some hardware: attacks the assumption that configuration and initialization are fixed
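The TOCTTOU (time-of-check-to-time-of-use) item above can be made concrete with a short sketch. The function names and the symlink policy below are illustrative, not from the talk; the point is that a check and the later use refer to different moments, so an attacker who swaps the file in between defeats the check.

```python
import os
import tempfile

def read_if_not_symlink_vulnerable(path):
    # TOCTTOU flaw: between the symlink check (time of check) and the
    # open (time of use), an attacker could replace `path` with a
    # symlink to a sensitive file. Execution is not "linear" here.
    if os.path.islink(path):          # time of check
        raise PermissionError("symlinks not allowed")
    with open(path) as f:             # time of use -- the gap
        return f.read()

def read_if_not_symlink_safer(path):
    # Safer pattern: make the check and the use refer to the same
    # object by opening first (with O_NOFOLLOW where available) and
    # then operating only on the open descriptor.
    fd = os.open(path, os.O_RDONLY | getattr(os, "O_NOFOLLOW", 0))
    try:
        return os.read(fd, 4096).decode()
    finally:
        os.close(fd)
```

The fix is not "check harder" but "remove the gap": operate on the descriptor you checked, not on the name.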

  6. Anatomy of an attack
     1. Preparation
        • Identify systems of interest
        • Identify specific targets
        • Collect information on targets: obtain & study specs; identify/befriend operators/users; subvert an insider
        • Develop attack plan
        • Design/build attack
     2. Limited probing
        • LPI to minimize alerting the targeted system
     3. Mounting the attack
        • Exploit identified weakness to gain unauthorized access
        • Turn off intrusion detection
        • Execute bad deed
        • Restore state to look benign
        • Exit quietly
     4. Possibly iterate step (3), e.g.:
        • install a password sniffer, time bomb, or DDoS zombie
        • download the host list to identify trust relationships

  7. Models of computer system threat/attack
     • Trojan horse program (1972)
     • Byzantine failures (1982)
     • Security flaw taxonomy (NRL, 1994)
       • purpose: to characterize flaws, not attacks
       • characterizes genesis (when), where, and what kind
     • J. Howard dissertation (1997)
       • process orientation
       • characterizes: attacker, tool, access method, results, objectives

  8. K. Kendall MS thesis (1998)
     • Lincoln Lab IDS evaluation
     • privilege levels: Remote, Local, User, Superuser, Physical
     • actions:
       • abusive act at an authorized level of privilege
       • unauthorized level transition
       • cause (unauthorized) action at a higher level
     • means:
       • masquerade
       • bug
       • misconfiguration
       • social engineering
     • actions (results):
       • probe - machines, services, users
       • deny - temporary, admin, permanent
       • intercept - keystrokes, files, net traffic
       • alter - data, traces of intrusion
       • use - recreation, further intrusion
     • Lincoln Lab shorthand: R2L, U2R, DoS, Probe
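The Lincoln Lab shorthand above can be read as a small decision rule over Kendall's dimensions. This is a hypothetical sketch, assuming attack records already carry a source privilege level, a target privilege level, and a result; the mapping shown is a plausible reading of the taxonomy, not Kendall's own code.

```python
def classify(src_priv, dst_priv, result):
    """Map an attack record onto the Lincoln Lab shorthand categories.

    src_priv/dst_priv: one of "Remote", "Local", "User", "Superuser", "Physical".
    result: one of the result actions, e.g. "probe", "deny", "intercept", "use".
    """
    if result == "probe":
        return "Probe"                    # surveying machines/services/users
    if result == "deny":
        return "DoS"                      # denial of service
    if src_priv == "Remote" and dst_priv in ("Local", "User"):
        return "R2L"                      # remote-to-local privilege transition
    if src_priv == "User" and dst_priv == "Superuser":
        return "U2R"                      # user-to-root privilege transition
    return "Other"                        # outside the four shorthand classes
```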

  9. Vulnerability database(s)
     • ISS X-Force, http://xforce.iss.net/
     • NIST ICAT, http://csrc.nist.gov/icat/
     • MITRE CVE, http://www.cve.mitre.org/

  10. Relation to intrusion-tolerant systems
     • What is the model of an attack (intrusion) for your system?
     • What is the policy your system, component, or technique aims to enforce?
     • Characterize the response of your system to an attack
     • Success for ITS: can the composition of the systems withstand the union of the attacks?

  11. What is a policy?
     • A high-level overall plan embracing the general goals and acceptable procedures of a body (Merriam-Webster)
     • Roots similar to "politics"
     • Notion of an overall guiding rule
     • It's what police enforce?

  12. Policy vs. mechanism
     [Diagram: a subject's access to an object passes through the reference
      validation mechanism, which consults the policy (access matrix).]
     • Distinction with a long history in computer security
     • Reference monitor: the reference validation mechanism tests each access against a specified policy (Anderson Report, 1972)
     • Security kernels (e.g., Popek's UCLA Data Secure Unix)
     • Note the bureaucratic analog:
       • DoD Directives set policy
       • military departments and agencies write implementing instructions
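The policy/mechanism split can be shown in a few lines: the matrix contents are the policy and the check is the mechanism, so either can change without the other. The subjects, objects, and rights below are illustrative.

```python
# Policy: an explicit access matrix mapping (subject, object) -> set of rights.
# Changing policy means editing this table, not the checking code.
ACCESS_MATRIX = {
    ("alice", "payroll.db"): {"read"},
    ("bob",   "payroll.db"): {"read", "write"},
}

def reference_monitor(subject, obj, right):
    # Mechanism: every access is validated against the policy.
    # A reference monitor must be invoked on every access, be
    # tamperproof, and be small enough to verify (Anderson Report).
    return right in ACCESS_MATRIX.get((subject, obj), set())
```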

  13. The old stuff
     • Confidentiality: mandatory access control
       • policy: protect sensitive data from being disclosed to those uncleared for it
       • Bell-LaPadula model: model of the policy (plus the computer system, sort of)
       • non-interference, restrictiveness, ...
     • Integrity:
       • Biba model(s)
     • Availability?
       • Yu/Gligor and Millen efforts on modeling DoS
       • dependability world
     • Application-level policies: Secure MMS
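The Bell-LaPadula confidentiality policy mentioned above reduces to two comparisons over a lattice of levels. A minimal sketch, assuming a simple linear ordering of labels (the label names are illustrative):

```python
# A linear ordering of sensitivity levels (real BLP uses a lattice of
# level + category sets; this sketch keeps only the linear part).
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level, object_level):
    # Simple security property: no read up.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # *-property: no write down (prevents a Trojan horse running at a
    # high level from leaking data into low-level objects).
    return LEVELS[subject_level] <= LEVELS[object_level]
```

The *-property is what distinguishes BLP from plain need-to-know: it constrains programs acting on the user's behalf, not just the user.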

  14. Theory
     • Harrison-Ruzzo-Ullman:
       • for a typical access matrix implementation, you can't tell where a given right may end up (i.e., the safety problem is undecidable)
     • Sandhu, etc.:
       • identify constraints (e.g., typed access matrices) under which the problem becomes decidable
     • Schneider:
       • security automata - what policies can be enforced by execution monitors?
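Schneider's security automata are easy to sketch: an execution monitor observes the program's action stream, keeps a little state, and halts the program on any step that would violate the policy. The policy enforced below ("no network send after a file read") is the standard illustrative example of a safety property; the class and action names are assumptions for this sketch.

```python
class SecurityAutomaton:
    """Execution monitor for the policy "no network send after a file read".

    Only safety properties are enforceable this way: the monitor can
    recognize a violation on a finite prefix of the execution and halt.
    """

    def __init__(self):
        self.has_read = False  # the automaton's single bit of state

    def step(self, action):
        if action == "file_read":
            self.has_read = True           # state transition, action allowed
        elif action == "net_send" and self.has_read:
            # No outgoing transition for this (state, action) pair:
            # the monitor halts the program instead of letting it proceed.
            raise RuntimeError("policy violation: send after read")
        return action                      # action permitted
```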

  15. Role-Based Access Control
     • Attempt to provide finer-grained access control policies, with better accountability and simpler management
     • A role organizes a set of permissions
     • Users, subjects, operations, objects, roles
     • Example: Adage
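The RBAC structure above fits in two tables: roles bundle permissions, and users acquire permissions only through their roles, which is where the management simplification comes from (reassign a role, not dozens of individual rights). The role, user, and object names below are illustrative.

```python
# Roles bundle (operation, object) permissions.
ROLE_PERMS = {
    "clerk":   {("read", "ledger")},
    "auditor": {("read", "ledger"), ("read", "audit_log")},
}

# Users are assigned roles, never permissions directly.
USER_ROLES = {"alice": {"clerk"}, "bob": {"auditor"}}

def permitted(user, op, obj):
    # A user may perform an operation iff some assigned role grants it.
    return any((op, obj) in ROLE_PERMS[r] for r in USER_ROLES.get(user, ()))
```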

  16. Firewall policies
     • Rules specify the policy:
       • action: permit/deny
       • basis: from/to (IP addresses), protocol, possibly additional state
     • Example: Firmato
       • attempt to specify firewall policy at a level above specific firewall rules
       • concepts:
         • Role (set of peer capabilities)
         • Capability: service (protocol, src/dest port range), peers, direction
         • Topology: gateways, zones (host groups, hosts)
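The rule structure above can be sketched as first-match evaluation over an ordered rule base, the way typical packet filters work. The addresses, ports, and the default-deny choice below are illustrative assumptions, not Firmato's notation.

```python
import ipaddress

# An ordered rule base; the first matching rule decides.
# None in a field means "any".
RULES = [
    {"action": "permit", "src": "10.0.0.0/8", "dst_port": 80,   "proto": "tcp"},
    {"action": "deny",   "src": "0.0.0.0/0",  "dst_port": None, "proto": None},
]

def decide(src_ip, dst_port, proto):
    for r in RULES:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(r["src"])
                and r["dst_port"] in (None, dst_port)
                and r["proto"] in (None, proto)):
            return r["action"]
    return "deny"  # default deny if no rule matches
```

Rule order matters: swapping the two rules above would shadow the permit rule, which is exactly the class of error Firmato's higher-level specification aims to avoid.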

  17. Firmato example goal policy
     1. Internal corporate hosts can access all resources on the Internet
     2. External hosts can access only servers in the DMZ
     3. External servers can only be updated by the web administrator host
     4. Firewall gateway interfaces are accessible only from the fw_admin host

  18. Practical policies for COTS use?
     • Viewing/printing a PostScript file should never cause any file in a directory other than /tmp to be altered
     • Opening a Word document should never cause any file to be written without explicit user concurrence?
     • An applet can read local files as long as it doesn't send any data over the network
     • An attachment, when invoked, can't modify/delete any file except in /tmp
     • An editor controlled by the user can access any file; otherwise only files in /tmp
     • An agent can read only a specific row in this spreadsheet and send it only to battalion HQ

  19. Discussion
     • What threats/attacks is your project considering?
     • What assumptions does your project make?
     • What policies can your project enforce?
     • What policies can the collection of projects enforce?
