
Deception in Defense of Computer Systems



  1. Deception in Defense of Computer Systems Neil C. Rowe Center for Information Security Research (CISR) U.S. Naval Postgraduate School Monterey, California www.cs.nps.navy.mil/people/faculty/rowe ncrowe@nps.edu May 2007

  2. Automated deception by software • People deceive each other all the time – why can’t software on occasion? • People can justifiably deceive to manipulate other people or avoid hurting them. • We expect software to be an obedient servant. But as software gets smarter, it develops more human characteristics like deception. • Deception could be a third line of defense for computer systems after access controls and intrusion detection. • There are many deceptions -- so it would be hard for an attacker to recognize them all. • Deception can provide graduated responses to the degree of attack. • Many deceptions are cheap to implement by planting fake data and files.

  3. When to use deception? • For defense, not analysis (unlike a honeypot) • To scare away a casual attacker like a hacker • As a temporary harassment against a determined attacker (since delays can be critical) • When simple effects sought by attacker (e.g. denial of service) • Against a hands-on adversary with some intelligence; but automated attacks are brittle to deception • Hence for defense during information warfare

4. Classic military deception methods (Dunnigan and Nofi, Victory and Deceit, 2001) • concealment (hard for cyberwar since no localization) • camouflage (hard to do given automated protection) • demonstrations (ditto) • feints (not effective since no localization) • ruses (lack surprise) • disinformation (possibly effective but requires work) • lies (can be simple) • displays (a “show” for an attacker) • insight (analyze the attacker to exploit them). The last three – lies, displays, and insight – were marked on the slide as the best cyber defense methods.

5. Rowe's 32 “semantic cases” for deception
Space: location-at, location-from, location-to, location-through, direction, orientation
Time: time-at, time-from, time-to, time-through, frequency
Participant: agent, object, recipient, instrument, beneficiary, experiencer
Causality: cause, effect, purpose, contradiction
Quality: content, value, measure, order, material, manner, accompaniment
Essence: supertype, whole
Precondition: external, internal

  6. The best defensive deception methods in cyberspace

7. Is it ethical for software to deceive? • Most ethical theories consider it ethical to do something bad to prevent something worse – and an attacker installing a rootkit on your computer is really bad. • We thus follow utilitarianism. • Commercial software can already be deceptive. For instance, Microsoft Windows software deliberately misleads users: • It says the network is down when its own networking software is broken. • It claims it has no way to remove user files. (“Remove” is Unix terminology.) • It reduces the quality of images copied into Word. (This discourages use of third-party image software.)

  8. Simple deception method 1: Delay exaggeration • Under suspicious inputs, a Web server can exaggerate its slowdown. • This can reinforce an attacker’s denial-of-service plan and dissuade them from more vulnerable targets. • Exaggeration can be process-suspension time or additional scripted interaction. • Exaggeration can be proportional to suspiciousness.
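As a minimal illustration of delay exaggeration (not from the talk; the function name and the linear scaling with suspiciousness are assumptions), a server could pad its response time like this:

import time

def exaggerated_delay(base_seconds: float, suspiciousness: float,
                      max_factor: float = 10.0) -> None:
    """Sleep longer than actual processing took; the padding grows
    in proportion to a suspiciousness score in [0, 1] (assumed scale)."""
    factor = 1.0 + suspiciousness * (max_factor - 1.0)
    time.sleep(base_seconds * factor)

A request handler would call exaggerated_delay(measured_time, score) before replying, so responses to suspicious traffic appear to come from an overloaded server.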

  9. Testbed: Image-retrieval Web portal

  10. Example results from the portal

  11. When should this portal be suspicious of a user? • Input has many characters (suggests buffer overflow attempt) • Input buffer has patterns of C code (suggests attempt to insert it) Delay can be done by waiting or by providing fake login windows. Subjects who played with the system were easily fooled.
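A rough sketch of the two suspicion heuristics above (the patterns, length threshold, and 0.5 weights are illustrative assumptions, not the portal's actual rules):

import re

# Illustrative patterns suggesting injected C code or shellcode.
C_CODE_PATTERNS = [r"#include\s*<", r"\bint\s+main\s*\(",
                   r"\bexec[lv]p?\s*\(", r"/bin/sh", r"\x90{8,}"]

def suspiciousness(user_input: str, max_normal_length: int = 200) -> float:
    """Score input in [0, 1]: very long inputs suggest buffer-overflow
    attempts; C-code patterns suggest an attempt to insert code."""
    score = 0.0
    if len(user_input) > max_normal_length:
        score += 0.5
    if any(re.search(p, user_input) for p in C_CODE_PATTERNS):
        score += 0.5
    return score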

  12. The fake login windows used

13. A time exaggeration function (Figure: faked transaction time T = e(t) plotted against actual transaction time t, lying above the honest line T = t.)

14. Decoy cascades help with delays under denial-of-service attacks When delays are cascaded, the decoying effect becomes much stronger – this differentially penalizes the attacker versus the legitimate user. (Figure: traffic path through Router A, Router B, and Sites 1–3, with a 50% delay imposed at each of four points; a compounding sketch follows.)
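Why cascading compounds: each 50% delay multiplies transit time by 1.5, so an attacker whose traffic crosses all the delay points is slowed far more than a legitimate user who crosses only one. A tiny sketch (the multiplicative model is my reading of the slide's diagram):

def cascaded_slowdown(per_stage_delay: float, stages: int) -> float:
    """Total multiplicative slowdown after several cascaded delays."""
    return (1.0 + per_stage_delay) ** stages

print(cascaded_slowdown(0.5, 4))  # ~5.06x for traffic crossing all four points
print(cascaded_slowdown(0.5, 1))  # 1.5x for a user crossing just one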

15. Simple deception 2: Generate random text with grammars (probability, symbol = replacement; weights for a symbol are treated as relative and normalized):
0.4 start = "Fatal error at" ~ bignumber ":" ~ errortype
0.3 start = "Error at" ~ bignumber ":" ~ errortype
0.3 start = "Port error at" ~ bignumber ":" ~ errortype
0.5 bignumber = digit digit digit digit digit digit digit digit digit
0.5 bignumber = digit digit digit digit digit digit digit digit
0.5 bignumber = digit digit digit digit digit digit digit
0.1 digit = 0   0.1 digit = 1   0.1 digit = 2   0.1 digit = 3   0.1 digit = 4
0.1 digit = 5   0.1 digit = 6   0.1 digit = 7   0.1 digit = 8   0.1 digit = 9
1.0 errortype = "Segmentation fault"
1.0 errortype = "Illegal type coercion"
1.0 errortype = "Syntax error"
1.0 errortype = "Attempt to access protected memory"
1.0 errortype = "Process limit reached"
1.0 errortype = "Not enough main memory"
1.0 errortype = "Stack inconsistent"
1.0 errortype = "Attempted privilege escalation"
Example generated strings:
Port error at 986827820: Process limit reached
Fatal error at 4950426: Illegal type coercion
Fatal error at 135642407: Syntax error
Error at 3601744: Process limit reached
Fatal error at 25882486: Segmentation fault
Error at 0055092: Attempted privilege escalation
Port error at 397796426: Illegal type coercion
Port error at 218093596: Not enough main memory
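The talk's generator itself is not shown; here is a minimal sketch of the same technique – weighted context-free expansion – with a cut-down version of the grammar (the data structure and function names are mine):

import random

# Per-symbol weights are treated as relative; random.choices normalizes them.
GRAMMAR = {
    "start": [(0.4, ["Fatal error at ", "bignumber", ": ", "errortype"]),
              (0.3, ["Error at ", "bignumber", ": ", "errortype"]),
              (0.3, ["Port error at ", "bignumber", ": ", "errortype"])],
    "bignumber": [(0.5, ["digit"] * 9), (0.5, ["digit"] * 8),
                  (0.5, ["digit"] * 7)],
    "digit": [(0.1, [str(d)]) for d in range(10)],
    "errortype": [(1.0, [s]) for s in
                  ["Segmentation fault", "Illegal type coercion",
                   "Syntax error", "Process limit reached"]],
}

def generate(symbol: str) -> str:
    """Recursively expand a grammar symbol; literals are returned as-is."""
    rules = GRAMMAR.get(symbol)
    if rules is None:
        return symbol                      # terminal: literal text
    weights, expansions = zip(*rules)
    expansion = random.choices(expansions, weights=weights)[0]
    return "".join(generate(s) for s in expansion)

print(generate("start"))   # e.g. "Error at 360174455: Process limit reached"

Each call samples a fresh fake error message, so repeated probes see plausibly varied output.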

16. Output from a grammar for fake directory listings
canis> java CFGen Directory.gram
 Volume in drive C has no label.
 Volume Serial Number is 005B-52C0.
10/29/90 13:18     43180  ea-sin23.exe
06/07/02 12:08  44739898  yrz35.doc
12/10/98 02:34      1899  0gm.doc
11/21/98 12:31     55461  eoso8.doc
05/12/94 22:08   1157665  ae.exe
12/14/99 10:01    620125  uottr.doc
07/20/90 13:00       173  oab.ppt
07/21/01 18:59  95832163  ppjh.sys
11/20/02 20:52      1752  nen.exe
10/24/00 19:27   5437406  ved.eaoehudiaeelpio662.exe
12/29/92 21:22    558139  yoyd4.dll
11/10/00 22:15   6684313  eareterie.doc
07/06/01 20:18   6508922  ni387.bin
04/27/95 07:57     33476  oorasix%eapirehal.sys
12/29/96 23:47   1973072  ttwehtisii.sys
100 Files 05148304 bytes
1 Dir(s) 446543464 bytes

  17. Simple deception method 3: Create fake file directories from names in existing systems plus random data
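A hedged sketch of that idea (the talk does not show its generator; the name-harvesting and recombination scheme below is an assumption):

import os
import random
import datetime

EXTENSIONS = [".doc", ".exe", ".sys", ".dll", ".ppt", ".bin"]

def harvest_name_fragments(real_root: str) -> list[str]:
    """Collect base names from a real system to recombine into fake ones."""
    frags = []
    for _, dirs, files in os.walk(real_root):
        frags.extend(os.path.splitext(f)[0] for f in files)
        frags.extend(dirs)
    return frags or ["report", "setup", "backup"]   # fallback seed words

def fake_listing(real_root: str, n: int = 15) -> list[str]:
    """Emit dir-style lines: random date, random size, recombined name."""
    frags = harvest_name_fragments(real_root)
    lines = []
    for _ in range(n):
        stamp = datetime.datetime(random.randint(1990, 2006),
                                  random.randint(1, 12),
                                  random.randint(1, 28),
                                  random.randint(0, 23),
                                  random.randint(0, 59))
        size = random.randint(200, 90_000_000)
        name = random.choice(frags)[:8] + random.choice(EXTENSIONS)
        lines.append(f"{stamp:%m/%d/%y %H:%M} {size:>12,} {name}")
    return lines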

18. Modify directory paths randomly to intrigue spies This is claimed to be /root/code05/WAT/Policy/old_pages/ 23_AdvancesNonPhotoRealisticRendering/ma/Locations/NavalPostgraduateSchool/images/aries_thumbnail.html

  19. Deception theory in the rest of the talk • Software wrappers and decoy control lists • Decision-theoretic analysis of when to deceive • Analysis of logical consistency with deceptions • Counterplanning for attack plans • Experiments with deception on our honeypot

  20. Wrappers: A general approach to deception • For convincing deceptions, we may need to consistently modify many software modules. • A general solution is to apply “wrapper” code around key chunks of software. • Wrappers would evaluate the conditions, decide whether to deceive, and (occasionally) implement deceptions. • Wrappers could be controlled by a "deception policy" analogous to an access-control policy.
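As a minimal sketch of a policy-driven wrapper (the policy format, probability field, and decorator approach are illustrative assumptions; a real wrapper would sit at the system-call layer rather than in Python):

import functools
import random

# Hypothetical deception policy: maps a component to a deception rule.
DECEPTION_POLICY = {
    "ftp": {"when_suspicious": 0.7,          # probability of deceiving
            "message": "ftp: connect: Network is unreachable"},
}

def deception_wrapper(component: str, is_suspicious):
    """Wrap a software component; consult the policy before running it."""
    def decorate(func):
        @functools.wraps(func)
        def wrapped(*args, **kwargs):
            rule = DECEPTION_POLICY.get(component)
            if (rule and is_suspicious()
                    and random.random() < rule["when_suspicious"]):
                raise OSError(rule["message"])   # the deception: a false excuse
            return func(*args, **kwargs)         # normal, honest behavior
        return wrapped
    return decorate

Decorating a file-transfer function with @deception_wrapper("ftp", ids_alarm_active) (where ids_alarm_active is a hypothetical hook into the intrusion-detection system) would make it fail with a false "network unreachable" excuse for suspicious sessions only.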

21. General deception-wrapper architecture (Figure: an attacker interacts with the operating system and two applications; wrappers sit around Components 1–4 and are controlled by a deception supervisor that consults deception rules and an intrusion-detection system.)

  22. Example “Deception Control List”

  23. Will deception hurt legitimate users?

24. Analyzing the decision tree With this decision tree, deception is cost-effective in the presence of legitimate users when a cost inequality over the tree's outcome probabilities holds. (The inequality appeared as a formula on the slide and was not transcribed.)

  25. Best attacker strategy for honeypots and fake honeypots Conclusion: Fake honeypots are not worth testing unless they are quite frequent.

  26. Is anticipatory deception desirable?

27. Distrust is not the opposite of trust • People act as if trust is not symmetric with distrust. Trust fluctuates up and down with experience; distrust just increases with every distrustful act. • Our approach: the probability that a user detects our particular deception is proportional to C·M·B, where C is the prior probability of the deception event, M is the degree of maliciousness of the user, and B is the probability they believe we are deceiving them. • Then the probability that we are fooling a user follows from the weighted sum of the probabilities that the user detected each of our deceptions.
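A sketch of that model (how the weighted sum converts into a fooling probability is my interpretation, flagged as an assumption in the code):

def detection_probability(c: float, m: float, b: float) -> float:
    """P(user detects one particular deception), proportional to C*M*B:
    C = prior probability of the deception event,
    M = degree of maliciousness of the user,
    B = probability they already believe we are deceiving them."""
    return c * m * b

def fooling_probability(deceptions: list[tuple[float, float, float, float]]) -> float:
    """Combine per-deception detection probabilities with weights w.
    Treating 'fooled' as the complement of the weighted detection sum
    is an assumption; the slide does not spell this out."""
    detected = sum(w * detection_probability(c, m, b)
                   for w, c, m, b in deceptions)
    return 1.0 - min(detected, 1.0)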

28. Inconsistency is excusable over time • An excuse like “network down” need not always be consistent, because the network could be fixed in the meantime. • In general, we can use a Poisson model, where λ is the expected number of times that the condition D would change in a unit of time. • Then if we reported to the user that D was true at some time, the probability that D is still true t time units later is e^(−λt), the probability of no change events in that interval.
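Under the assumption that any change event falsifies the earlier report, the zero-event Poisson probability gives the consistency window directly:

import math

def still_true_probability(lam: float, t: float) -> float:
    """P(no change events in t time units) for a Poisson process with
    rate lam. E.g. a "network down" excuse given an hour ago, with
    lam = 0.5 repairs per hour, is still consistent with p ~= 0.61."""
    return math.exp(-lam * t)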

29. Logical consistency of deceptions Lies about computer systems should be consistent in assertions about resources: • The directories and files of the computer; • Peripheral devices to which the computer is attached; • Networks to which the computer is attached; • Other sites accessible by the networks; • The executables for the commands run by an operating system; • Status such as "logged-in" and "administrator privileges". For each resource, identify six facets of its status: Existence, Authorization, Readiness, Operability, Compatibility, and Moderation.

  30. Predicate calculus formulation of resource facets • The “moderate” facet applies to any parameter, including: • Files: size, authorization level • Peripheral devices: size, bandwidth • Networks: size, bandwidth • Sites: load, number of users • Passwords: length, number of times used

31. Example of what logical consistency requires Suppose Bob downloads "foobar.doc" of size 50,000 bytes from "remotesite" to "homesite" across network "localnet" via the FTP file-transfer utility on homesite, at a time when localnet has five simultaneous users already. Then: • File systems on remotesite and homesite exist, are authorized for access by Bob, are initialized for access, and are working. • The network localnet exists, is authorized for use by Bob, is initialized for file transfers, and is working. • Localnet is compatible with remotesite and homesite. • Executable ftp exists on homesite, Bob is authorized to use it, it is initialized, and it is working. • Executable ftp is compatible with the file system on homesite. • Executable ftp is compatible with localnet. • Executable ftp is compatible with the file system on remotesite. • The file system on homesite can hold files of 50,000 bytes. • Localnet can transfer files of 50,000 bytes and handle six simultaneous users.
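One way to enforce this mechanically is to record every facet we have asserted (truthfully or deceptively) and refuse any lie that contradicts the record. A minimal sketch under that assumption (the data model is mine, not the talk's):

# Each resource maps facet -> asserted value; a lie is "consistent"
# only if it does not contradict anything already asserted.
FACETS = ("existence", "authorization", "readiness",
          "operability", "compatibility", "moderation")

world: dict[str, dict[str, object]] = {}

def assert_facet(resource: str, facet: str, value: object) -> bool:
    """Record an assertion; reject contradictions so later lies stay
    consistent with earlier ones."""
    assert facet in FACETS
    known = world.setdefault(resource, {})
    if facet in known and known[facet] != value:
        return False            # would contradict an earlier assertion
    known[facet] = value
    return True

# The FTP example: once we let Bob download over localnet, we have asserted
# that localnet exists and works, so we cannot later claim it was down.
assert_facet("localnet", "existence", True)
assert_facet("localnet", "operability", True)
print(assert_facet("localnet", "operability", False))  # False: inconsistent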

  32. Additional useful logical inferences

33. Logical criteria for when to deceive in a plan • Besides the logical consistency conditions mentioned earlier, use some deception pragmatics. • For instance, if the attacker downloads a rootkit: • Don’t deceive immediately by claiming the download cannot be done – downloading is a key attack step, so interfering with it looks suspicious. • Instead, wait until the attacker tries to decompress it, then pretend the decompression software is faulty.

  34. Some deception tactics Suppose you want to do an action X against your enemy. • Stealth: Do X but don’t reveal it. • Excuse: Do X and give a false excuse why. • Equivocation: Do X and give a correct but misleading reason why. • Outright lying: Do X but claim you didn’t. • Overplay: Do deception Y ostentatiously to conceal deception X. • Reciprocal: Encourage enemy to lie to you about Y to make it easier for you to lie about X.

  35. Rating deception opportunities Use as factors: • A priori likelihood of lack of problems with the associated facet of availability; • A priori likelihood of lack of problems with the associated resource; • Whether the resource is created (which makes resource denial more plausible); and • Suspiciousness of the associated command (it increases deception desirability).

36. Example output of a logical deception planner
• For command ftp(hackerhome,patsy): Abort execution of site(hackerhome) with an error message. [weight 0.275]
• For command overflow(buffer,port80,patsy): Abort execution of buffer(port80) with an error message. [weight 0.265]
• For command overflow(buffer,port80,patsy): Abort execution of system(patsy) with an error message. [weight 0.265]
• For command overflow(buffer,port80,patsy): Abort execution with an error message about buffer_of(patsy,port80). [weight 0.267]
• For command ftp(hackerhome,patsy): Lie by saying the user is not authorized for site(hackerhome). [weight 0.260]
• For command ftp(hackerhome,patsy): Lie by saying credentials cannot be confirmed for site(hackerhome). [weight 0.260]
• For command overflow(buffer,port80,patsy): Lie by saying the user is not authorized for buffer(port80). [weight 0.251]

  37. MECOUNTER, our tool for counterplanning • Uses methods from artificial intelligence • Supports multi-agent simulations with goals and capabilities for each agent • Uses hierarchical planning for both planner and counterplanner • Models communications and incomplete knowledge • Reasons about conflicts between actions • Supports action probabilities

38. Example simple cyber-attack plan for a rootkit (Figure: plan hierarchy whose steps include: scan local network for ports with known vulnerabilities; learn local network topology; check for newly discovered vulnerabilities of common software; guess password of account on target machine; connect to target machine on port X; cause buffer overflow in port X; obtain admin status; login as admin; ftp to hacker archive; download rootkit; decompress rootkit; install rootkit; test rootkit; ftp to port X site; download port X upgrade; install secure port X server; close ftp connection; logout.)

  39. The test example: A rootkit installation plan • Models 98 possible actions in 19 categories • Models 115 possible facts for states (and each can be negated) • Models communication between system and user by “order” and “report” actions • Permits 13 random events with actions • Defines 3072 starting states • Goals: Install rootkit and backdoor, then log out • A typical plan needs 50 actions to get from starting state to goal

40. Example specifications for a cyber-attack (in Prolog):
% Initial state: compressed rootkit and secure-port programs at the hacker's site.
start_state([file(rootkit,hackerhome), file(secureport,hackerhome),
             compressed(rootkit,hackerhome), compressed(secureport,hackerhome)]).
% The hacker's goal: rootkit and backdoor installed on patsy, all connections closed.
goal(hacker, [installed(rootkit,patsy), tested(rootkit,patsy),
              installed(secureport,patsy), not(logged_in(_,_)),
              not(connected_at_port(_,_)), not(ftp_connection(_,_))]).
% Means-ends recommendations: which action achieves which goal fact.
recommended([installed(Executable,Target)], install(Executable,Target)).
recommended([status(admin,Target)], get(admin,status,Target)).
% Preconditions of actions.
precondition(install(Executable,Target),
             [status(admin,Target), logged_in(admin,Target),
              file(Executable,Target), not(compressed(Executable,Target)),
              not(ftp_connection(Local,Target))]).
precondition(get(admin,status,Target), [overflowed(buffer,X,Target)]).
% Effects of actions: facts deleted and facts added.
deletepostcondition(install(Executable,Target), []).
deletepostcondition(get(admin,status,Target), [status(_,Target)]).
addpostcondition(install(Executable,Target), [installed(Executable,Target)]).
addpostcondition(get(admin,status,Target), [status(admin,Target)]).
% Action duration scales inversely with agent skill.
duration(Op,Agent,State,D,D2) :-
    durationmean(Op,M), skill(Agent,S), D is M/S, D2 is D*0.75.
durationmean(install(Executable,Target), 20).

41. Deception ploys in counterplans • “Atomic” ploys can add, delete, or modify single facts in states. • Ploys must increase the difficulty of executing a plan. • Ploys can imply other ploys. • A counterplan is a set of atomic ploys at specified times. • Measure effectiveness as the expected increase in time to execute the plan (we can only delay attackers): Δc = c(S,K) + c(K,G) − c(S,G), where • c(S,K) is the cost to return to a known state • c(K,G) is the cost to reach the goal from the known state • c(S,G) is the cost to reach the goal from the state to which the ploy was applied
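The Δc formula translates directly into code (the numeric costs below are hypothetical; a full system would average this over simulated attack runs):

def ploy_effectiveness(cost_back_to_known: float,
                       cost_known_to_goal: float,
                       cost_state_to_goal: float) -> float:
    """Expected extra attacker time imposed by a ploy:
    delta_c = c(S,K) + c(K,G) - c(S,G)."""
    return cost_back_to_known + cost_known_to_goal - cost_state_to_goal

# E.g. if recovering to a known state takes 5 units and redoing the attack
# from there takes 40, versus 30 from the sabotaged state: delta_c = 15.
print(ploy_effectiveness(5.0, 40.0, 30.0))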

42. Example ploy: Delete admin authorization + log out (Figure: attack-plan state graph; after the ploy the attacker must redo the buffer overflow and become admin again before reinstalling and retesting the rootkit. Nodes show actions such as ping scan, research vulnerabilities, open/close port, login, ftp, and download/decompress/install/test of the rootkit and secureport, with transition probabilities on the edges.)

  43. Efficient ploy evaluation • The same fixplan (to remedy a ploy) can apply to each of a sequence of states via “backward temporal suitability inheritance”. • The same fixplan can inherit downwards to subtasks. • For rootkit example, we did 500 simulation runs to get 10,276 states; there were 70 ploys, for 616,560 ploy-to-state matches. • Only 18,835 were found useful after pruning; fixplans averaged 6.6 steps.

  44. A best ploy set (via a greedy algorithm)
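The slide's ploy set itself is not transcribed; as a sketch of the greedy idea (the scoring here ignores interactions between ploys, which the real tool would re-evaluate after each pick; the ploy names and delays are hypothetical):

def greedy_ploy_set(ploys: dict[str, float], budget: int) -> list[str]:
    """Pick ploys one at a time by highest expected delay.
    `ploys` maps ploy name -> expected increase in attacker time."""
    chosen = []
    remaining = dict(ploys)
    for _ in range(budget):
        if not remaining:
            break
        best = max(remaining, key=remaining.get)
        chosen.append(best)
        del remaining[best]    # a fuller version would rescore here
    return chosen

print(greedy_ploy_set({"delete admin status": 15.0,
                       "corrupt decompression": 9.0,
                       "drop ftp connection": 4.0}, budget=2))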

  45. “Generic excuses” for counterplanning • One broad false excuse is more convincing than multiple excuses. • Example such “generic excuses”: • the network is down • the file system is messed up • “you broke something” • the system is being tested • security policy changed • communications defaults have been changed • a practical joker is operating • the system was compromised by a hacker

  46. Compatibility (0-10) of deception type with generic excuse

  47. Compatibility (0-10) of action with generic excuse

48. Example: IS = intrinsic suspiciousness, CS = cumulative suspiciousness

  49. A testbed for deception in cyberspace • To be scientific, information assurance needs to do experiments. • The best experiments are against real attackers. • We have built a modified high-interaction honeypot to study reactions to our defensive deception methods. • It tries various deceptions and reports what happens.

  50. Our honeypot lab
