
Deception in Defense of Computer Systems

Neil C. Rowe

Center for Information Security Research (CISR)

U.S. Naval Postgraduate School

Monterey, California

www.cs.nps.navy.mil/people/faculty/rowe

ncrowe@nps.edu

May 2007

Automated deception by software
  • People deceive each other all the time – why can’t software on occasion?
  • People can justifiably deceive to manipulate other people or avoid hurting them.
  • We expect software to be an obedient servant. But as software gets smarter, it develops more human characteristics like deception.
  • Deception could be a third line of defense for computer systems after access controls and intrusion detection.
  • There are many possible deceptions, so it would be hard for an attacker to recognize them all.
  • Deception can provide graduated responses to the degree of attack.
  • Many deceptions are cheap to implement by planting fake data and files.
When to use deception?
  • For defense, not analysis (unlike a honeypot)
  • To scare away a casual attacker like a hacker
  • As a temporary harassment against a determined attacker (since delays can be critical)
  • When the attacker seeks only simple effects (e.g., denial of service)
  • Against a hands-on adversary with some intelligence; but automated attacks are brittle to deception
  • Hence for defense during information warfare
Classic military deception methods

(Dunnigan and Nofi, Victory and Deceit, 2001)

  • concealment (hard for cyberwar since no localization)
  • camouflage (hard to do given automated protection)
  • demonstrations (ditto)
  • feints (not effective since no localization)
  • ruses (lack surprise)
  • disinformation (possibly effective but requires work)
  • lies (can be simple)
  • displays (a “show” for an attacker)
  • insight (analyze the attacker to exploit them)

The last three are the best cyber-defense methods.
Rowe's 32 “semantic cases” for deception

Space: location-at, location-from, location-to, location-through, direction, orientation

Time: time-at, time-from, time-to, time-through, frequency

Participant: agent, object, recipient, instrument, beneficiary, experiencer

Causality: cause, effect, purpose, contradiction

Quality: content, value, measure, order, material, manner, accompaniment

Essence: supertype, whole

Precondition: external, internal
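
Such a taxonomy is easy to operationalize: a deception planner can walk the case list to enumerate which aspect of a given event it could lie about. A minimal sketch in Python (the table structure is ours, not from the talk):

SEMANTIC_CASES = {
    "space": ["location-at", "location-from", "location-to",
              "location-through", "direction", "orientation"],
    "time": ["time-at", "time-from", "time-to", "time-through", "frequency"],
    "participant": ["agent", "object", "recipient", "instrument",
                    "beneficiary", "experiencer"],
    "causality": ["cause", "effect", "purpose", "contradiction"],
    "quality": ["content", "value", "measure", "order", "material",
                "manner", "accompaniment"],
    "essence": ["supertype", "whole"],
    "precondition": ["external", "internal"],
}

# 6 + 5 + 6 + 4 + 7 + 2 + 2 = 32 candidate aspects to lie about.
assert sum(len(v) for v in SEMANTIC_CASES.values()) == 32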

Is it ethical for software to deceive?
  • Most ethical theories consider it ethical to do something bad to prevent something worse. An attacker installing a rootkit on your computer is really bad.
  • We thus follow utilitarianism.
  • Commercial software can be deceptive. For instance, Microsoft Windows software deliberately misleads users:
    • It says the network is down when its own networking software is broken.
    • It claims it has no way to remove user files. (“Remove” is Unix terminology.)
    • It reduces the quality of images copied into Word. (This discourages use of third-party image software.)
Simple deception method 1: Delay exaggeration
  • Under suspicious inputs, a Web server can exaggerate its slowdown.
  • This can reinforce an attacker’s denial-of-service plan and dissuade them from more vulnerable targets.
  • Exaggeration can be process-suspension time or additional scripted interaction.
  • Exaggeration can be proportional to suspiciousness.
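
A minimal sketch of this method in Python; the 10x scaling factor is an illustrative assumption:

import time

def exaggerated_delay(base_seconds, suspicion):
    """suspicion in [0, 1]: innocent traffic sees normal latency;
    highly suspicious traffic sees a large slowdown."""
    return base_seconds * (1 + 10 * suspicion)

time.sleep(exaggerated_delay(0.05, 0.8))   # 0.45 s instead of 0.05 s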
When should this portal be suspicious of a user?
  • Input has many characters (suggests a buffer-overflow attempt)
  • Input buffer has patterns of C code (suggests an attempt to insert it)

Delay can be done by waiting or by providing fake login windows.

Subjects who played with the system were easily fooled.
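
A sketch of those two suspicion heuristics in Python; the length threshold and regex are illustrative assumptions, and the score could feed the exaggerated_delay function above:

import re

C_CODE = re.compile(r"#include|\bchar\b|\bint\b|\\x[0-9a-fA-F]{2}")

def suspiciousness(user_input):
    score = 0.0
    if len(user_input) > 512:          # unusually long: buffer overflow?
        score += 0.5
    if C_CODE.search(user_input):      # patterns of C code in the buffer
        score += 0.5
    return score

print(suspiciousness("A" * 600 + r"\x90\x90#include <stdio.h>"))   # 1.0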

A time exaggeration function

[Figure: faked transaction time T plotted against actual transaction time t, showing the exaggeration curve T = e(t) rising above the reference line T = t.]

Decoy cascades help with delays under denial-of-service attacks

When delays are cascaded, the decoying effect becomes much stronger – this differentially penalizes the attacker relative to legitimate users. If the slowdowns compound multiplicatively, four cascaded 50% delays yield a total slowdown of 1.5^4 ≈ 5.1x.

[Diagram: traffic passing through Router A, Router B, and Sites 1–3, with a 50% delay added at each of four stages.]

Simple deception 2: Generate random text with grammars

(Probability Symbol = Replacement)

0.4 start = "Fatal error at" ~ bignumber ":" ~ errortype

0.3 start = "Error at" ~ bignumber ":" ~ errortype

0.3 start = "Port error at" ~ bignumber ":" ~ errortype

0.5 bignumber = digit digit digit digit digit digit digit digit digit

0.5 bignumber = digit digit digit digit digit digit digit digit

0.5 bignumber = digit digit digit digit digit digit digit

0.1 digit = 0

0.1 digit = 1

0.1 digit = 2

0.1 digit = 3

0.1 digit = 4

0.1 digit = 5

0.1 digit = 6

0.1 digit = 7

0.1 digit = 8

0.1 digit = 9

1.0 errortype = "Segmentation fault"

1.0 errortype = "Illegal type coercion"

1.0 errortype = "Syntax error"

1.0 errortype = "Attempt to access protected memory"

1.0 errortype = "Process limit reached"

1.0 errortype = "Not enough main memory"

1.0 errortype = "Stack inconsistent"

1.0 errortype = "Attempted privilege escalation"

Example generated strings:

Port error at 986827820: Process limit reached

Fatal error at 4950426: Illegal type coercion

Fatal error at 135642407: Syntax error

Error at 3601744: Process limit reached

Fatal error at 25882486: Segmentation fault

Error at 0055092: Attempted privilege escalation

Port error at 397796426: Illegal type coercion

Port error at 218093596: Not enough main memory
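
A minimal Python sketch of a generator for such grammars, treating the rule numbers as relative weights (they need not sum to 1 per symbol); the token-joining rules are our own simplification:

import random

# Weighted context-free grammar from the slide above; weights are relative.
GRAMMAR = {
    "start": [
        (0.4, ["Fatal error at", "bignumber", ":", "errortype"]),
        (0.3, ["Error at", "bignumber", ":", "errortype"]),
        (0.3, ["Port error at", "bignumber", ":", "errortype"]),
    ],
    "bignumber": [(0.5, ["digit"] * n) for n in (9, 8, 7)],
    "digit": [(0.1, [str(d)]) for d in range(10)],
    "errortype": [(1.0, [s]) for s in [
        "Segmentation fault", "Illegal type coercion", "Syntax error",
        "Attempt to access protected memory", "Process limit reached",
        "Not enough main memory", "Stack inconsistent",
        "Attempted privilege escalation"]],
}

def expand(symbol):
    """Expand a symbol recursively, picking rules by relative weight."""
    if symbol not in GRAMMAR:                     # terminal string
        return symbol
    weights = [w for w, _ in GRAMMAR[symbol]]
    body = random.choices([b for _, b in GRAMMAR[symbol]], weights=weights)[0]
    out = ""
    for part in map(expand, body):
        if part == ":" or (out and out[-1].isdigit() and part.isdigit()):
            out += part                           # attach digits and colons
        else:
            out = (out + " " + part).strip()
    return out

for _ in range(5):
    print(expand("start"))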

Output from a grammar for fake directory listings

canis> java CFGen Directory.gram

Volume in drive C has no label.

Volume Serial Number is 005B-52C0.

10/29/90 13:18 43180 ea-sin23.exe

06/07/02 12:08 44739898 yrz35.doc

12/10/98 02:34 1899 0gm.doc

11/21/98 12:31 55461 eoso8.doc

05/12/94 22:08 1157665 ae.exe

12/14/99 10:01 620125 uottr.doc

07/20/90 13:00 173 oab.ppt

07/21/01 18:59 95832163 ppjh.sys

11/20/02 20:52 1752 nen.exe

10/24/00 19:27 5437406 ved.eaoehudiaeelpio662.exe

12/29/92 21:22 558139 yoyd4.dll

11/10/00 22:15 6684313 eareterie.doc

07/06/01 20:18 6508922 ni387.bin

04/27/95 07:57 33476 oorasix%eapirehal.sys

12/29/96 23:47 1973072 ttwehtisii.sys

100 Files 05148304 bytes

1 Dir(s) 446543464 bytes

Simple deception method 3: Create fake file directories from names in existing systems plus random data
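
A plausible sketch of this method in Python, assuming we seed from filenames in a real directory and add random noise; the output layout mimics the DOS-style listing above:

import os
import random
import string

def fake_listing(seed_dir, n=10):
    """Fake directory lines built from real filenames in seed_dir,
    perturbed with random characters, dates, and sizes."""
    real_names = os.listdir(seed_dir)
    lines = []
    for _ in range(n):
        stem, ext = os.path.splitext(random.choice(real_names))
        fragment = stem[:random.randint(1, max(1, len(stem)))]
        noise = "".join(random.choices(string.ascii_lowercase + string.digits,
                                       k=random.randint(0, 4)))
        date = "%02d/%02d/%02d" % (random.randint(1, 12),
                                   random.randint(1, 28),
                                   random.randint(0, 99))
        hhmm = "%02d:%02d" % (random.randint(0, 23), random.randint(0, 59))
        size = random.randint(100, 100_000_000)
        lines.append("%s  %s  %12d  %s%s%s" % (date, hhmm, size,
                                               fragment, noise, ext))
    return lines

for line in fake_listing("."):
    print(line)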
Modify directory paths randomly to intrigue spies

This is claimed to be /root/code05/WAT/Policy/old_pages/23_AdvancesNonPhotoRealisticRendering/ma/Locations/NavalPostgraduateSchool/images/aries_thumbnail.html

Deception theory in the rest of the talk
  • Software wrappers and decoy control lists
  • Decision-theoretic analysis of when to deceive
  • Analysis of logical consistency with deceptions
  • Counterplanning for attack plans
  • Experiments with deception on our honeypot
Wrappers: A general approach to deception
  • For convincing deceptions, we may need to consistently modify many software modules.
  • A general solution is to apply “wrapper” code around key chunks of software.
  • Wrappers would evaluate the conditions, decide whether to deceive, and (occasionally) implement deceptions.
  • Wrappers could be controlled by a "deception policy" analogous to an access-control policy.
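
A minimal sketch of such a wrapper in Python; the policy table, the suspicion source, and the operation names are all hypothetical:

import functools

# Hypothetical deception policy: (operation, suspicion threshold) -> lie.
DECEPTION_POLICY = {
    ("read_file", 0.7): "Error: file system inconsistent",
    ("open_socket", 0.5): "Error: network is down",
}

def current_suspicion(user):
    """Placeholder: a real system would query the intrusion-detection
    system for this user's suspicion score."""
    return 0.8

def deceptive(op_name):
    """Wrap an operation so the deception policy is consulted first."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(user, *args, **kwargs):
            suspicion = current_suspicion(user)
            for (op, threshold), lie in DECEPTION_POLICY.items():
                if op == op_name and suspicion >= threshold:
                    return lie                  # deceive instead of executing
            return fn(user, *args, **kwargs)    # normal behavior
        return wrapper
    return decorator

@deceptive("read_file")
def read_file(user, path):
    with open(path) as f:
        return f.read()

print(read_file("alice", "/etc/motd"))   # with suspicion 0.8: returns the lie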
General deception-wrapper architecture

[Diagram: the attacker interacts with the operating system and applications software; wrappers surround individual software components. An intrusion-detection system feeds a deception supervisor, which consults deception rules to direct the wrappers.]

Analyzing the decision tree

With this decision tree, deception is cost-effective in the presence of legitimate users when a cost-benefit inequality holds. [The inequality and its side condition appeared as formulas on the original slide.]
Best attacker strategy for honeypots and fake honeypots

Conclusion: Fake honeypots are not worth testing unless they are quite frequent.

Distrust is not the opposite of trust
  • People act as if trust is not symmetric with distrust: trust fluctuates up and down with experience, while distrust only increases with every distrustful act.
  • Our approach: the probability that a user detects a particular deception of ours is proportional to the product C·M·B, where C is the prior probability of the deception event, M is the degree of maliciousness of the user, and B is the probability they believe we are deceiving them.
  • Then the probability that we are fooling a user follows from the weighted sum, over all our deceptions, of the probabilities that the user detected each one.
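
A sketch of this model in Python, reading the slide as saying that we fool the user when no deception is detected (our interpretation, not stated explicitly):

def detection_probability(c, m, b):
    """c: prior probability of the deception event; m: maliciousness of
    the user; b: probability they believe we are deceiving them."""
    return c * m * b

def fooling_probability(events, weights):
    """events is a list of (c, m, b) triples, one per deception; we fool
    the user when no deception is detected."""
    detected = sum(w * detection_probability(*e)
                   for w, e in zip(weights, events))
    return 1 - detected

print(fooling_probability([(0.2, 0.9, 0.5), (0.1, 0.9, 0.3)], [0.5, 0.5]))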
Inconsistency is excusable over time
  • An excuse like “network down” need not always be consistent, because the network could be fixed in the meantime.
  • In general, we can use a Poisson model, where λ is the expected number of times that the condition D would change in a unit of time.
  • Then if we reported to the user that D was true at some time, the probability that D is still true t time units later is e^(−λt). For example, with λ = 0.1 changes per hour, a “network down” excuse stays credible with probability e^(−0.5) ≈ 0.61 after five hours.
Logical consistency of deceptions

Lies about computer systems should be consistent in assertions about resources:

  • The directories and files of the computer;
  • Peripheral devices to which the computer is attached;
  • Networks to which the computer is attached;
  • Other sites accessible by the networks;
  • The executables for the commands run by an operating system;
  • Status such as "logged-in" and "administrator privileges".

For each resource, identify six facets of its status:

Existence, Authorization, Readiness, Operability, Compatibility, and Moderation
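
One way to track these facets so that lies stay mutually consistent is a per-resource status record; a sketch in Python with field names taken from the slide:

from dataclasses import dataclass

@dataclass
class ResourceStatus:
    exists: bool = True        # Existence
    authorized: bool = True    # Authorization (for the current user)
    ready: bool = True         # Readiness (initialized for use)
    operable: bool = True      # Operability (actually working)
    compatible: bool = True    # Compatibility (with related resources)
    moderate: bool = True      # Moderation (parameters within limits)

world = {"localnet": ResourceStatus(), "ftp": ResourceStatus()}

# The lie "the network is down" flips exactly one facet...
world["localnet"].operable = False
# ...and every later answer about localnet must agree with it.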

Predicate calculus formulation of resource facets
  • The “moderate” facet applies to any parameter, including:
    • Files: size, authorization level
    • Peripheral devices: size, bandwidth
    • Networks: size, bandwidth
    • Sites: load, number of users
    • Passwords: length, number of times used
Example of what logical consistency requires

Suppose Bob downloads "foobar.doc" of size 50,000 bytes from "remotesite" to "homesite" across network "localnet" via the FTP file-transfer utility on homesite, at a time when localnet has five simultaneous users already. Then:

  • File systems on remotesite and homesite exist, are authorized for access by Bob, are initialized for access, and are working.
  • The network localnet exists, is authorized for use by Bob, is initialized for file transfers, and is working.
  • Localnet is compatible with remotesite and homesite.
  • Executable ftp exists on homesite, Bob is authorized to use it, it is initialized, and it is working.
  • Executable ftp is compatible with the file system on homesite.
  • Executable ftp is compatible with localnet.
  • Executable ftp is compatible with the file system on remotesite.
  • The file system on homesite can hold files of 50,000 bytes.
  • Localnet can transfer files of 50,000 bytes and handle six simultaneous users.
Logical criteria for when to deceive in a plan
  • Besides the logical consistency conditions mentioned earlier, use some deception pragmatics.
  • For instance, if the attacker downloads a rootkit:
    • Don’t deceive immediately by saying the system cannot download it – downloading is a key attack step, so interfering with it looks suspicious.
    • Instead, wait until the attacker tries to decompress it, then pretend the decompression software is faulty.
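
A toy sketch of this “wait for the right step” pragmatics; the event names and error text are illustrative:

# Session state: remember whether the download already succeeded.
session = {"downloaded_rootkit": False}

def respond(event):
    if event == "download rootkit":
        session["downloaded_rootkit"] = True
        return "226 Transfer complete."      # don't interfere at this step
    if event == "decompress rootkit" and session["downloaded_rootkit"]:
        return "gzip: stdin: invalid compressed data--format violated"
    return "ok"

print(respond("download rootkit"))
print(respond("decompress rootkit"))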
Some deception tactics

Suppose you want to do an action X against your enemy.

  • Stealth: Do X but don’t reveal it.
  • Excuse: Do X and give a false excuse why.
  • Equivocation: Do X and give a correct but misleading reason why.
  • Outright lying: Do X but claim you didn’t.
  • Overplay: Do deception Y ostentatiously to conceal deception X.
  • Reciprocal: Encourage enemy to lie to you about Y to make it easier for you to lie about X.
Rating deception opportunities

Use as factors:

  • A priori likelihood of lack of problems with the associated facet of availability;
  • A priori likelihood of lack of problems with the associated resource;
  • Whether the resource is created (which makes resource denial more plausible); and
  • Suspiciousness of the associated command (it increases deception desirability).
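
A sketch of combining these four factors into a single weight; the linear form and coefficients are assumptions, chosen only so that outputs land near the 0.25–0.28 weights in the planner example that follows:

def deception_weight(p_facet_ok, p_resource_ok, resource_created, suspiciousness):
    """Combine the four factors into one score; higher = better lie."""
    w = 0.1 * p_facet_ok + 0.1 * p_resource_ok + 0.1 * suspiciousness
    if resource_created:
        w += 0.05     # denial of a just-created resource is more plausible
    return w

print(deception_weight(0.9, 0.8, True, 0.7))   # ~0.29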
Example output of a logical deception planner
  • For command ftp(hackerhome,patsy): Abort execution of site(hackerhome) with an error message. [weight 0.275]
  • For command overflow(buffer,port80,patsy): Abort execution of buffer(port80) with an error message. [weight 0.265]
  • For command overflow(buffer,port80,patsy): Abort execution of system(patsy) with an error message. [weight 0.265]
  • For command overflow(buffer,port80,patsy): Abort execution with an error message about buffer_of(patsy,port80). [weight 0.267]
  • For command ftp(hackerhome,patsy): Lie by saying the user is not authorized for site(hackerhome). [weight 0.260]
  • For command ftp(hackerhome,patsy): Lie by saying credentials cannot be confirmed for site(hackerhome). [weight 0.260]
  • For command overflow(buffer,port80,patsy): Lie by saying the user is not authorized for buffer(port80). [weight 0.251]
MECOUNTER, our tool for counterplanning
  • Uses methods from artificial intelligence
  • Supports multi-agent simulations with goals and capabilities for each agent
  • Uses hierarchical planning for both planner and counterplanner
  • Models communications and incomplete knowledge
  • Reasons about conflicts between actions
  • Supports action probabilities
Example simple cyber-attack plan for a rootkit

[Diagram: hierarchical attack plan. The top-level task “install rootkit” decomposes into subtasks such as: obtain admin status; install secure port X server; download rootkit; cause buffer overflow in port X; download port X upgrade; test rootkit; connect to target machine on port X; decompress rootkit; ftp to hacker archive; ftp to port X site; close ftp connection; scan local network for ports with known vulnerabilities; guess password of account on target machine; learn local network topology; login as admin; check for newly discovered vulnerabilities of common software; logout.]

The test example: A rootkit installation plan
  • Models 98 possible actions in 19 categories
  • Models 115 possible facts for states (and each can be negated)
  • Models communication between system and user by “order” and “report” actions
  • Permits 13 random events with actions
  • Defines 3072 starting states
  • Goals: Install rootkit and backdoor, then log out
  • A typical plan needs 50 actions to get from starting state to goal
Example specifications for a cyber-attack

% Initial state: the rootkit and the secure-port server sit, compressed,
% at the hacker's home site.
start_state([file(rootkit,hackerhome), file(secureport,hackerhome),
  compressed(rootkit,hackerhome), compressed(secureport,hackerhome)]).

% Goal: both tools installed on patsy (the rootkit also tested), with all
% logins, port connections, and ftp connections closed.
goal(hacker,[installed(rootkit,patsy), tested(rootkit,patsy),
  installed(secureport,patsy), not(logged_in(_,_)),
  not(connected_at_port(_,_)), not(ftp_connection(_,_))]).

% Actions recommended for achieving particular subgoals.
recommended([installed(Executable,Target)], install(Executable,Target)).
recommended([status(admin,Target)], get(admin,status,Target)).

% STRIPS-style preconditions, delete-lists, and add-lists.
precondition(install(Executable,Target),
  [status(admin,Target), logged_in(admin,Target), file(Executable,Target),
   not(compressed(Executable,Target)), not(ftp_connection(Local,Target))]).
precondition(get(admin,status,Target), [overflowed(buffer,X,Target)]).
deletepostcondition(install(Executable,Target), []).
deletepostcondition(get(admin,status,Target), [status(_,Target)]).
addpostcondition(install(Executable,Target), [installed(Executable,Target)]).
addpostcondition(get(admin,status,Target), [status(admin,Target)]).

% Duration D is the mean duration for the action divided by the agent's
% skill; D2 = 0.75*D is a second duration parameter.
duration(Op,Agent,State,D,D2) :-
  durationmean(Op,M), skill(Agent,S), D is M/S, D2 is D*0.75.
durationmean(install(Executable,Target), 20).

Deception ploys in counterplans
  • “Atomic” ploys can add, delete, or modify single facts in states.
  • Ploys must increase the difficulty of executing a plan.
  • Ploys can imply other ploys.
  • A counterplan is a set of atomic ploys at specified times.
  • Measure effectiveness as the expected increase in time to execute the plan (we can only delay attackers):

Δc = c(S,K) + c(K,G) – c(S,G), where

    • c(S,K) is the cost of returning to a known state K
    • c(K,G) is the cost from the known state K to the goal G
    • c(S,G) is the cost to the goal from the state S to which the ploy was applied
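
The measure is straightforward to compute once the three costs are estimated from plan simulations; a trivial sketch, with illustrative numbers:

def ploy_effectiveness(c_s_k, c_k_g, c_s_g):
    """Delta-c = c(S,K) + c(K,G) - c(S,G): the extra time the attacker
    must spend recovering to known state K and then reaching goal G."""
    return c_s_k + c_k_g - c_s_g

# e.g., recovery to K costs 30, K-to-goal costs 80, and the pre-ploy
# state was 90 from the goal: the ploy buys 20 time units of delay.
print(ploy_effectiveness(30, 80, 90))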
Example ploy: Delete admin authorization + log out

[Figure: before-and-after attack-plan graphs for this ploy. Nodes are attack steps (ping, scan, open port, overflow buffer, become admin, login, research vulnerabilities, ftp, download/decompress/install rootkit, download/decompress/install secureport, test rootkit, close ftp, close port, logout), annotated with state numbers and transition probabilities. The ploy forces the attacker to redo the login and privilege-escalation steps.]

Efficient ploy evaluation
  • The same fixplan (to remedy a ploy) can apply to each of a sequence of states via “backward temporal suitability inheritance”.
  • The same fixplan can inherit downwards to subtasks.
  • For rootkit example, we did 500 simulation runs to get 10,276 states; there were 70 ploys, for 616,560 ploy-to-state matches.
  • Only 18,835 were found useful after pruning; fixplans averaged 6.6 steps.
“Generic excuses” for counterplanning
  • One broad false excuse is more convincing than multiple excuses.
  • Examples of such “generic excuses”:
    • the network is down
    • the file system is messed up
    • “you broke something”
    • the system is being tested
    • security policy changed
    • communications defaults have been changed
    • a practical joker is operating
    • the system was compromised by a hacker
A testbed for deception in cyberspace
  • To be scientific, information assurance needs to do experiments.
  • The best experiments are against real attackers.
  • We have built a modified high-interaction honeypot to study reactions to our defensive deception methods.
  • It tries various deceptions and reports what happens.
Snort alert trends over time
  • Totaling over each week gives clearer trends.
  • But we also tried three clustering methods (see the sketch below):
    • Cluster events from the same IP address within 10 minutes.
    • Cluster sequences of alerts occurring more than 3 times.
    • Do K-Means clustering with 12 properties of alerts.
  • We observed a dramatic decrease in the number of attacks after the honeypot was first installed.
  • We observed big increases in ICMP messages whenever the honeypot came back online.
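
A sketch of the first clustering method, grouping alerts from the same source IP that arrive within 10 minutes of the previous one; the alert tuple format is assumed:

from collections import defaultdict

WINDOW = 600   # 10 minutes, in seconds

def cluster_by_ip(alerts):
    """alerts: (timestamp_seconds, source_ip, signature) tuples."""
    clusters = defaultdict(list)            # ip -> list of clusters
    for ts, ip, sig in sorted(alerts):
        runs = clusters[ip]
        if runs and ts - runs[-1][-1][0] <= WINDOW:
            runs[-1].append((ts, ip, sig))  # extend the current cluster
        else:
            runs.append([(ts, ip, sig)])    # start a new cluster
    return clusters

c = cluster_by_ip([(0, "1.2.3.4", "ICMP PING"), (300, "1.2.3.4", "SCAN"),
                   (5000, "1.2.3.4", "SCAN")])
print({ip: len(runs) for ip, runs in c.items()})   # {'1.2.3.4': 2}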
Last-alert and last-packet analysis
  • Sophisticated attackers weren’t much interested in our honeypot – we usually kept it up-to-date with the latest patches.
  • So what did they see that convinced them to leave? Maybe we can “vaccinate” machines against sophisticated attackers.
  • Two approaches: Look at the last alert generated for an IP address, and look at the last nontrivial packet to an IP address.
Packet manipulations
  • When we were reinstalling some software, an attacker got into the honeypot before we could upgrade its patches.
  • So we played with this attacker.
  • We used Snort Inline to manipulate the attacker’s packets in several ways: dropping them or changing bits inside them.
  • This caused some adjustment by the attacker.
  • Hence pretending to have the latest operating-system version and the latest patches is a good way to prevent attacks.
Experimental responses to deception levels

Level 0: None; Level 1: Steady-state; Level 2: Packet dropping; Level 3: Packet manipulation. [The measured responses appeared as a chart on the original slide.]

Conclusions
  • Deception is a relatively new way to defend computer systems.
  • There are many forms of deception to try.
  • Attackers aren’t expecting deception, so you can often easily fool them.
  • You can also get smart attackers to overreact to possible deception – a rare countermeasure that works best against smart attackers.
  • We are running experiments to confirm our methods against real attackers – but this is difficult.