
Automatically Generating Models for Botnet Detection



  1. 14th European Symposium on Research in Computer Security (ESORICS), 2009 Automatically Generating Models for Botnet Detection Peter Wurzinger, Leyla Bilge, Thorsten Holz, Jan Goebel, Christopher Kruegel, Engin Kirda Vienna University of Technology; Institute Eurecom, Sophia Antipolis, France; University of Mannheim, Germany; University of California, Santa Barbara

  2. Outline • Introduction • System Overview • Detection Models • Model Generation • Analyzing Bot Activity • Locating Bot Responses • Extracting Model Generation Data • Generating Detection Models • Command Model Generation • Response Model Generation • Mapping Models into Bro Signatures • Evaluation • Related Work • Limitations • Conclusions

  3. Introduction: abstract • The approach aims to detect individual bots • independent of any prior information about the C&C channels or propagation vectors • without requiring multiple infections for correlation • targets the characteristic fact that every bot receives commands from the botmaster, to which it responds in a specific way • detection models are generated automatically from network traffic traces recorded from actual bot instances

  4. Introduction: botnet • A bot is a type of malware that is written with the intent of compromising and taking control of hosts on the Internet. • It is typically installed on the victim’s computer by either • exploiting a software vulnerability in the web browser or the operating system, or • by using social engineering techniques to trick the victim into installing the bot herself. • The distinguishing characteristic of a bot is its ability to establish a command and control (C&C) channel that allows an attacker to remotely control or update a compromised machine.

  5. Introduction: past approaches • host-based analysis techniques • such as anti-virus (AV) software • network-level • vertical correlation techniques • These techniques focus on the detection of individual bots • by checking for traffic patterns or content that reveal C&C traffic or malicious, bot-related activities • they require prior knowledge about the C&C channels and the propagation vectors of the bots that they can detect • horizontal correlation approaches • analyze the network traffic for patterns that indicate that two or more hosts behave similarly • when a command is sent to several members of a botnet, the bots react in the same fashion • cannot detect individual bots; at least two bots are needed

  6. Introduction: goal • The authors propose a detection approach to identify single, bot-infected machines • without any prior knowledge about C&C mechanisms • or the way in which a bot propagates • Their detection model leverages the characteristic behavior of a bot • the bot receives commands from the botmaster, and • carries out some actions in response to these commands • Similar to previous work, they assume that the command and response activity results in some kind of network communication that can be observed.

  7. Introduction: basic idea • Dynamic approach • by launching a bot in a controlled environment and recording its network activity (traces), we can observe the commands that this bot receives as well as the corresponding responses. • Points • They present techniques that allow identifying points in a network trace that likely correlate with response activity. • Response and Command • They analyze the traffic that precedes this response to find the corresponding command. • Model • generate detection models that can be deployed to scan network traffic for similar activity • Capture the command and response activity • Automated mechanism to generate bot detection models

  8. System Overview: bot collection • The input to the system is a collection of bot binaries from the wild • collected with honeynet systems such as Nepenthes [2], • or through Anubis [5], a malware collection and analysis platform • The output of the system is a number of models that can be used to detect instances of different bot families.

  9. System Overview: Detection Models • In their system, a detection model has two states. • (1) The first state of the model specifies signs in the network traffic that indicate that a particular bot command is sent. • For example, such a sign could be the occurrence of the string .advscan, which is a frequently-used command to instruct an IRC bot to start scanning. • Once such a command is identified, the detection model is switched into the second state. • (2) This second state specifies the signs that represent a particular bot response. • Such a sign could be the fact that the number of new connections opened by a host is above a certain threshold, which indicates that a scan is in progress. • When a model is in the second state and the system identifies activity that matches the specified behavior, a bot infection is reported. • (X) If no activity is found that matches the specification of the second state for a certain time period, the model is switched back to the first state.
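To make the two-state mechanism concrete, below is a minimal Python sketch of such a per-host detection model. It is not the authors' implementation: the class name, the regex-based command check, the callable response check, and the 50-second timeout are illustrative assumptions based on the description above.

```python
import re
import time

# Minimal sketch of the per-host two-state detection model described above.
# Class name, regex command check, response_check callable, and the timeout
# value are illustrative assumptions, not the authors' implementation.
class DetectionModel:
    def __init__(self, command_regex, response_check, timeout=50):
        self.command_regex = re.compile(command_regex)
        self.response_check = response_check  # callable: behavior profile -> bool
        self.timeout = timeout                # seconds to wait for a matching response
        self.state = 1                        # 1 = waiting for command, 2 = waiting for response
        self.armed_at = None

    def on_payload(self, host, payload):
        """State 1: watch traffic sent to `host` for the bot command tokens."""
        if self.state == 1 and self.command_regex.search(payload):
            self.state = 2
            self.armed_at = time.time()

    def on_behavior(self, host, profile):
        """State 2: check whether `host` now shows the expected response behavior."""
        if self.state != 2:
            return False
        if time.time() - self.armed_at > self.timeout:
            self.state = 1  # no matching response in time: fall back to state 1
            return False
        if self.response_check(profile):
            print(f"bot infection reported for host {host}")
            self.state = 1
            return True
        return False


# Hypothetical usage: one model instance per monitored host, e.g.
# model = DetectionModel(r"\.advscan", lambda p: p["new_connections"] > 20)
```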

  10. System Overview: correlation • (logical) model instance per host • When a command is found to be sent to host x, only the model for this host is switched to the second state. • Therefore, there is no correlation between the activity of different hosts. • For example, when a scan command is sent to host x and, immediately thereafter, host y initiates a scan, no alert is raised for host y. • Two states: content-based (command) and network-based (response).

  11. System Overview: model generation • Finding responses • They first look for activity that indicates that a response has occurred (rather than identifying the command first). • This is because a response launched by a bot is often more visible in the network trace than an incoming command. • Once a bot response is identified, it is characterized by a behavior profile. (Sec 3) • Finding commands • The network traffic before this point must contain the command that caused the response. • Before each point at which a significant change in traffic behavior is detected, a snippet, a small section of the network trace, is extracted. • Typically, different commands lead to different responses. • Traffic snippets that lead to similar responses are clustered together, under the assumption that they contain the same command. • Once clusters of related network snippets have been identified, they are searched for sets of common (string) tokens. (Sec 4)

  12. System Overview: model generation • Putting it all together. • The extracted tokens can be directly used to represent the bot command in the first state of the detection model. • For the second state, the authors leverage the network behavior profiles that characterize bot response activity. • A bot detection model thus consists of a set of tokens that represent the bot command, followed by a network-level description of the expected response.

  13. Analyzing Bot Activity: Locating Bot Responses • Responses are located by checking for sudden changes in the network traffic • most current bots produce such observable changes when they respond to a command • This is cast as a change point detection (CPD) problem • CPD algorithms operate on time series, that is, on chronologically ordered sequences of data values. • Their goal is to find those points in time at which the data values change abruptly.

  14. Analyzing Bot Activity: CPD & CUSUM • The authors characterize the bot's behavior using a set of network features computed during a given time interval • traffic profile (a normalized vector) • The distance d(t) between traffic profile vectors is computed as the Euclidean distance. • CUSUM (cumulative sum) is used to solve the CPD problem • The ordered sequence of values d(t) forms the input to the next step (if d(t) is sufficiently large and a local maximum, a change point is reported at time interval t).
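As a rough illustration of the profile and distance computation (not the authors' exact feature set), the following Python sketch normalizes per-interval feature vectors and computes d(t) as the Euclidean distance to a reference profile; the choice of features and of the reference profile are assumptions.

```python
import numpy as np

# Sketch of the per-interval traffic profile and distance series d(t).
# The feature set and the reference profile are assumptions; the paper
# defines its own network-level features.
def traffic_profile(features):
    """Normalize a feature vector (e.g., packet/connection counts) to unit length."""
    v = np.asarray(features, dtype=float)
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def distance_series(profiles, reference):
    """d(t): Euclidean distance between the profile of interval t and a reference profile."""
    ref = traffic_profile(reference)
    return [float(np.linalg.norm(traffic_profile(p) - ref)) for p in profiles]
```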

  15. Analyzing Bot Activity: CUSUM parameters • local_max • an upper bound for the normal, expected deviation of the present traffic from past traffic • cusum_max • For each time interval t, CUSUM adds d(t) - local_max to a cumulative sum S. • cusum_max determines the upper bound that S may reach before a change point is reported. • If consecutive time intervals cause S to exceed cusum_max, the time interval is adjusted. • Their optimized interval length t is 50 seconds.
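A small Python sketch of the CUSUM accumulation with the two parameters described above; flooring the sum at zero and resetting it after a reported change point are common CUSUM conventions and assumptions here, not details taken from the slides.

```python
# Sketch of CUSUM-based change point detection over the distance series d(t),
# using the parameters local_max and cusum_max described above.
def cusum_change_points(d, local_max, cusum_max):
    """Return the interval indices t at which a change point is reported."""
    change_points = []
    S = 0.0
    for t, dt in enumerate(d):
        # Accumulate only the deviation that exceeds the expected fluctuation.
        S = max(0.0, S + (dt - local_max))
        if S > cusum_max:
            change_points.append(t)
            S = 0.0  # assumed: reset after reporting a change point
    return change_points
```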

  16. Extracting Model Generation Data • After the change points are found … • First, extract the snippet of the traffic (which contains the command) • the time interval containing the change point • + the next 10 seconds of the following interval (in case the change point occurs close to the interval boundary) • + the last 30 seconds of the previous interval to cover the command-response delay • Second, extract the information required for creating a model of the response behavior • behavior profile (captures the network-level activity)
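The window arithmetic can be illustrated as follows; the packet representation as (timestamp, payload) pairs and the helper names are assumptions, while the 50-second interval and the 10/30-second extensions come from the slide above.

```python
# Sketch of the snippet extraction window: the interval containing the change
# point, plus 10 s of the following interval and the last 30 s of the
# preceding interval. Packet representation is an assumption.
INTERVAL = 50  # seconds, the authors' optimized interval length

def snippet_window(change_interval):
    """Return (start, end) timestamps of the traffic snippet for a change point."""
    start = change_interval * INTERVAL - 30          # last 30 s of the previous interval
    end = (change_interval + 1) * INTERVAL + 10      # first 10 s of the following interval
    return max(0, start), end

def extract_snippet(packets, change_interval):
    """Select packets, given as (timestamp, payload) pairs, that fall into the window."""
    start, end = snippet_window(change_interval)
    return [p for p in packets if start <= p[0] < end]
```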

  17. Generating Detection Models • Given • a set of network traffic snippets • with their associated response behavior profiles • A two-phase clustering is first applied: • (1) arrange snippets such that those that likely contain the same command are put together in one cluster • (2) group the contents of the snippets in each cluster such that elements in a group share commonalities that can be leveraged by the token extraction algorithm.

  18. Generating Detection Models • Observations • The network traffic of a bot responding to a certain command will look similar to the traffic generated by this bot executing the same command at some later time. • The same bot executing a different command will generate traffic that looks different. • Behavior clustering • To identify behavior clusters, they perform hierarchical clustering based on the normalized response behavior profiles. • Then (Sec 4.1) commands are extracted from the snippets within the same cluster • (Sec 4.2) the behavior profiles are used to model the response activity.
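A possible sketch of the behavior clustering step using SciPy's hierarchical clustering; the linkage method and the distance threshold are illustrative assumptions, since the slide only states that hierarchical clustering over normalized response behavior profiles is used.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Sketch of behavior clustering: hierarchical clustering over normalized
# response behavior profiles. Linkage method and threshold are assumptions.
def cluster_behaviors(profiles, threshold=0.5):
    """Group behavior profile vectors; returns one cluster label per profile."""
    X = np.asarray(profiles, dtype=float)
    X = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)  # row-normalize
    Z = linkage(X, method="average", metric="euclidean")
    return fcluster(Z, t=threshold, criterion="distance")
```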

  19. Command Model Generation • Use a signature generation technique that produces token sequences. • To find common tokens, they use the longest common subsequence algorithm (based on suffix arrays). • Note: different commands may lead to similar responses, which may therefore end up in the same behavior cluster. • To handle this, they employ a standard complete-link, hierarchical clustering algorithm to group payloads that are similar before extracting tokens.
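For illustration, here is a simple quadratic dynamic-programming version of the longest common subsequence of two payloads; the authors use a suffix-array-based algorithm over many payloads and produce token sequences, so this sketch only conveys the basic idea.

```python
# Simple DP illustration of the longest common subsequence of two payloads.
# The authors' implementation is suffix-array based and works on clusters of
# payloads; this is only a minimal, two-payload sketch of the idea.
def longest_common_subsequence(a: bytes, b: bytes) -> bytes:
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            if a[i] == b[j]:
                dp[i][j] = dp[i + 1][j + 1] + 1
            else:
                dp[i][j] = max(dp[i + 1][j], dp[i][j + 1])
    # Reconstruct one longest common subsequence.
    out, i, j = bytearray(), 0, 0
    while i < n and j < m:
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif dp[i + 1][j] >= dp[i][j + 1]:
            i += 1
        else:
            j += 1
    return bytes(out)
```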

  20. Response Model Generation • computed as the element-wise average of the (vectors of the) individual behavior profiles • The result is another behavior profile vector that captures the aggregate of the behaviors combined in the respective behavior cluster. • Example • They define a threshold of 1,000 for the number of UDP packets within one time interval (50 seconds), 100 for HTTP packets, 10 for SMTP packets, and 20 for the number of distinct IPs. • When a response profile exceeds none of these thresholds, the corresponding behavior cluster (and its token sequences) is not used to generate a detection model.
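A minimal sketch of the aggregation and threshold check described above; the feature names, their order in the profile vector, and the dictionary layout are assumptions, while the threshold values are taken from the slide.

```python
import numpy as np

# Sketch of response model generation: element-wise averaging of the behavior
# profiles in a cluster, then the threshold filter. Feature names/order are
# assumptions; threshold values are from the slide (per 50-second interval).
THRESHOLDS = {"udp_packets": 1000, "http_packets": 100,
              "smtp_packets": 10, "distinct_ips": 20}
FEATURES = list(THRESHOLDS)

def aggregate_profile(profiles):
    """Element-wise average of the behavior profile vectors in one cluster."""
    return np.mean(np.asarray(profiles, dtype=float), axis=0)

def keep_cluster(aggregate):
    """A cluster is kept only if its aggregate profile exceeds at least one threshold."""
    return any(aggregate[i] > THRESHOLDS[f] for i, f in enumerate(FEATURES))
```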

  21. Mapping Models into Bro Signatures • encode the model's set of token sequences as well as its behavior profile • (state 1) each token sequence is translated into a regular expression operator • (state 2) if a token sequence is matched, Bro starts to record the traffic of the host that triggered the signature (for 50 seconds). • When the observed traffic exceeds the value stored in the response profile for at least one of these four features, it is considered a match.

  22. Evaluation • 416 traces of different bots + 30 traces of the well-known bot “Storm Worm” • Manual analysis: 16 IRC bot families, 1 HTTP, 1 P2P

  23. Example • [Figure: an example bot command annotated with the IRC protocol headers used to transmit the message, the part instructing the bot to start scanning, the scanning parameters, and the resulting Snort signature]

  24. Detection Capability • Total 446 network traces • training set: 25% of one bot family’s traces • test set: remaining ones • detection rate: 88% • The remaining 12% did not even trigger state 1 (token). • BotHunter • manually-developed signatures • 69%

  25. Real-World Deployment: False Positives • Aachen • 130 token sequence matches, but none reached the second state. • Pure token matching might cause false positives. • Greece • 11 hosts were responsible for 60 alerts. • Manual checking showed that all were false positives. The number of alerts per day is acceptable.

  26. Related Work • Network intrusion detection • Bro, Snort, and approaches that model normal network traffic • Signature generation • Early Bird, Autograph, Polygraph, and Hamsa • Botnet analysis and defense • horizontal correlation • bots receiving the same commands and reacting in lockstep • vertical correlation • BotHunter, Snort

  27. Limitations • Delayed responses: a bot could wait before responding, so that the response falls outside the analysis time window. • A possible way of handling this evasion attempt is to randomize the time window, making it harder for the adversary to select an appropriate delay. • Encrypted traffic • makes it difficult to recognize commands

  28. Comments • It is a signature generation paper based on a set of bot traces. • token (content-based) • traffic profile (network-based) • Command + network activity is an interesting perspective (vs. C&C). • Token generation and profile extraction are not the main contributions. • Only false positives can be examined (the captured traffic is assumed to be clean). • Produces Bro signatures.
