
Joao B. D. Cabrera and Raman K. Mehra Scientific Systems Company, Inc.

SSCI #1301 DARPA OASIS PI MEETING – Norfolk, VA - Feb 13-16, 2001 Intelligent Active Profiling for Detection and Intent Inference of Insider Threat in Information Systems.


Presentation Transcript


  1. SSCI #1301 DARPA OASIS PI MEETING – Norfolk, VA - Feb 13-16, 2001. Intelligent Active Profiling for Detection and Intent Inference of Insider Threat in Information Systems. Joao B. D. Cabrera and Raman K. Mehra, Scientific Systems Company, Inc.; Lundy Lewis, Aprisma Inc.; Wenke Lee, North Carolina State Univ. SBIR Phase I, Topic No. SB002-039, Contract No. DAAH01-00-C-R027

  2. Motivation: Network Management Systems and Security * A large infrastructure is already in place to perform Network, System, and Applications Management. * Building blocks: SNMP Management, RMON – standardized tools, widely utilized. * COTS NMSs are available that automate several functions: Aprisma’s SPECTRUM Platform, HP’s OpenView, … * Vendors offer NMS solutions integrated with Firewalls, IDSs, Security Scanners, PKI Infrastructure, Anti-Virus Software, etc.

  3. Motivation (cont.) * Management covers Fault, Configuration, Accounting, Performance, and Security (FCAPS). * NMSs provide a powerful capability for data collection at several levels and time scales. * Much emphasis has gone to Fault, Configuration, and Performance – several tools are already available. * The utilization of NMSs for Security is not fully realized … * Faults, Performance, and Security cannot be studied separately …

  4. Motivation (cont.) * A security breach may deteriorate performance and cause a fault … * A fault may render the system vulnerable to attacks … * Deterioration of performance may make users impatient, and more willing to violate security policies to get their jobs done.

  5. Motivation (cont.) * Many of the tools designed for Fault Management and Security Monitoring can be adapted for Security Management. * An alarm infrastructure is already available – tools to monitor Information Systems and set alarms. * The SSCI/Aprisma/NCSU team has investigated the use of COTS NMSs for the detection of precursors of Distributed Denial-of-Service attacks; precursors were found at the level of MIB traffic variables – paper available upon request.

  6. Objective: Detecting and Responding to Insider Threats * Objective: Investigate the application of NMSs for the monitoring of, detection of, and response to Security Violations carried out by Insiders. * Misuse/Intrusion Tolerance is achieved by having an adequate and timely response. * Technology: Statistical Pattern Recognition and AI for the design of detectors and classifiers; Network Management Systems for data collection and response coordination. * Approach: Utilize the Benchmark Problem for proof-of-concept studies; examine the applicability of NMSs and peripherals for response.

  7. Towards Adequate and Timely Response • Adequate: • High Accuracy – Few False Alarms, Lots of Detections. • Distinguish among attacks – Different attacks elicit different types of response. • Distinguish faults from attacks. • Timely: • Detect the Attack before it is too late to respond.

  8. Question 1: What threats/attacks is your project considering? * Insider Attacks: Password stealing, unauthorized database access, email snooping, etc. * For proof-of-concept purposes, we will be investigating the Benchmark Problem of System Calls made by Privileged Processes. * However, the technologies and tools we are developing are applicable to any situation in which the observables are sequences of possibly correlated categorical variables.

  9. Question 2: What assumptions does your project make? 1. Data sets corresponding to normal, malicious and faulty behavior are available for the construction and testing of detection schemes – Training Stage and Testing Stage. 2. The observables for normal, malicious and faulty behavior are sequences of categorical variables. 3. Patterns of malicious activity exist, are detectable, and are learnable by special-purpose algorithms – to investigate. 4. If 3 holds, there is time to take preventive action when malicious activity is detected – to investigate: we may need to redesign the Alarming Infrastructure to enable timely response.

  10. Question 3: What policies can your project enforce? * If the detection system signals the presence of malicious activity, a response will be triggered. * For the specific case of the Benchmark Problem, typical responses would be to kill the process, or delay its execution until it times out. * If the Intent Inference capability is achieved, the response will be suited to the type and gravity of the attack.

  11. Benchmark Problem: Detect malicious activity by monitoring System Calls made by Privileged Processes in Unix * Originally suggested by C. Ko, G. Fink, and K. Levitt – 1994. * Extensively studied by the UNM Group (S. Forrest and others), starting with “A Sense of Self for Unix Processes” – 1996. * Programs: sendmail, lpr, ls, ftp, finger … * Data sets are available for downloading. * These data sets can be used for proof-of-concept studies in the Phase I effort – there is data corresponding to faults and data corresponding to multiple types of malicious activities.

  12. Benchmark Problem (cont.) * Process: The sequence of calls – Example: open, read, mmap, mmap, getrlimit, … * Problem: Given data sets corresponding to normal processes and abnormal processes, produce a scheme to distinguish normal from abnormal. * Sequences of correlated categorical variables – representative of other problems in computer security – user profiling, alarm correlations, etc. * There are 182 possible system calls in SunOS 4.1.x … * “Typical” sendmail processes make about 1,000 to 50,000 calls …

  13. Benchmark Problem (cont.) * UNM Finding: A relatively small dictionary of short sequences (1318 sequences of length 10 for sendmail) provides a very good characterization of normality for several Unix processes. * The dictionary is constructed using a Training Set of Normal behavior. * Sequences not belonging to this dictionary are called abnormal sequences. * Intrusions are detected if a process contains “too many” abnormal sequences in a given interval – the Locality Frame.

  14. Benchmark Problem (cont.) * A process is flagged as containing an intrusion if the number of abnormal sequences inside at least one Locality Frame is above a threshold.
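The dictionary-and-locality-frame scheme of slides 13-14 can be sketched in a few lines. This is an illustrative reconstruction, not the UNM code; the window length, locality-frame size, and threshold are placeholder parameters (UNM used length-10 windows for sendmail).

```python
def windows(trace, n=10):
    """All contiguous length-n subsequences of a system-call trace."""
    return [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]

def build_dictionary(normal_traces, n=10):
    """Dictionary of normal short sequences, built from a training set
    of traces known to be normal."""
    normal = set()
    for trace in normal_traces:
        normal.update(windows(trace, n))
    return normal

def is_intrusion(trace, normal, n=10, frame=20, threshold=4):
    """Flag the process if, inside at least one locality frame, the
    count of abnormal windows (not in the dictionary) exceeds the
    threshold."""
    flags = [w not in normal for w in windows(trace, n)]
    return any(sum(flags[i:i + frame]) > threshold
               for i in range(max(1, len(flags) - frame + 1)))
```

A trace drawn from the training data produces no abnormal windows and is not flagged; a trace full of unseen sequences concentrates abnormal windows in one frame and is flagged.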

  15. Benchmark Problem (cont.) • * The UNM approach is very simple; however, other methods: • Data Mining (RIPPER) • Hidden Markov Models • lead to roughly the same results. • * Recent work by Lee and others – 2001 IEEE Symposium on Security and Privacy – has associated this finding with the regularity of normal processes. • * The first n-1 calls in a sequence determine the last call with high accuracy – Normal processes are highly predictable.
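The predictability claim above (the first n-1 calls determine the last) can be checked with a simple lookup-table predictor, a minimal sketch assuming traces are lists of call names; the sequence length n is a placeholder, not a value from the cited work.

```python
from collections import Counter, defaultdict

def train_predictor(traces, n=4):
    """Map each (n-1)-call prefix to its most frequent next call."""
    table = defaultdict(Counter)
    for t in traces:
        for i in range(len(t) - n + 1):
            prefix, last = tuple(t[i:i + n - 1]), t[i + n - 1]
            table[prefix][last] += 1
    return {p: c.most_common(1)[0][0] for p, c in table.items()}

def prediction_accuracy(traces, model, n=4):
    """Fraction of positions where the prefix correctly predicts the
    last call -- high values indicate a highly regular process."""
    hits = total = 0
    for t in traces:
        for i in range(len(t) - n + 1):
            total += 1
            hits += model.get(tuple(t[i:i + n - 1])) == t[i + n - 1]
    return hits / total
```

On a perfectly regular (periodic) trace the predictor is exact; the regularity of real normal processes is what makes accuracies approach this ideal.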

  16. Benchmark Problem (cont.) * Additionally, experiments by Lee and others – 2001 IEEE Symposium on Security and Privacy – with the 1999 DARPA Intrusion Detection Dataset have shown that classification accuracy is not improved by adding other features … * Main Message: These short sequences are the “right” patterns to look for when constructing classifiers for these types of programs. * It is still a matter of investigation whether this same approach works for other types of programs.

  17. Benchmark Problem (cont.) • * There is still a lot of room for improvement and investigation – Important issues for having an Adequate and Timely Response: • Fusion of classifiers – Can accuracy be improved by fusing multiple classifiers? • Intent Inference – Can we distinguish among attacks? • Distinguishing attacks from faults.

  18. Fusion of Classifiers * Combine several classifiers or anomaly detectors, designed using different methods and/or different features. * Features: Anomaly Counts corresponding to different sequence lengths and different Locality Frame sizes. * Each individual anomaly detector announces a GOF – Goodness of Fit – to the normal data. * These GOFs need to be combined in some way – Simplest solution: Voting Scheme. We utilize a Probabilistic Approach which was shown to be successful for Automatic Target Recognition.
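The combination of per-detector GOF scores can be illustrated with a weighted log-odds fusion, shown next to the voting scheme the slide names as the simplest alternative. This is a sketch under the assumption that detectors are independent and each GOF is a probability in (0, 1); it is not the specific ATR fusion method the slide refers to.

```python
import math

def fuse_gof(gofs, weights=None):
    """Weighted log-odds fusion of per-detector goodness-of-fit scores.
    Each gof is treated as P(trace is normal) from one detector;
    independence of detectors is an illustrative simplification."""
    weights = weights or [1.0] * len(gofs)
    log_odds = sum(w * math.log(g / (1.0 - g))
                   for w, g in zip(weights, gofs))
    return 1.0 / (1.0 + math.exp(-log_odds))  # fused P(normal)

def majority_vote(gofs, cutoff=0.5):
    """Simplest alternative: each detector casts one 'normal' vote."""
    votes = sum(g >= cutoff for g in gofs)
    return votes > len(gofs) / 2
```

Note that log-odds fusion sharpens agreement: two detectors each 90% confident fuse to a probability well above 0.9, whereas voting only counts heads.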

  19. Fusion of Classifiers (cont.)

  20. Intent Inference * We pose the problem of Intent Inference as distinguishing between types of attacks using the sequences of system calls. * From the statistical point of view, this is a classification problem. The main issue is to determine if there are features that cluster the different types of attacks.
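Posed as a classification problem, intent inference could be prototyped with something as simple as a nearest-centroid classifier over per-trace feature vectors (e.g., anomaly counts for several window lengths). The features and class labels below are illustrative assumptions, not results from the project.

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    return [sum(xs) / len(xs) for xs in zip(*vectors)]

def train(examples):
    """examples: {class_label: [feature vectors]} -- one centroid per
    class (attack type, or 'fault' to separate faults from attacks)."""
    return {label: centroid(vs) for label, vs in examples.items()}

def classify(x, centroids):
    """Assign x to the class with the nearest centroid."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda label: dist(x, centroids[label]))
```

Whether such features actually cluster the attack types is exactly the open question the slide raises; the sketch only shows the shape of the classification step.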

  21. Distinguishing Attacks from Faults * It is conceptually the same problem as Intent Inference. * Faults represent one class, while Attacks represent another class.

  22. Summary and Conclusions * We plan to investigate the application of NMSs for the Monitoring, Detection and Response of Security Violations carried out by Insiders. * Intrusion/Misuse Tolerance is achieved by having an adequate and timely response. * Statistical Pattern Recognition and AI will be used for the design of anomaly detectors and classifiers. * We will investigate schemes capable of discriminating among types of attacks and distinguishing faults from attacks. * Classifier Fusion will be investigated, as a tool for increasing accuracy.

  23. SPECTRUM Security Manager Lundy Lewis Director of Research January 17, 2001

  24. Solution Functionality • Provides distributed multi-vendor management of Firewalls, Intrusion Detection Systems (IDSs), Security Scanners, PKI, Directories, Packet Sniffers, and Anti-Virus to create a cohesive security solution • Monitors information from security devices • Correlates all information • Executes commands such as setting filters in Firewalls, revoking an account or PKI certificate, etc., to protect the infrastructure

  25. Response to Intrusion: • SSM correlates security events • SSM automatically starts audit trail • SSM provides alarm notification through SPECTRUM • SSM provides probable cause/solution and can provide automated responses • SSM provides decision support

  26. SSM Architecture • Acts as a knowledge hub • Accepts information from any source, using XML as the standard communication format • Adopts a “plug-in” approach that is the basis for extensibility • Selection of components based on specific needs

  27. Contact • Joao Cabrera, Scientific Systems Company, 500 W. Cummings Park, Suite 3000, Woburn, MA 01801 – cabrera@ssci.com • Lundy Lewis, Aprisma Inc., 486 Amherst Street, Nashua, NH 03063 – lewis@aprisma.com
