An Analysis of the 1999 DARPA/Lincoln Laboratory Evaluation Data for Network Anomaly Detection
Presentation Transcript

  1. An Analysis of the 1999 DARPA/Lincoln Laboratory Evaluation Data for Network Anomaly Detection. Matthew V. Mahoney and Philip K. Chan

  2. Data Mining for Computer Security Workshop at ICDM 2003, Melbourne, FL, Nov 19, 2003

  3. Outline • DARPA/Lincoln Laboratory IDS evaluation (IDEVAL) • Analyze IDEVAL with respect to network anomaly detection • Propose a remedy for identified simulation artifacts • Measure effects on anomaly detection algorithms

  4. 1999 IDEVAL (testbed diagram): a simulated Internet connects through a router to the inside network; outside and inside sniffers capture traffic; 201 attacks target victim hosts running Solaris, SunOS, Linux, and NT; BSM audit logs and directory and file system dumps are also collected

  5. Importance of 1999 IDEVAL • Comprehensive • signature or anomaly • host or network • Widely used (KDD cup, etc.) • Produced at great effort • No comparable benchmarks are available • Scientific investigation • Reproducing results • Comparing methods

  6. 1999 IDEVAL Results (table): top 4 of 18 systems at 100 false alarms

  7. Partially Simulated Net Traffic • tcpdump records sniffed traffic on a testbed network • Attacks are “real”: mostly from publicly available scripts and programs • Normal user activity is synthesized from models of military users

  8. Related Work • IDEVAL critique (McHugh, 2000) focused mostly on the methodology of data generation and evaluation • It did not include a “low-level” analysis of the background traffic • Anomaly detection algorithms • Network-based: SPADE, ADAM, LERAD • Host-based: t-stide, instance-based

  9. Problem Statement • Does IDEVAL have simulation artifacts? • If so, can we “fix” IDEVAL? • Do simulation artifacts affect the evaluation of anomaly detection algorithms?

  10. Simulation Artifacts? • Comparing two data sets: • IDEVAL: Week 3 • FIT: 623 hours of traffic from a university departmental server • Look for features with significant differences

  11. Number of Unique Values and Percentage of Traffic (table comparing attributes in IDEVAL and FIT)

  12. Growth Rate in Feature Values (plot): number of values observed vs. time; the FIT curve continues to grow while the IDEVAL curve levels off
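
A minimal sketch of how such a growth curve can be computed from a packet trace. It assumes scapy for packet parsing; the choice of the IP TTL field and the file paths are illustrative, not the authors' original tooling.

```python
# Sketch: cumulative count of unique values of one packet attribute over
# time, to reproduce a growth curve like the one on this slide.
# Assumption: scapy is installed; the TTL field is just one example attribute.
from scapy.all import PcapReader, IP

def growth_curve(pcap_path):
    """Yield (timestamp, number of distinct TTL values seen so far)."""
    seen = set()
    with PcapReader(pcap_path) as reader:
        for pkt in reader:
            if IP in pkt:
                seen.add(pkt[IP].ttl)
                yield float(pkt.time), len(seen)

# Example (paths are hypothetical): compare IDEVAL week 3 against real traffic.
# for t, n in growth_curve("ideval_week3_inside.pcap"):
#     print(t, n)
```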

  13. Conditions for Simulation Artifacts • Are attributes easier to model in simulation (fewer values, distribution fixed over time)? • Yes (to be shown next). • Do simulated attacks have idiosyncratic differences in easily modeled attributes? • Not examined here

  14. Exploiting Simulation Artifacts • SAD – Simple Anomaly Detector • Examines only one byte of each inbound TCP SYN packet (e.g. TTL field) • Training: record which of 256 possible values occur at least once • Testing: any value never seen in training signals an attack (maximum 1 alarm per minute)
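
Since the slide specifies SAD completely, it can be sketched in a few lines. This sketch assumes scapy for packet parsing and uses the TTL field as the examined byte, per the slide's example; the home-network prefix and file names are placeholders.

```python
# SAD sketch: train on one byte (here, TTL) of each inbound TCP SYN packet,
# then alarm on any byte value never seen in training, throttled to at most
# one alarm per minute, as described on the slide.
from scapy.all import PcapReader, IP, TCP

def inbound_syn_ttls(pcap_path, home_net="172.16."):
    """Yield (timestamp, TTL) for inbound TCP SYN packets.
    The home-network prefix is an assumption for illustration."""
    with PcapReader(pcap_path) as reader:
        for pkt in reader:
            if IP in pkt and TCP in pkt and (pkt[TCP].flags & 0x02):  # SYN bit set
                if pkt[IP].dst.startswith(home_net):
                    yield float(pkt.time), pkt[IP].ttl

def train(pcap_path):
    """Record which of the 256 possible byte values occur at least once."""
    return {ttl for _, ttl in inbound_syn_ttls(pcap_path)}

def detect(pcap_path, seen, min_gap=60.0):
    """Flag values never seen in training, at most one alarm per minute."""
    alarms, last = [], -min_gap
    for t, ttl in inbound_syn_ttls(pcap_path):
        if ttl not in seen and t - last >= min_gap:
            alarms.append((t, ttl))
            last = t
    return alarms

# seen = train("week3_inside.pcap")             # attack-free training data
# alarms = detect("weeks4-5_inside.pcap", seen)
```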

  15. SAD IDEVAL Results • Train on inside sniffer week 3 (no attacks) • Test on weeks 4-5 (177 in-spec attacks) • SAD is competitive with the top 1999 results

  16. Suspicious Detections • Application-level attacks detected by low-level TCP anomalies (options, window size, header size) • Detections by anomalous TTL (126 or 253 in hostile traffic, 127 or 254 in normal traffic)

  17. Proposed Mitigation • Mix real background traffic into IDEVAL • Modify IDS or data so that real traffic cannot be modeled independently of IDEVAL traffic

  18. Mixing Procedure • Collect real traffic (preferably with similar protocols and traffic rate) • Adjust timestamps to 1999 (IDEVAL) and interleave packets chronologically • Map IP addresses of real local hosts to additional hosts on the LAN in IDEVAL (not necessary if higher-order bytes are not used in attributes) • Caveats: • No internal traffic between the IDEVAL hosts and the real hosts
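
One way to realize the timestamp shift, address remapping, and chronological interleave is sketched below with scapy. The time offset, file names, and address map are illustrative assumptions; the exact values depend on when the real traffic was captured.

```python
# Sketch of the mixing step: shift real-traffic timestamps back to 1999,
# remap real local addresses onto spare IDEVAL LAN addresses, then
# interleave both traces in timestamp order. All constants are assumptions.
from scapy.all import rdpcap, wrpcap, IP

ADDR_MAP = {"163.118.135.1": "172.16.114.201"}  # real host -> spare IDEVAL host

def shift_and_remap(packets, offset):
    for pkt in packets:
        pkt.time = float(pkt.time) + offset      # move timestamps into 1999
        if IP in pkt:
            ip = pkt[IP]
            ip.src = ADDR_MAP.get(ip.src, ip.src)
            ip.dst = ADDR_MAP.get(ip.dst, ip.dst)
            del ip.chksum                        # force checksum recomputation
    return packets

ideval = rdpcap("ideval_week3.pcap")
# Offset is a placeholder: roughly the shift from the real capture dates to 1999.
real = shift_and_remap(rdpcap("fit_traffic.pcap"), offset=-126_000_000)
mixed = sorted(list(ideval) + list(real), key=lambda p: p.time)
wrpcap("mixed_week3.pcap", mixed)
```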

  19. IDS/Data Modifications • Necessary to prevent independent modeling of IDEVAL • PHAD: no modifications needed • ALAD: remove destination IP as a conditional attribute • LERAD: verify rules do not distinguish IDEVAL from FIT • NETAD: remove IDEVAL telnet and FTP rules • SPADE: disguise FIT addresses as IDEVAL

  20. Evaluation Procedure • 5 network anomaly detectors on IDEVAL and mixed (IDEVAL+FIT) traffic • Training: Week 3 • Testing: Weeks 4 & 5 (177 “in-spec” attacks) • Evaluation criteria: • Number of detections with at most 10 false alarms per day • Percentage of “legitimate” detections (anomalies correspond to the nature of attacks)
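
As a concrete reading of these criteria, here is a minimal scoring sketch: alarms are ranked by score and counted against a total budget of 100 false alarms (10 per day over the ten test days). The tuple layouts and the 60-second matching window are our assumptions, not the official IDEVAL scoring code.

```python
# Scoring sketch: keep the highest-scoring alarms until the false-alarm
# budget is spent, and count how many of the 177 in-spec attacks are hit.
# Hypothetical data structures: alarms = [(score, time, target_ip), ...],
# attacks = [(name, time, target_ip), ...].
def count_detections(alarms, attacks, fa_budget=100, window=60.0):
    def matches(alarm, attack):
        _, a_time, a_ip = alarm
        _, k_time, k_ip = attack
        return a_ip == k_ip and abs(a_time - k_time) <= window

    detected, false_alarms = set(), 0
    for alarm in sorted(alarms, reverse=True):   # highest score first
        hits = [i for i, atk in enumerate(attacks) if matches(alarm, atk)]
        if hits:
            detected.update(hits)
        else:
            false_alarms += 1
            if false_alarms >= fa_budget:
                break
    return len(detected)
```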

  21. Criteria for Legitimate Detection • Anomalies correspond to the nature of attacks • Source address anomaly: attack must be on a password protected service (POP3, IMAP, SSH, etc.) • TCP/IP anomalies: attack on network or TCP/IP stack (not an application server) • U2R and Data attacks: not legitimate
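
The legitimacy rules above can be paraphrased as a small decision function. The category and service names below are our encoding of the slide, not the authors' exact rule set.

```python
# Sketch of the slide's legitimacy rules as a decision function.
# Category and service labels are illustrative paraphrases of the slide.
PASSWORD_SERVICES = {"pop3", "imap", "ssh"}  # assumed set, per the slide's "etc."

def is_legitimate(anomaly_type, attack):
    """attack: dict with 'category' (e.g. 'u2r', 'network') and 'service'."""
    if attack["category"] in {"u2r", "data"}:
        return False                      # U2R and Data attacks: never legitimate
    if anomaly_type == "source_address":
        return attack["service"] in PASSWORD_SERVICES
    if anomaly_type == "tcp_ip":
        # legitimate only for attacks on the network or TCP/IP stack,
        # not on an application server
        return attack["category"] in {"network", "tcp_ip_stack"}
    return False
```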

  22. Mixed Traffic: Fewer Detections, but More Are Legitimate (table): detections out of 177 at 100 false alarms, on IDEVAL alone vs. mixed traffic

  23. Concluding Remarks • Some IDEVAL attributes take values from small ranges that stop growing over time; IDEVAL lacks the “crud” of real traffic • Artifacts can be masked or removed by mixing in real traffic • Anomaly detection models trained on the mixed data achieved fewer detections, but a higher percentage of legitimate detections

  24. Limitations • Traffic injection requires careful analysis and possibly IDS modification to prevent independent modeling of the two sources • Mixed traffic becomes proprietary, so evaluations cannot be independently verified • Protocols have evolved since 1999 • Our results do not apply to signature detection • Our results may not apply to the remaining IDEVAL data (BSM, logs, file system)

  25. Future Work • We used only one data set of real traffic, from a single university; analyze headers in publicly available data sets as well • We analyzed features that affect the evaluated algorithms; examine more features for other anomaly detection algorithms

  26. Final Thoughts • Real data • Pros: Real behavior in real environment • Cons: Can’t be released because of privacy concerns (i.e., results can’t be reproduced or compared) • Simulated data • Pros: Can be released as benchmarks • Cons: Simulating real behavior correctly is very difficult • Mixed data • A way to bridge the gap

  27. Tough Questions from John & Josh?