  1. Proving It • CS 236 Advanced Computer Security • Peter Reiher • May 13, 2008

  2. Groups for This Week • No groups this week • No Thursday class • No reports due

  3. Outline • Evaluating security research • Tools and approaches • Some examples

  4. Evaluating Security Solutions • People have proposed many responses to: • Worms • DDoS • IP spoofing • Buffer overflows • Botnets • Many other types of attacks • How can we tell which ones work?

  5. Possible Approaches • Formal methods • Prove (literally) it works • Testing • Test thoroughly • Advertising • Claim it’s great very loudly

  6. Formal Methods • Great when they’re feasible • Challenges: • Often the problem is too complex for existing techniques • Demonstrating that the formal model actually corresponds to the real system

  7. Testing Approaches • Generally of two flavors: • Real system testing • Simulation • Each has its strengths and weaknesses • May not have much choice which you use • Dictated by problem characteristics • Sometimes need some of both

  8. Simulation • Allows high scale testing • More reproducible • Complete control of resources involved • Questions of fidelity • Often requires lots of supporting models • E.g., a model of Internet topology • Might be computationally expensive
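
Simulation’s strengths (scale, reproducibility, full control) show up even in a toy model. A minimal sketch, assuming a simple random-scanning worm with hypothetical parameters; the fixed seed is what makes a large run exactly repeatable:

```python
# Toy random-scanning worm simulation: illustrates scale and
# reproducibility, not any particular worm. All parameters hypothetical.
import random

random.seed(42)                 # fixed seed => exactly repeatable run
N = 100_000                     # simulated address space
SCANS_PER_TICK = 3              # scans each infected host makes per tick

infected = {0}                  # patient zero
for tick in range(15):
    for host in list(infected):
        for _ in range(SCANS_PER_TICK):
            infected.add(random.randrange(N))   # a hit infects instantly
    print(f"tick {tick:2d}: {len(infected):6d} hosts infected")
```

Running it twice gives identical curves; simulating 100,000 hosts costs seconds, where a real-system test at that scale would be infeasible.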

  9. Testing • Tests real code/hardware/configurations • Can leverage existing stuff • like the Internet • Reproducibility often challenging • Can interfere with others’ work • In security, often too dangerous • Often scale is limited

  10. Tools You Can Use • Testbeds • Traces • Models

  11. Testbeds • Emulab • Planetlab • Deter • Others

  12. Emulab • Large testbed located at University of Utah • Funded initially by NSF and DARPA • Designed to support experiments by researchers worldwide • Probably the first really successful Internet-wide testbed • http://www.emulab.net

  13. Basic Philosophy of Emulab • Provide large pool of machines to entire Internet community • Almost all testing will be done remotely • Almost all testing must be done without intervention by testbed admins • Handle the widest possible kinds of experiments and testing situations

  14. Basic Emulab Approach • Emulab indeed provides large numbers of machines • Around 450 total nodes • But also provides a rich, powerful testing environment • Completely configurable remotely • Designed for simultaneous sharing by many users

  15. Core Emulab Characteristics • Highly configurable • System software • Application software • Network topology and characteristics • Controllable, predictable, repeatable • Good guarantees of isolation from other experiments
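
Emulab experiments are normally described in ns-2-style scripts; the Python below is purely illustrative (not Emulab’s actual interface, and the image names are hypothetical), but it shows what “highly configurable” means in practice: you declare nodes, OS images, and shaped link properties, and the testbed instantiates them.

```python
# Illustrative only: Emulab's real interface is an ns-2-style script,
# not this Python API. Names and OS image identifiers are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Experiment:
    nodes: dict = field(default_factory=dict)   # node name -> OS image
    links: list = field(default_factory=list)

    def add_node(self, name, os_image):
        self.nodes[name] = os_image

    def add_link(self, a, b, bw_mbps, delay_ms, loss):
        # shaped link: bandwidth, latency, and loss are all configurable
        self.links.append((a, b, bw_mbps, delay_ms, loss))

exp = Experiment()
exp.add_node("client", "FEDORA-STD")            # hypothetical image name
exp.add_node("server", "FEDORA-STD")
exp.add_link("client", "server", bw_mbps=100, delay_ms=20, loss=0.01)
print(exp)
```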

  16. Planetlab • A testbed designed to test Internet services • Using nodes deployed widely around the Internet • And software to support safe and controlled sharing of the nodes • Run primarily by Princeton, Berkeley, and Washington • Funding seeded by NSF and DARPA • Strong Intel participation • Other industry involvement, as well • http://www.planet-lab.org

  17. Basic Planetlab Concept • Deploy testbed nodes at many locations throughout Internet • Standardized hardware and software • Allow those who deploy nodes to use the testbed facility • Provide virtual machines to each tester using a node • Allow long-running experiments • Or even semi-permanent services

  18. Planetlab Nodes • Hardware running the Planetlab software package • Which supports cheap virtual machines • Otherwise, provides a typical Linux environment • Pretty complete control of the virtual machine • But node-based mechanisms ensure fair and safe sharing of the hardware

  19. Planetlab Locations • [world map of deployed node sites] • Usually two machines per location • 854 nodes at 466 sites (as of 5/12/08)

  20. Planetlab Experiments • Usually run on many Planetlab nodes • By one controlling researcher • Collection of resources across all nodes supporting the experiment is called a Planetlab slice • A multimachine environment for the experiment • Also an organization for cooperating researchers to use • Services run in slices
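
Since a slice behaves like a login on every participating node, experiments are often driven by fanning a command out over SSH. A minimal sketch; the slice name and node hostnames below are hypothetical:

```python
# Fan a command out to a slice's virtual machines over SSH.
# The slice name and node hostnames are hypothetical.
import subprocess

SLICE = "ucla_cs236"
NODES = ["planetlab1.example.edu", "planetlab2.example.org"]

for node in NODES:
    # each node runs the slice's VM as a normal Linux login
    subprocess.run(["ssh", f"{SLICE}@{node}", "uptime"], timeout=30)
```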

  21. Deter • Some experiments are risky • In their potential to do unintended harm • Worm experiments are a classic example • Worms try to spread as far as possible • How sure are you that your testbed really constrains them? • Even one major Internet worm incident from an escaped experiment would be a disaster

  22. Confining Risky Experiments • That’s the point of the Deter testbed • Builds on functionality from Emulab • But adds extra precautions to keep bad stuff from escaping the testbed • Also includes a set of tools specifically useful for these kinds of experiments

  23. Why Do We Need More Isolation? • DDoS experiments have been run on Emulab • With no known problems • Why not just be careful? • Question is, how careful? • Especially if you’re running real malicious code • Do you really understand it as well as you think?

  24. What Is Deter For? • Security testing, especially of risky code • Worms • DDoS attacks • Botnets • Attacks on routing protocols • Other important element is network scale • Meant for problems of Internet scale • Or at least really big networks

  25. Status of Deter • Working testbed • Similar model to Emulab • Two clusters of nodes • At ISI and UC Berkeley • Connected via high speed link • Has over 300 nodes • http://www.isi.deterlab.net • http://www.isi.edu/deter gets you to a lot of information about the testbed • Funded by NSF and DHS

  26. Traces • Experiments require a workload • Traces are a realistic way to get it • Many kinds of traces are hard to gather for yourself • In some cases, traces are publicly available • Sometimes you can use those

  27. Some Useful Traces • NLANR packet header traces • CAIDA traces and data sets • U. of Oregon Routeviews traces • File system traces • Web traces • CRAWDAD wireless traces
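
Many public traces come in pcap or pcap-convertible formats, which makes them easy to turn into an experimental workload. A minimal sketch, assuming the third-party scapy package and a local trace file whose name here is hypothetical:

```python
# Read packet headers from a trace and summarize them.
# Requires the third-party `scapy` package; the filename is hypothetical.
from scapy.all import rdpcap

packets = rdpcap("trace.pcap")      # e.g., an anonymized header trace
for pkt in packets[:10]:
    if pkt.haslayer("IP"):
        ip = pkt["IP"]
        print(ip.src, "->", ip.dst, f"{len(pkt)} bytes")
```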

  28. Useful Experimental Models • In many cases, we can’t test under real conditions • Typically we mimic real conditions by using models • Workload models • Network topology models • Models of other experimental conditions • Useful models already exist for many things • Often widely accepted as valid within certain research communities • Usually better to use them than to create your own

  29. Some Important Model Categories • Network topology models • Network traffic models

  30. Network Topology Models • Many experiments nowadays investigate network/distributed systems behavior • They need a realistic network to test the system • Usually embedded in testbed hardware • Where do you get that from? • In some cases, it’s obvious or you have a map of a suitable network • In other cases, more challenging

  31. Some Popular Topology Generators • GT-ITM • Supports various ways to randomly generate network graphs • BRITE • Parameterizable network generation tool • Tool of choice for Emulab • INET • Topology generator specifically intended to produce Internet-like graphs
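
Generators like GT-ITM build on Waxman’s random-graph model: nodes are scattered in a plane and edge probability falls off with distance. A minimal sketch of the same idea, assuming the third-party networkx package rather than any of the tools above:

```python
# Waxman-style random topology: nodes in a unit square, edge probability
# decays with distance. Uses `networkx`, not GT-ITM/BRITE/INET themselves.
import networkx as nx

g = nx.waxman_graph(100, beta=0.4, alpha=0.1, seed=1)
print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
```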

  32. Network Traffic Models • Frequently necessary to feed network traffic into an experiment • Could use a trace • But sometimes better to use a generator • The generator needs a model to tell it how to generate traffic • What kind of model?

  33. Different Network Traffic Model Approaches • Trace analysis • Derive properties from traces of network behavior • E.g., Harpoon and Swing • Structural models • Pretend you’re running an application • Generate traffic as it would do • E.g., Netspec
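
Either way, the model ultimately boils down to a few distributions the generator samples from. As an illustration only (parameter values are hypothetical, not taken from Harpoon, Swing, or Netspec), here is a tiny flow generator with Poisson arrivals and heavy-tailed sizes:

```python
# Tiny synthetic flow generator: Poisson arrivals, Pareto flow sizes.
# Parameter values are hypothetical, not from any published model.
import random

random.seed(7)
ARRIVAL_RATE = 10.0             # mean flows per second
ALPHA, MIN_BYTES = 1.2, 1_000   # Pareto shape, minimum flow size

t = 0.0
for _ in range(5):
    t += random.expovariate(ARRIVAL_RATE)       # exponential gaps
    size = int(MIN_BYTES * random.paretovariate(ALPHA))
    print(f"t={t:.3f}s  flow of {size} bytes")
```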

  34. Some Examples • How would you verify . . . • Data tethers? • Infamy? • Onion routing?

  35. Data Tethers • [diagram: file A is removed from the laptop before the laptop leaves the protected environment, so if the laptop is stolen, file A isn’t there]

  36. Basic Data Tethers Operations • Tie policies to pieces of data • E.g., “file X cannot leave the office” • Observe environmental conditions • E.g., “leaving the office” • Apply policies to remove files when necessary
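
In sketch form, the enforcement loop looks like the following; this is not the actual Data Tethers implementation, and the policy store and environment monitor are hypothetical stubs:

```python
# Sketch of data-tethers enforcement: not the real system.
# The policy store and location sensing are hypothetical stubs.
import os

POLICIES = {"/work/file_x.doc": "office"}   # file -> allowed location

def current_location():
    # stand-in for real environment sensing (network, GPS, docking state)
    return "office"

def enforce():
    for path, allowed in POLICIES.items():
        if current_location() != allowed and os.path.exists(path):
            os.remove(path)   # policy violated: remove the tethered file

enforce()
```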

  37. So, How Do We Evaluate Data Tethers? • ?

  38. Infamy • Handles botnets by marking traffic from known bots • Lives “somewhere in the network” • Gets reliable list of bot addresses • Marks all packets from those addresses • Destination hosts/border routers can do what they want with marked packets
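
A minimal sketch of the marking step; the packet representation and the receiving end’s policy here are hypothetical, and the paper’s actual mechanism may differ:

```python
# Sketch of Infamy-style marking: not the actual implementation.
# Packets are modeled as dicts; addresses are taken from slide 39.
KNOWN_BOTS = {"1.2.3.4"}            # reliable list of bot addresses

def mark(packet):
    """Set a mark on packets whose source is a known bot."""
    packet["marked"] = packet["src"] in KNOWN_BOTS
    return packet

def border_router(packet):
    # receivers choose their own policy; deprioritizing is one option
    return "low-priority" if packet["marked"] else "normal"

print(border_router(mark({"src": "1.2.3.4", "dst": "1.36.7.125"})))
```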

  39. Infamy in Operation • [diagram: packets carrying bot source address 1.2.3.4 are marked in the network on their way to hosts 1.36.7.125 and 1.133.2.8]

  40. So, How Do We Evaluate Infamy? • ?

  41. Onion Routing • Conceal sources and destinations for Internet communications • Using crypto-protected multihop packets • A group of nodes agree to be onion routers • Plan is that many users send many packets through the onion routers
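
The core mechanism is layered encryption: the source wraps the message once per router, and each router peels exactly one layer, learning only the next hop. A minimal sketch, assuming the third-party cryptography package and symmetric per-router keys for brevity (real onion routing negotiates keys with public-key cryptography, and the router names are hypothetical):

```python
# Layered ("onion") encryption sketch. Symmetric per-router keys keep it
# short; real onion routing negotiates keys with public-key crypto.
# Requires the third-party `cryptography` package.
from cryptography.fernet import Fernet

PATH = ["routerA", "routerB", "routerC"]        # hypothetical routers
KEYS = {r: Fernet.generate_key() for r in PATH}

def wrap(message, path):
    """Encrypt in reverse path order so each router peels one layer."""
    onion, next_hop = message, b"destination"
    for router in reversed(path):
        onion = Fernet(KEYS[router]).encrypt(next_hop + b"|" + onion)
        next_hop = router.encode()
    return onion

def peel(onion, router):
    """A router strips its layer, learning only the next hop."""
    next_hop, _, inner = Fernet(KEYS[router]).decrypt(onion).partition(b"|")
    return next_hop, inner

onion = wrap(b"hello", PATH)
for router in PATH:
    next_hop, onion = peel(onion, router)
    print(router, "forwards toward", next_hop.decode())
print("payload at destination:", onion)
```

Note that no single router sees both the source and the destination; that is the anonymity property an evaluation would need to probe.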

  42. Onion Routing • [diagram: a source, a destination, and the intervening onion routers]

  43. Delivering the Message • [diagram: the message travels hop by hop through the onion routers to the destination]

  44. So, How Do We Evaluate Onion Routing? • ?
