P2P Systems for Worm Detection

Presentation Transcript


  1. P2P Systems for Worm Detection – Joel Sandin, Stanford University

  2. Worm Detection
  • We want to build a distributed system that reliably and quickly detects worm attacks
  • Both reliability and speed are required for active response
  • We already have many ideas about what to monitor – network telescopes, anomalous host behavior – and how to detect and respond
  • How do we prevent an attacker from subverting the detection system itself?

  3. Intrusion Tolerance
  • Many good solutions exist, but few are deployed
  • Engineer a system that tolerates malicious participants
  • By allowing for malicious participants we can avoid “formal collaboration” and need less new technology
  • How do we collect “good data” for analytical techniques in this setting?
  • What are the limits of those analytical techniques?

  4. Outline
  • Discuss some attacks that the detector must withstand
  • Ways to improve the quality of collected data
  • Honeypots to greatly reduce the threat of false positives
  • P2P as a foundation for our detector

  5. Attack 1: False Positives
  • The system must survive the worm and related attacks
  • For new worms we can’t rely on signatures, so the system is exposed to false positives
  • Two dangers arise with an attacker who knows about the sensors in our system:
  • The attacker can scan sensors to generate alerts
  • The attacker can develop a worm that avoids them

  6. Attack 2: Infrastructure
  • A worm might disable sensors
  • A worm might target the sensors themselves for infection
  • In a distributed deployment, attackers may control some of the networks containing sensors
  • An attacker can passively watch the network to learn sensor locations

  7. What are our sensors?
  • When a worm is active, a sensor should know about it
  • Large telescope: 2^20 unused IP addresses
  • Low cost: 2^8 unused IP addresses – combine 2^12 of them and you have an equivalent detector
  • The “worst” case? A single unused IP address (a minimal sensor sketch follows this slide)
  • Advantages of smaller sensors – the incentive to deploy outweighs the cost, and they are well hidden
  • Disadvantage: some may be faulty
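
A minimal sketch of what such a telescope-style sensor could look like, assuming we can enumerate the unused (“dark”) addresses we monitor and receive a feed of packet headers from a tap; the class name, window length, and threshold are illustrative assumptions, not part of the original design:

```python
# Sketch only: a small network-telescope sensor that raises an alert when
# traffic to unused ("dark") addresses exceeds a threshold in a sliding window.
import time
from collections import deque

class DarkSensor:
    def __init__(self, dark_addrs, window_s=60, threshold=50):
        self.dark_addrs = set(dark_addrs)   # unused IPs this sensor watches
        self.window_s = window_s            # sliding window length in seconds
        self.threshold = threshold          # dark hits per window that trigger an alert
        self.hits = deque()                 # (timestamp, src_ip) of observed dark hits

    def observe(self, ts, src_ip, dst_ip):
        """Feed one packet header; return an alert dict or None."""
        if dst_ip not in self.dark_addrs:
            return None                     # traffic to a live host, ignore
        self.hits.append((ts, src_ip))
        while self.hits and self.hits[0][0] < ts - self.window_s:
            self.hits.popleft()             # expire hits outside the window
        if len(self.hits) >= self.threshold:
            return {"time": ts,
                    "hit_count": len(self.hits),
                    "distinct_sources": len({s for _, s in self.hits})}
        return None

# Even a single unused address can be wired up the same way.
sensor = DarkSensor(dark_addrs=["10.0.5.200"], window_s=60, threshold=5)
alert = sensor.observe(time.time(), "192.0.2.7", "10.0.5.200")   # None until 5 hits accumulate
```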

  8. Consistency Checks to Improve Robustness
  • In our sensor network, faulty sensors might report events that never happened
  • We don’t just have to take a sensor’s claims at face value – we can try to verify them
  • Verification should be done by other sensors (or by an aggregator); a quorum-style sketch follows this slide
  • The more sensors involved in verification, the more fault tolerant the system becomes
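
A minimal sketch of quorum-style verification by an aggregator, under the assumption that sensors report claims as (sensor_id, event_key) pairs; the Aggregator name and quorum size are illustrative, not a prescribed design:

```python
# Sketch only: an aggregator that accepts an event only once a quorum of
# independent sensors has reported it.
from collections import defaultdict

class Aggregator:
    def __init__(self, quorum=3):
        self.quorum = quorum                 # independent corroborations required
        self.claims = defaultdict(set)       # event_key -> set of reporting sensor ids

    def report(self, sensor_id, event_key):
        """Record one sensor's claim; return True once a quorum agrees on the event."""
        self.claims[event_key].add(sensor_id)
        return len(self.claims[event_key]) >= self.quorum

agg = Aggregator(quorum=3)
for sid in ["sensor-a", "sensor-b", "sensor-c"]:
    confirmed = agg.report(sid, ("scan", "tcp/445"))
# confirmed becomes True only when the third independent sensor reports the event
```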

  9. Option 1: Telescope Consistency
  • In the “trusted” case, anomalous host behavior (e.g. ICMP unreachable messages) and network telescope activity are treated as interchangeable
  • In the untrusted setting, we can monitor both
  • For each sensor’s claim, compute the expected reaction and “check” claims from potentially malicious sensors (see the sketch after this slide)
  • Check a sensor’s claims against activity in the rest of the sensor network (space)
  • Check a sensor’s claims against events in subsequent time quanta (time)
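
A minimal sketch of the space/time consistency check, assuming a worm that scans uniformly at random, so a claimed scan rate predicts how many hits the rest of the telescope space should see in the next time quantum; the tolerance and function names are illustrative assumptions:

```python
# Sketch only: predict how many hits the rest of the telescope space should
# see for a claimed scan rate, then compare against what was actually observed.

def expected_hits(claimed_scan_rate, quantum_s, monitored_fraction):
    """Hits other sensors should see, assuming uniform random scanning."""
    return claimed_scan_rate * quantum_s * monitored_fraction

def claim_is_consistent(claimed_scan_rate, observed_hits, quantum_s,
                        monitored_fraction, tolerance=0.5):
    """Accept the claim if observations fall within +/- tolerance of the prediction."""
    expected = expected_hits(claimed_scan_rate, quantum_s, monitored_fraction)
    if expected == 0:
        return observed_hits == 0
    return abs(observed_hits - expected) <= tolerance * expected

# A sensor claims 1000 scans/s; if the other sensors jointly watch 2^-12 of the
# address space, they should see roughly 2-3 hits over a 10-second quantum.
ok = claim_is_consistent(claimed_scan_rate=1000, observed_hits=3,
                         quantum_s=10, monitored_fraction=2**-12)
```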

  10. Consistency Continued
  • This increases the number of sensors involved in each round of introspection
  • Use claims to make predictions about the future
  • Can exclude sensors from subsequent rounds to verify their claims
  • Weigh our trust in each sensor’s claims (a simple trust-weighting sketch follows)
  • This might not be enough – false positives remain possible
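
A minimal sketch of per-sensor trust weighting, assuming each round tells us whether a sensor’s earlier claim was confirmed by the consistency checks; the reward/penalty update rule is an illustrative assumption rather than the talk’s specific scheme:

```python
# Sketch only: a simple per-sensor trust score that rises slowly when a
# sensor's claims are confirmed and drops quickly when they are not.

class TrustTracker:
    def __init__(self, initial=0.5, reward=0.1, penalty=0.3):
        self.initial, self.reward, self.penalty = initial, reward, penalty
        self.trust = {}                      # sensor_id -> trust score in [0, 1]

    def update(self, sensor_id, claim_confirmed):
        """Adjust a sensor's trust based on whether its claim checked out."""
        t = self.trust.get(sensor_id, self.initial)
        t = t + self.reward if claim_confirmed else t - self.penalty
        self.trust[sensor_id] = min(1.0, max(0.0, t))
        return self.trust[sensor_id]

    def weighted_support(self, supporting_sensors):
        """Total trust behind a claim, used to weigh it in later rounds."""
        return sum(self.trust.get(s, self.initial) for s in supporting_sensors)

tracker = TrustTracker()
tracker.update("sensor-a", claim_confirmed=True)    # trust rises slowly
tracker.update("sensor-b", claim_confirmed=False)   # trust drops faster
```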

  11. Option 2: Better Sensors
  • Increase the cost of generating a false positive
  • We know how to do this for new worms: honeypots
  • A honeypot can run an exposed service, and a sensor can monitor the honeypot (sketched below)
  • Outgoing network activity from the honeypot can indicate infection
  • For services thought to be secure, generating a false positive is expensive – it requires an actual compromise
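
A minimal sketch of a honeypot monitor, assuming the honeypot should never initiate outbound connections on its own, so any unexpected outgoing flow is treated as evidence of infection; the names and flow-tuple format are illustrative assumptions:

```python
# Sketch only: a monitor that treats any unexpected outbound connection from
# the honeypot as evidence that the exposed service has been compromised.

class HoneypotMonitor:
    def __init__(self, honeypot_ip, allowed_outbound=()):
        self.honeypot_ip = honeypot_ip
        self.allowed_outbound = set(allowed_outbound)   # e.g. DNS or NTP, if needed

    def observe_flow(self, src_ip, dst_ip, dst_port):
        """Return an infection alert if the honeypot initiates unexpected traffic."""
        if src_ip != self.honeypot_ip:
            return None                                 # inbound or unrelated traffic
        if (dst_ip, dst_port) in self.allowed_outbound:
            return None
        return {"infected_honeypot": src_ip, "contacted": dst_ip, "port": dst_port}

monitor = HoneypotMonitor("10.0.9.1")
alert = monitor.observe_flow("10.0.9.1", "198.51.100.23", 445)   # outbound scan -> alert
```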

  12. Honeypot Consistency
  • An honest sensor can verify another honeypot’s claim of infection (see the sketch after this slide)
  • The verifying sensor reconfigures itself to accept tunneled traffic from the allegedly infected honeypot
  • If the verifying sensor observes the infection itself, it raises an alert
  • Even with a high number of malicious nodes, only a few infections in our system are needed to catch the worm
  • Once false positives become hard to generate, the attacker will instead work to avoid the detector altogether
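
A minimal sketch of the verification step, assuming the allegedly infected honeypot can tunnel the suspect traffic to the verifying sensor, which replays it against its own honeypot and watches for outbound activity; the class and method names, and the tunnel abstraction, are illustrative assumptions rather than a defined protocol:

```python
# Sketch only: a verifying sensor that accepts tunneled traffic from a
# claimed-infected honeypot, replays it at its own honeypot, and confirms
# the claim only if its own honeypot then initiates outbound traffic.

class VerifyingSensor:
    def __init__(self, local_honeypot_ip):
        self.local_honeypot_ip = local_honeypot_ip
        self.claimant = None                 # honeypot whose infection claim we are checking

    def start_check(self, claimant_id):
        """Reconfigure to accept tunneled traffic from the claimed-infected honeypot."""
        self.claimant = claimant_id

    def deliver_tunneled(self, claimant_id, payload):
        """Feed tunneled attack traffic to the exposed service on our own honeypot."""
        if claimant_id == self.claimant:
            pass                             # hand payload to the local service (omitted)

    def on_local_flow(self, src_ip, dst_ip, dst_port):
        """If our own honeypot now initiates outbound traffic, confirm the claim."""
        if src_ip == self.local_honeypot_ip and self.claimant is not None:
            return {"confirmed_infection": self.claimant,
                    "local_evidence": (dst_ip, dst_port)}
        return None
```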

  13. A Fault Tolerant Infrastructure
  • Use honeypot consistency checks for high fault tolerance
  • Honeypots are expensive, so deploy cheaper telescope sensors to snag interesting traffic, track trends, and configure/shield the honeypots (a two-tier sketch follows)
  • The system’s high fault tolerance will make it easier to build
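
A minimal sketch of the two-tier idea, assuming telescope alerts carry a source address and targeted port, and that a port is escalated to a honeypot once enough distinct sources probe it; the escalation rule and the configure_honeypot callback are illustrative assumptions:

```python
# Sketch only: escalate from cheap telescope sensors to an expensive honeypot
# once enough distinct sources probe the same port.

def escalate(telescope_alerts, configure_honeypot, min_sources=10):
    """Expose a honeypot service on any port probed by at least min_sources hosts."""
    sources_by_port = {}
    for alert in telescope_alerts:                    # each alert: {"src": ..., "dst_port": ...}
        sources_by_port.setdefault(alert["dst_port"], set()).add(alert["src"])
    for port, sources in sources_by_port.items():
        if len(sources) >= min_sources:
            configure_honeypot(port)                  # shield/configure a honeypot for this service

escalate([{"src": f"203.0.113.{i}", "dst_port": 445} for i in range(12)],
         configure_honeypot=lambda port: print(f"exposing honeypot on port {port}"))
```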

  14. Foundation for a System – P2P
  • We want to connect distributed sensors (IDS, NIDS, honeypots); the security problems suggest P2P
  • Chord (and others) provide a self-organizing structure and efficient routing in the face of faults; nodes need only a local picture (a consistent-hashing sketch follows this slide)
  • Work has been done on Byzantine fault tolerance
  • Fault-tolerant aggregation algorithms can synthesize data from sensors reliably and efficiently (mentioned in Zou et al.)
  • The effort of deployment is low – but we need more
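
A minimal sketch of placing sensors on a Chord-style identifier ring with consistent hashing; for brevity it uses a global view of the ring, whereas real Chord nodes keep only finger tables and a local picture, and all names here are illustrative assumptions:

```python
# Sketch only: sensors hashed onto a Chord-style identifier ring; the sensor
# whose ID is the first clockwise of a key's ID is responsible for that key
# (e.g. as the aggregation point for an alert).
import hashlib
from bisect import bisect_right

RING_BITS = 32

def ring_id(name):
    """Map a sensor name or event key to a point on the identifier ring."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest[:4], "big") % (2 ** RING_BITS)

class SensorRing:
    def __init__(self, sensor_names):
        self.ring = sorted((ring_id(n), n) for n in sensor_names)

    def successor(self, key):
        """Find the sensor responsible for a key: first node clockwise of its ID."""
        ids = [i for i, _ in self.ring]
        idx = bisect_right(ids, ring_id(key)) % len(self.ring)
        return self.ring[idx][1]

ring = SensorRing(["sensor-a", "sensor-b", "sensor-c", "sensor-d"])
responsible = ring.successor("alert:tcp/445")   # aggregation point for this alert key
```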

  15. Summary: Research Problems
  • Getting better data for anomaly detection when some sensors are malicious
  • The role of honeypots
  • Building a resilient P2P substrate that gives us the security primitives we need
  • At the same time, limiting nodes’ global knowledge
