
On the Evolution of Adversary Models (from the beginning to sensor networks)






Presentation Transcript


  1. On the Evolution of Adversary Models (from the beginning to sensor networks)
     Virgil D. Gligor
     Electrical and Computer Engineering, University of Maryland, College Park, MD 20742
     gligor@umd.edu
     Lisbon, Portugal, July 17-18, 2007

  2. Overview
     1. New technologies often require a new adversary definition
        - otherwise: a continuous state of vulnerability
     2. Why is the new adversary different?
        - examples: sensor networks, mesh networks, MANETs
        - countermeasures
     3. Challenge: find "good enough" security countermeasures
     4. Proposal: Information Assurance Institute

  3. A system without an adversary definition cannot possibly be insecure;
     it can only be astonishing…
     … astonishment is a much underrated security vice.
     (Principle of Least Astonishment)

  4. Why is an adversary definition a fundamental concern?
     1. New Technology -> Vulnerability ~> Adversary <~> Methods & Tools
        - Sharing of user programs & data; the computing utility (early - mid 1960s)
          vulnerability: confidentiality and integrity breaches; system penetration
          adversary: untrusted user-mode programs & subsystems
          methods & tools: user vs. system mode ('62 ->); rings, security kernel ('65, '72);
          access policy models ('71); FHM ('75), theory/tool ('91)*
        - Shared stateful services, e.g., DBMS, network protocols; concurrent,
          dynamic resource allocation (early - mid 1970s)
          vulnerability: DoS instances
          adversary: untrusted user processes; coordinated attacks
          methods & tools: DoS = a different problem ('83 - '85)*; formal spec. &
          verification ('88)*; DoS models ('92 ->)
        - PCs, LANs; public-domain crypto (mid 1970s)
          vulnerability: read, modify, block, replay, forge messages
          adversary: "man in the middle": an active, adaptive network adversary
          methods & tools: informal: NS, DS ('78 - '81); semi-formal: DY ('83);
          Byzantine ('82 ->); crypto attack models ('84 ->); authentication
          protocol analysis ('87 ->)
        - Internetworking (mid - late 1980s)
          vulnerability: large-scale effects: worms, viruses, DDoS (e.g., flooding)
          adversary: geographically distributed, coordinated attacks
          methods & tools: virus scans, tracebacks, intrusion detection (mid '90s ->)
     2. Technology cost -> 0, yet security concerns persist

  5. New Technology > New Vulnerability ~> New Adversary Model <~> New Analysis Method & Tools +O(years) +/- O(months) +O(years) … a perennial challenge (“fighting old wars”) Reuse of Old (Secure) Systems & Protocols New Technology ~> New Vulnerability Old Adversary Model mismatch Continuous State of Vulnerability

  6. New Technology Example: Sensor Networks
     Claim: sensor networks introduce
     - new, unique vulnerabilities: nodes can be captured and replicated
     - a new adversary: different from both the Dolev-Yao and the traditional
       Byzantine adversaries
     - and require new methods and tools: emergent algorithms & properties
       (for imperfect but good-enough security)
     Mesh networks have similar, but not identical, characteristics.

  7. Limited Physical Node Protection: Two Extreme Examples
     High end: IBM 4764 co-processor (~ $9K)
     - tamper resistance, real-time response
     - independent battery, secure clock
     - battery-backed RAM (BBRAM)
     - wrapping: several layers of a non-metallic grid of conductors in a grounded
       shield, to reduce detectable EM emanations
     - tamper-detection sensors (+ battery): temperature, humidity, pressure,
       voltage, clock, ionizing radiation
     - response: erase BBRAM, reset device
     Low end: smart cards (< $15)
     - no tamper resistance
     - non-invasive physical attacks: side channels (timing, DPA); unusual
       operating conditions (temperature, power, clock glitches)
     - invasive physical attacks: chip removal from plastic cover; microprobes,
       electron beams

  8. Limited Physical Node Protection
     Observation: a single on-chip secret key is sufficient to protect (e.g., via
     Authenticated Encryption) many other memory-stored secrets (e.g., node keys).
     Problem: how do we protect that single on-chip secret key?
     Potential solution: Physically Unclonable Functions (PUFs)
     - observation: each IC has unique timing
     - basic PUF: a Challenge extracts a unique, secret Response (i.e., a secret
       key) from the IC's hidden, unique timing sequence
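To make the observation above concrete, here is a minimal sketch of wrapping memory-stored node keys under a single master key with authenticated encryption. It assumes Python with the third-party `cryptography` package; the names and storage layout are illustrative, not from the talk.

```python
# Minimal sketch: one on-chip master key protects many stored node keys
# via authenticated encryption (AES-GCM). Names/layout are illustrative.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

MASTER_KEY = AESGCM.generate_key(bit_length=128)  # stands in for the on-chip secret

def wrap_key(node_id: bytes, node_key: bytes) -> bytes:
    """Encrypt-and-authenticate a node key for off-chip storage."""
    nonce = os.urandom(12)                                      # never reuse per key
    ct = AESGCM(MASTER_KEY).encrypt(nonce, node_key, node_id)   # node_id bound as AAD
    return nonce + ct                                           # store nonce with ciphertext

def unwrap_key(node_id: bytes, blob: bytes) -> bytes:
    """Recover a node key; raises InvalidTag if the blob was tampered with."""
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(MASTER_KEY).decrypt(nonce, ct, node_id)

# Usage: protect a per-node key, then recover it.
blob = wrap_key(b"node-42", os.urandom(16))
assert len(unwrap_key(b"node-42", blob)) == 16
```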

  9. Basic PUF circuit [Jae W. Lee et al., VLSI '04]
     [Figure: a Challenge (e.g., 128 bits, expanded by an LFSR) drives a chain of
     switch stages b0, b1, b2, ..., b61, b62 (feed-forward arbiter), ..., b128;
     each challenge bit bi selects one of two racing signal paths (bi = 0 or
     bi = 1), and arbiters resolve the races into a Response (e.g., 255 bits).]
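The race behavior in the figure can be imitated with a toy model. The sketch below assumes the standard additive (linear) delay model often used to describe arbiter PUFs: per-chip random stage delays, with the arbiter outputting the sign of the accumulated delay difference. Parameters and names are illustrative, not from the Lee et al. paper.

```python
# Toy arbiter-PUF simulation under an additive delay model: each stage i
# contributes a per-chip random delay-difference weight whose effect is
# kept or swapped by challenge bit c[i]; the arbiter outputs the sign of
# the total difference. Illustrative only.
import random

def make_chip(n_stages: int = 128, seed: int = 0) -> list[float]:
    """Per-chip manufacturing variation: one delay-difference weight per stage."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n_stages)]

def puf_response_bit(chip: list[float], challenge: list[int]) -> int:
    """Arbiter decision: sign of the accumulated path-delay difference."""
    delta, sign = 0.0, 1
    for weight, c in zip(chip, challenge):
        sign *= -1 if c else 1   # a crossed switch swaps the two racing paths
        delta += sign * weight
    return 1 if delta > 0 else 0

# Two chips answer the same challenge differently (unclonability intuition):
rng = random.Random(42)
challenge = [rng.randint(0, 1) for _ in range(128)]
print(puf_response_bit(make_chip(seed=1), challenge),
      puf_response_bit(make_chip(seed=2), challenge))
```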

  10. The basic PUF [Jae W. Lee et al., VLSI '04] counters:
      - brute-force attacks (2^128 challenge-response pairs => impractical)
      - duplication (different timing => different secret Response)
      - invasive attacks (timing modification => different secret Response)
      However:
      Pr. 1: an adversary can build a timing model of the Arbiter's output
             => can build a clone for secret-key generation
      Pr. 2: the Arbiter's output (i.e., secret-key generation) is unreliable
             Reality: intra-chip timing variation (e.g., temperature, pressure,
             voltage) => errors in the Arbiter's output (e.g., max. error: 4 - 9%)

  11. Suggested PUF circuit [Ed Suh et al., ISCA '05]
      Solution to Pr. 1: hash the Arbiter's output to produce the new Response
      - the Arbiter's output cannot be discovered from known Challenges and new
        Responses
      Solution to Pr. 2: add Error-Correcting Codes (ECCs) on the Arbiter's output
      - e.g., use BCH(n, k, d): n (timing bits) = k (secret bits) + b (syndrome
        bits), correcting up to (d-1)/2 errors
      - BCH(255, 63, 61) => up to 30 errors (> 10% of n > max. observed error
        rate) in the Arbiter's output are corrected
      - more than 30 errors? (probability is 2.4 x 10^-6)
        the probability of an incorrect output is smaller but not zero:
        hash the Arbiter's output and verify against the stored Hash(Response)
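The BCH parameters above work out as follows (a worked check of the slide's numbers, not additional material from the talk):

```latex
% BCH(n, k, d) with n = 255, k = 63, d = 61:
\begin{align*}
  b &= n - k = 255 - 63 = 192 \quad \text{(syndrome bits)} \\
  t &= \frac{d-1}{2} = \frac{61-1}{2} = 30 \quad \text{(correctable errors)}, \\
  \frac{30}{255} &\approx 11.8\% > 10\% \text{ of } n > \text{the 4--9\% max. error rate}
\end{align*}
```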

  12. However, the syndrome reveals some (e.g., b = 192) bits of the Arbiter's
      output (n = 255).
      [Figure: suggested PUF circuit. A known Challenge (e.g., 128 bits, via an
      LFSR) drives the switch chain b0 ... b128 with a feed-forward arbiter; the
      255-bit Arbiter output feeds both a BCH encoder (yielding a known Syndrome,
      e.g., 192 bits) and a Hash that yields the secret Response.]
      - generate response: C -> R, S;  retrieve response: C, S -> R
      (Off-line) Verifiable-Plaintext Attack: get C, S, Hash(R); guess the
      remaining (e.g., 63) bits of the Arbiter's output; verify the new R; repeat
      verifiable guesses until the Arbiter's output is known; discover the secret key.
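A toy rendering of that off-line attack loop, with deliberately tiny parameters (the slide's real case leaves 63 unknown bits, a 2^63 search); the function names and bit layout are illustrative:

```python
# Toy sketch of the off-line verifiable-plaintext attack: the syndrome fixes
# b bits of the arbiter output, so the attacker enumerates the remaining
# k bits and checks each guess against the published Hash(Response).
import hashlib
from itertools import product

K_BITS = 16                              # unknown bits (63 on the slide)
SYNDROME = b"\xa5" * 24                  # stands in for the known syndrome bits

def response_hash(syndrome: bytes, secret_bits: tuple) -> bytes:
    """Hash of the reconstructed arbiter output (syndrome + guessed bits)."""
    return hashlib.sha256(syndrome + bytes(secret_bits)).digest()

# What the device exposes: C, S, Hash(R). Only the hash matters here.
true_secret = tuple(int(b) for b in bin(0xBEEF)[2:].zfill(K_BITS))
target = response_hash(SYNDROME, true_secret)

# Off-line search: every guess is verifiable against the known hash.
for guess in product((0, 1), repeat=K_BITS):
    if response_hash(SYNDROME, guess) == target:
        print("recovered secret bits:", "".join(map(str, guess)))
        break
```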

  13. Some Characteristics of Sensor Networks
      1. Ease of network deployment and extension
         - scalability => simply drop sensors at desired locations
         - key connectivity via key pre-distribution (see the sketch after this
           list) => neither administrative intervention nor TTP interaction
      2. Low-cost, commodity hardware
         - low cost => physical node shielding is impractical
           => ease of access to internal node state
           (Q: how good would physical node shielding have to be to prevent
           access to a sensor's internal state?)
      3. Unattended node operation in hostile areas
         => adversary can capture and replicate nodes (and node states)
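A minimal sketch of the key pre-distribution idea referenced in item 1: random key rings drawn from a common pool, in the style of random key pre-distribution schemes. The pool and ring sizes below are illustrative, not from the talk.

```python
# Minimal sketch of random key pre-distribution: each node is loaded with a
# random "key ring" drawn from a large pool before deployment; two neighbors
# can communicate iff their rings intersect, with no TTP or administrator
# involved after deployment.
import random

POOL_SIZE, RING_SIZE = 10_000, 75        # illustrative parameters

def predistribute(n_nodes: int, rng: random.Random) -> list:
    """Assign each node a random ring of key IDs from the shared pool."""
    return [set(rng.sample(range(POOL_SIZE), RING_SIZE)) for _ in range(n_nodes)]

def shared_key(ring_a: set, ring_b: set):
    """Key ID two neighbors agree on, or None (then a path key is needed)."""
    common = ring_a & ring_b
    return min(common) if common else None

# Estimate the probability that two random neighbors share a key directly.
rng = random.Random(7)
rings = predistribute(2_000, rng)
hits = sum(shared_key(rings[2*i], rings[2*i + 1]) is not None for i in range(1_000))
print(f"direct key-sharing probability ~= {hits / 1_000:.2f}")
```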

  14. Replicated-Node Insertion: How Easy?
      [Figure: a captured node in neighborhood i connects to other nodes via
      (1) a path key, (2) a shared key inside its neighborhood, and (3) shared
      keys outside its neighborhood, reaching into neighborhoods j and k.]

  15. Attack Coordination among Replicas: How Easy?
      [Figure: the captured node in neighborhood i and its replicas (Node
      Replica 1, Node Replica 2) collude across neighborhoods i, j, and k over
      the same key types (1)-(3) as in the previous figure.]
      Note: replica IDs are cryptographically bound to pre-distributed keys and
      cannot be changed.

  16. New vs. Old Adversary
      The old (Dolev-Yao) adversary can:
      - control network operation
      - act as a man in the middle: read, replay, forge, block, modify, insert
        messages anywhere in the network
      - send/receive any message to/from any legitimate principal (e.g., node)
      - act as a legitimate principal of the network
      The old (Dolev-Yao) adversary cannot:
      1) adaptively capture legitimate principals' nodes and discover a
         legitimate principal's secrets
      2) adaptively modify the network and trust topology (e.g., by node replication)
      Old Byzantine adversaries:
      - can do 1) but not 2)
      - consensus problems impose fixed thresholds for captured nodes
        (e.g., t < n/2, t < n/3) and a fixed number of nodes, n

  17. Countermeasures for Handling the New Adversary?
      - Detection and recovery
        - Ex.: detection of node-replica attacks
        - Cost? Traditional vs. emergent protocols
        - Advantage: always possible; good-enough detection
        - Disadvantage: damage is possible before detection
      - Avoidance: early detection of the adversary's presence
        - Ex.: periodic monitoring
        - Cost vs. timely detection? False negatives/positives?
        - Advantage: avoids damage done by the new adversary
        - Disadvantage: not always practical in MANETs, sensor, and mesh networks
      - Prevention: survive attacks by "privileged insiders"
        - Ex.: subsystems that survive administrators' attacks (e.g., authentication)
        - Cost vs. design credibility? Manifest correctness
        - Advantage: prevents damage; Disadvantage: very limited use

  18. Example of Detection and Recovery (IEEE S&P, May 2005)
      - naïve: each node broadcasts <ID, "locator", signature>
        - perfect replica detection: ID collisions with different locators
        - complexity: O(n^2) messages
      - realistic: each node broadcasts <ID, "locator", signature> locally
        (see the sketch after this list)
        - local neighbors further broadcast to g << n random witnesses
        - good-enough replica detection: ID collision with different locators
          at a witness
        - detection probability: 70 - 80% is good enough
        - complexity: O(n x sqrt(n)) messages
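A small sketch estimating detection probability for a witness-based scheme of this flavor: each of two conflicting location claims for the same ID reaches g random witnesses, and a collision at any common witness means detection. The parameters are illustrative, not the paper's exact protocol.

```python
# Toy estimate of replica-detection probability in a randomized witness
# scheme: two conflicting <ID, locator> claims each reach g randomly chosen
# witnesses out of n nodes; detection occurs iff some witness gets both.
import random

def detection_probability(n: int, g: int, trials: int = 10_000) -> float:
    rng = random.Random(1)
    detected = 0
    for _ in range(trials):
        w1 = set(rng.sample(range(n), g))  # witnesses for the original's claim
        w2 = set(rng.sample(range(n), g))  # witnesses for the replica's claim
        detected += bool(w1 & w2)          # any common witness sees the conflict
    return detected / trials

# With g on the order of sqrt(n), collision probability is substantial
# (a birthday-style argument) at O(n * sqrt(n)) total messages network-wide.
for n, g in [(1_000, 32), (10_000, 100)]:
    print(f"n={n:>6} g={g:>4} detection ~= {detection_probability(n, g):.2f}")
```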

  19. A New Application: Distributed Sensing

  20. A New Application: Distributed Sensing
      Application: a set of m sensors observe and signal an event
      - each sensor broadcasts "1" whenever it senses the event; else, it does nothing
      - if >= t of the m broadcasts are counted, all m sensors signal the event
        to their neighbors; else they do nothing
      Operational constraints:
      - the absence of an event cannot be sensed (e.g., no periodic "0" broadcasts)
      - broadcasts are reliable and synchronous (i.e., counted in sessions)
      Adversary goals:
      - violate integrity (i.e., issue t <= m/2 false broadcasts)
      - deny service (i.e., t > m/2: suppress m - t + 1 broadcasts)
      New (distributed-sensing) adversary:
      - captures nodes; forges, replays, or suppresses (jams) broadcasts
        (within the same session or across different sessions)
      - increases the broadcast count with outsiders' false broadcasts
      (A minimal sketch of the threshold rule follows this slide.)
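A minimal sketch of the session-based threshold rule described above. The session abstraction and names are illustrative; this only encodes the counting logic, not any defense.

```python
# Minimal sketch of the t-out-of-m distributed-sensing rule: within one
# synchronous session, each sensor that observed the event broadcasts "1"
# (absence is never broadcast); every sensor signals the event iff at
# least t distinct broadcasts were counted.

def session_outcome(broadcasters: set, m: int, t: int) -> bool:
    """True iff the m-sensor group signals the event this session."""
    assert t <= m, "threshold cannot exceed group size"
    return len(broadcasters) >= t   # count distinct "1" broadcasts only

# m = 6 sensors, threshold t = 4 (the revocation example on the next slide):
print(session_outcome({"s1", "s2", "s3", "s5"}, m=6, t=4))   # True: event signaled
print(session_outcome({"s1", "s2"}, m=6, t=4))               # False: below threshold
# Integrity attack: t forged broadcasts from captured/outsider nodes suffice.
print(session_outcome({"x1", "x2", "x3", "x4"}, m=6, t=4))   # True: false event
```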

  21. An Example: Distributed Revocation Decision [IEEE TDSC, Sept. 2005]
      m = 6, t = 4 votes in a session => revoke the target
      [Figure: a revocation target inside a keying neighborhood, surrounded by a
      larger communication neighborhood of nodes (numbered 1 - 14); voting nodes
      propagate the revocation decision outward.]

  22. New vs. Old Adversary
      A (reactive) Byzantine agreement problem?
      - both a global event and its absence are broadcast ("1"/"0") by each node
      - strong constraint on t; i.e., no PKI => t > 2m/3; PKI => t > m/2
      - fixed, known group membership
      No. The new (distributed-sensing) adversary =/= the old (Byzantine) adversary:
      - the new adversary need not forge, initiate, or replay "0" broadcasts
      - the new adversary's strength depends on a weaker t (e.g., t < m/2)
      - the new adversary may modify membership to increase the broadcast count (> t)

  23. Conclusions
      - New technologies => new adversary definitions
        - avoid "fighting the last war"
        - security is a fundamental concern of IT
      - No single method of countering new and powerful adversaries:
        - detection
        - avoidance (current focus)
        - prevention (future)
      - How effective are the countermeasures?
        - they provide "good enough" security; e.g., probabilistic security properties
