
Qual Exam

Presentation Transcript


  1. Bill Donkervoet 22 February, 2005 Qual Exam

  2. Sencun Zhu, Sanjeev Setia, Sushil Jajodia, and Peng Ning An Interleaved Hop-by-Hop Authentication Scheme for Filtering of Injected False Data in Sensor Networks

  3. IHop Problem • Sensor networks consist of many stationary nodes. • Nodes are scattered and resource-limited. • Compromised nodes can inject false data into network leading to: • Improper decision due to false information. • Wasted power due to extraneous messages.

  4. IHop Goals • Deliver secure, authenticated messages in a sensor network environment. • Detect and drop injected false data as soon as possible to conserve power. • Do all of this as efficiently as possible because of typical constrained resources of sensor networks.

  5. IHop Environment • Sensor network consisting of: • Uncompromised base station (BS). • Cluster of sensor nodes. • Cluster head (CH) that aggregates data from sensor nodes. • Communication nodes link cluster head and base station. • All nodes are vulnerable to compromise.

  6. IHop Details • System parameter t: • System secure as long as <t nodes are compromised. • False data can propagate at most t² hops. • BS accepts a report if t+1 nodes have verified it. • System uses one well-defined path that can be adjusted if necessary.

  7. Node Association • Each communication node is associated with other nodes t+1 hops away. • Each cluster node is also associated with a communication node. • In the example shown on the slide, t = 3.

  8. Initialization • All nodes have unique IDs and key with BS. • Each node establishes a pairwise-key with each neighbor. • Association discovery: • BS sends HELLO message down all nodes and back up. • Nodes discover their upper and lower associated nodes. • Associated nodes establish pairwise keys.
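
To make the association rule concrete, below is a minimal Python sketch of association discovery along a fixed path (the node IDs, list ordering, and helper name are illustrative; in the actual scheme this information is carried by the HELLO message travelling from the BS to the CH and back).

```python
def discover_associations(path, t):
    """path: node IDs ordered downstream, BS first, CH last.
    Returns {node: (upper, lower)}, where the upper/lower associated node
    lies t+1 hops toward the BS / toward the CH (None if no such node)."""
    assoc = {}
    for i, node in enumerate(path):
        upper = path[i - (t + 1)] if i - (t + 1) >= 0 else None
        lower = path[i + (t + 1)] if i + (t + 1) < len(path) else None
        assoc[node] = (upper, lower)
    return assoc

# Example with t = 3: u4 sits four hops below the BS and four hops above the CH,
# so its upper associated node is the BS and its lower associated node is the CH.
path = ["BS", "u7", "u6", "u5", "u4", "u3", "u2", "u1", "CH"]
print(discover_associations(path, 3)["u4"])   # ('BS', 'CH')
```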

  9. Message Authentication Code • Each node creates up to five MACs: • Individual MAC shared with BS. • Pairwise MACs shared with each associated node. • Pairwise MACs shared with each neighbor node. • K_{u,v} is the pairwise key shared by u and v. • XMAC(E) is the XOR of multiple MACs of event E.
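
As a rough illustration of XMAC, the sketch below uses HMAC-SHA1 as a stand-in MAC primitive and hypothetical keys; the paper's actual MAC algorithm and key-establishment details are not specified on this slide.

```python
import hmac, hashlib
from functools import reduce

def mac(key: bytes, event: bytes) -> bytes:
    """Stand-in MAC primitive (the scheme only requires some keyed MAC)."""
    return hmac.new(key, event, hashlib.sha1).digest()

def xmac(macs):
    """XMAC(E): bytewise XOR of the individual MACs computed over event E."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), macs)

# Hypothetical individual keys that v1..v3 and the CH each share with the BS.
event = b"E||timestamp"
bs_keys = {"v1": b"K_v1_BS", "v2": b"K_v2_BS", "v3": b"K_v3_BS", "CH": b"K_CH_BS"}
compressed = xmac([mac(k, event) for k in bs_keys.values()])
```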

  10. Message Creation • Each cluster node generates individual MAC of verified event. • Each cluster node also generates pairwise MAC with upper associated node. • CH aggregates messages from cluster nodes: • XORs individual MACs. • Appends pairwise associated MACs. • Message is then sent upstream towards BS.
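
Continuing the mac()/xmac() helpers above, here is an illustrative sketch of report assembly at the CH; the dictionary layout and node names are mine, not the paper's packet format.

```python
def build_report(event, cluster_id, cluster_nodes, bs_keys, assoc_keys):
    """cluster_nodes: e.g. ["v1", "v2", "v3", "CH"].
    bs_keys[v]: key v shares with the BS; assoc_keys[v]: key v shares with
    its upper associated communication node."""
    individual = [mac(bs_keys[v], event) for v in cluster_nodes]
    # Pairwise MACs ordered so that the MAC checked by the first upstream
    # communication node (v1's, here) ends up last in the list.
    pairwise = [mac(assoc_keys[v], event) for v in reversed(cluster_nodes)]
    return {
        "E": event,
        "Ci": cluster_id,
        "nodes": list(cluster_nodes),
        "XMAC": xmac(individual),
        "pairwise_macs": pairwise,
    }
```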

  11. Generated Report • E, C_i, {v1, v2, v3, CH}, XMAC(E), {MAC(K_{CH,u4}, E), MAC(K_{v3,u3}, E), MAC(K_{v2,u2}, E), MAC(K_{v1,u1}, E)} • E is the event being reported, with timestamp. • C_i is the cluster ID (assuming multiple clusters per BS).

  12. Report Verification • Each communication node: • Checks the number of pairwise MACs attached to the report. • Removes the last pairwise MAC and verifies it using the key shared with its lower associated node. • Attaches its own pairwise MAC with its upper associated node at the beginning of the MAC list. • The resulting report is forwarded upstream toward the BS. • If any step fails, the report is dropped.
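
A sketch of this per-hop check, continuing the mac() helper and the dictionary report format used above (returning None stands in for dropping the report; both are illustrative conventions).

```python
import hmac

def process_report(report, key_lower, key_upper, t):
    """key_lower/key_upper: pairwise keys this node shares with its lower
    and upper associated nodes. Returns the updated report to forward
    upstream, or None if any check fails."""
    macs = report["pairwise_macs"]
    if len(macs) != t + 1:                                   # count check
        return None
    last = macs.pop()                                         # detach last pairwise MAC
    if not hmac.compare_digest(last, mac(key_lower, report["E"])):
        return None                                           # lower associated node mismatch
    macs.insert(0, mac(key_upper, report["E"]))               # prepend MAC for upper assoc. node
    return report                                             # caller forwards toward the BS
```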

  13. BS Verification • Upon receiving the report, the BS: • Computes MACs over E for verifying nodes in list. • XORs resulting MACs and checks against XMAC. • If the MACs do not match, the report is ignored as false. • If the MACs do match, the BS knows that t+1 nodes have verified the report and it is accepted.
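
A sketch of the final check at the BS, again reusing mac()/xmac(); the key-table layout is illustrative.

```python
import hmac

def bs_verify(report, bs_keys):
    """bs_keys: {node_id: key that node shares with the BS}."""
    recomputed = xmac([mac(bs_keys[v], report["E"]) for v in report["nodes"]])
    # A match means t+1 distinct nodes computed MACs over the same event E.
    return hmac.compare_digest(recomputed, report["XMAC"])
```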

  14. Path Repair • IHop depends on an established path for association authentication. • This requires repair of associations upon loss or known compromise of a node. • However, this conflicts with the dynamic node membership and routing behavior typical of sensor networks.

  15. IHop Security • Secure from outside attacks because of hop-by-hop authentication. • Cluster insider attacks: • t compromised cluster nodes generate false event. • 1 uncompromised node u generates true event. • All t+1 MACs must be included to be forwarded. • When u's MAC is checked by upper associated node, it will not match event E and message is dropped.

  16. More Insider Attacks • Compromised communication nodes: • Can forward anything to next upstream node. • Can corrupt reporting or association process. • Typical case: • Compromised node knows t secret keys. • Generates report with those keys. • False report will be dropped after t hops.

  17. Worst Case Insider Attack • Compromised nodes separated by t uncompromised nodes. • During the association phase, the compromised nodes trick the last t nodes into associating with compromised IDs. • Thus, a false message can propagate for t hops, t times over, for a total of t² uncompromised hops.

  18. IHop Modification • Rather than check for only lower associated node, a node can check for lower associated set. • This allows false reports to be rejected immediately. • A node must, however, establish pairwise keys with each node in lower associated set.

  19. IHop Results • IHop requires computing two more MACs than regular authenticated hop-by-hop communication. • Additionally, t+1 pairwise MACs are appended to each message for transmission.

  20. IHop Comments • Even if a false message is received by the BS, it will be rejected unless t+1 nodes have verified it. • Thus, IHop only saves the power of nodes by dropping the message early. • All cluster nodes are assumed to be within broadcast range of the CH, limiting network layout. • What about the packet size limit of TinyOS?

  21. More IHop Comments • IHop only authenticates upstream communication, downstream must use another method such as µTESLA. • Location-based authentication techniques - GPSR

  22. A Scalable Location Service for Geographic Ad Hoc Routing Jinyang Li, John Jannotti, Douglas De Couto, David Karger, Robert Morris

  23. GLS Environment • Wireless nodes roaming around. • Aware of their own location and current speed (e.g., via GPS). • Each node has a unique ID number. • Messages sent from source to destination: • Source looks up location of destination with GLS. • Source sends message hop-by-hop using closest-neighbor routing. • GLS operates in a well-defined grid environment.

  24. Grid Location System Goals • Location service should be well distributed to eliminate bottlenecks. • Unreliable nodes should still produce reliable lookup. • Queries for local hosts should require only local communication. • The location service should scale well to large numbers of nodes.

  25. GLS Grid • Area partitioned into a grid: • four order-n squares compose one order-(n+1) square. • all order-n squares have their lower-left corner at (a·2^(n-1), b·2^(n-1)) for integers a, b.
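
A small sketch of finding the order-n square that contains a point, assuming unit-sized order-1 squares so that an order-n square has side 2^(n-1); the coordinates and units are illustrative.

```python
def order_n_square(x, y, n):
    """Lower-left corner and side length of the order-n square containing (x, y),
    given that corners lie at multiples of the side 2**(n-1)."""
    side = 2 ** (n - 1)
    return (x // side * side, y // side * side), side

# Example: order-3 squares have side 4, so the point (5, 9) falls in the
# order-3 square whose lower-left corner is (4, 8).
print(order_n_square(5, 9, 3))   # ((4, 8), 4)
```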

  26. GLS Location Information • All nodes within an order-1 square know of each other through periodic HELLO messages. • Node A's location is stored by the closest node in each of the other three order-n squares, for every order n. • Thus, the density of A's location servers is high close to A and falls off exponentially with distance.

  27. GLS Location Example • Node B's ID is 17. • Location servers are closest nodes in other: • order-1 squares: (2, 23, 63) • order-2 squares: (26, 31, 43) • order-3 squares: (37, 19, 20)
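
Reading "closest node" as closest in ID space (the node with the least ID greater than the target's, wrapping around when none is greater), the following sketch of server selection is consistent with this example; the ID sets are hypothetical.

```python
def location_server(target_id, ids_in_square):
    """Pick B's location server among the node IDs present in one square:
    the least ID greater than target_id, wrapping around the ID space."""
    greater = [i for i in ids_in_square if i > target_id]
    return min(greater) if greater else min(ids_in_square)

# For B's ID = 17: a square holding {2, 5, 10} yields server 2 (wrap-around),
# one holding {23, 40} yields 23, and one holding {63, 90} yields 63.
print(location_server(17, [2, 5, 10]),
      location_server(17, [23, 40]),
      location_server(17, [63, 90]))       # 2 23 63
```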

  28. GLS Bootstrapping • Initially, location information must be distributed. • All nodes know all other nodes in order-1 square. • LOCATION UPDATE message sent to each order-2 square where it is routed to closest node. • Then sent to each order-3, then order-4, ....

  29. GLS Location Query • Source node sends query to closest node, A, in its order-1 square (or itself). • A sends query to closest node, B, in its order-2 square. • This continues until the destination is found. • Destination replies directly to sender and all remaining communication is done using closest-neighbor routing.
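
A sketch of the query walk as described above, reusing location_server() from the previous snippet; nodes_in_square() and knows_location() are hypothetical callbacks standing in for each node's local knowledge of its enclosing squares.

```python
def route_query(source, target_id, max_order, nodes_in_square, knows_location):
    """Follow the slide's description: step through ever-larger enclosing
    squares, handing the query to the best-ID node in each, until some node
    can answer with the destination's location."""
    current = source
    for n in range(1, max_order + 1):
        if knows_location(current, target_id):
            return current                      # this node stores the location
        candidates = nodes_in_square(current, n)
        current = location_server(target_id, candidates)
    return current
```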

  30. GLS Query Example • Two queries: • 76 -> 17 • 90 -> 17

  31. GLS Node Mobility • A node updates its location information based on its speed and the location server's order: • order-2: after moving a distance d. • order-n: after moving 2^(n-2)·d. • On leaving a square, a node leaves forwarding pointers with others.
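
The movement thresholds above amount to a simple doubling rule; a one-line sketch with an illustrative distance d follows.

```python
def update_threshold(order, d):
    """Distance a node may move before refreshing its order-n location
    servers: d for order-2, doubling with each higher order."""
    return d * 2 ** (order - 2)

# With d = 100 m: order-2 servers are refreshed every 100 m of movement,
# order-3 every 200 m, order-4 every 400 m.
print([update_threshold(n, 100) for n in (2, 3, 4)])   # [100, 200, 400]
```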

  32. GLS Analysis • Query never leaves the smallest order-n square containing source and destination. • Thus, at most n steps are required for lookup. • Simulation using wireless-ns. • Average query only requires 6 more hops than shortest path.

  33. GLS Scalability • Queries succeed on first try even with many nodes. • Traffic scales well with increasing nodes.

  34. GLS Stability • With all nodes unstable, GLS achieves >60% success. • Location database also scales well as network increases.

  35. GLS Comments • Variable-size order-1 squares possible. • More efficient partitioning schemes – rectangle with aspect ratio 1/√2 needs 2/3 location servers. • No error-handling in prototype implementation. • Routing around holes. • Retransmission.

  36. More GLS Comments • Proactive mobility handling: • Account for movement in LOCATION UPDATE. • Redistribute location data before leaving a square. • (dis)advantages, future, etc... • worst-case – possible overlaps to prevent???
