

  1. On the Optimal Allocation of Adversarial Resources Stylianos Gisdakis, Panos Papadimitratos Adviser: Frank, Yeong-Sung Lin Presented by Chris Chang

  2. Agenda • Introduction • System and adversarial model • Adversarial tactics • Analysis • Conclusion and future work

  3. Agenda • Introduction • System and adversarial model • Adversarial tactics • Analysis • Conclusion and future work

  4. Introduction • Wireless sensor networks cover a broad range of mission-critical applications. • The nature of these applications, often operating in hostile and adverse environments, makes security indispensable. • There has been a wide gamut of security schemes for wireless sensor networks, for example: • managing cryptographic keys • securing communication • detecting faulty data aggregation • detecting Sybil attacks

  5. Introduction • The adversary may compromise multiple sensor nodes; that is, control their operation and extract their private/secret cryptographic material. Such an adversary can replicate these cryptographic keys and insert her own misbehaving nodes at will. • Clearly, the more numerous the cryptographic keys and nodes under the adversary's control, the greater her strength.

  6. Introduction • Assume that security mechanisms are in place, and consider the above-mentioned strong adversary taking over a significant fraction of the system nodes. • Where and how should the adversary strike? Equivalently, how should the adversary allocate her resources in order to distort as much as possible of the data collected by the victim network?

  7. Introduction • In this paper, the focus is on the case where the entire network, or a large part of it, is of interest to an adversary that does not have overwhelming power. • We model the mission-critical network as a set of parts that the adversary can attack. • The better the choice of attack points, that is, the network parts where the attack is mounted, the higher the impact and thus the “gain” of the adversary.

  8. Introduction • In this paper, we do not venture to reveal vulnerabilities of this WSN. • Rather, we analyze the tactics of the adversary, precisely to shed light on how vulnerable a mission-critical sensor network can be as a function of the adversarial strength. • We see that the problem of identifying an optimal attack is computationally hard. Thus, we develop an efficient heuristic approach to determine a close-to-optimal attack tactic.

  9. Agenda • Introduction • System and adversarial model • Adversarial tactics • Analysis • Conclusion and future work

  10. System and adversarial model • System and adversarial model • System Model • Adversarial Model • Problem Statement

  11. System and adversarial Model (System Model) • We model a wireless sensor network (WSN) as a set S of N clusters, S = {C_1, C_2, ..., C_N}. • We do not dwell on cluster formation, e.g., the communication topology formation (clusters are formed according to the requirements of the supported application). • The number of benign nodes within a cluster C_i is given by a function F_ben : S → ℕ.

  12. System and adversarial Model (System Model) • The valuations of all the clusters of the network are encoded as a vector V = {V_1, V_2, ..., V_N}. • These values are either proportional to the number of benign nodes within the specific cluster or context-specific. • We define U_total as the utility gained by the adversary by controlling the whole network, so that U_total = V_1 + V_2 + ... + V_N. (A minimal sketch of this model follows below.)
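
A minimal sketch of the system model in Python, using hypothetical names and purely illustrative numbers that mirror the slide notation (clusters, F_ben, V, U_total); it is not the authors' implementation.

```python
# Minimal sketch of the system model (hypothetical names mirroring the slide notation).
# The WSN is a set of clusters; F_ben gives the number of benign nodes per cluster,
# V holds the cluster valuations, and U_total is the utility of controlling everything.

clusters = ["C1", "C2", "C3", "C4", "C5"]

# F_ben(C_i): benign nodes per cluster (illustrative figures only).
F_ben = {"C1": 10, "C2": 7, "C3": 12, "C4": 5, "C5": 9}

# Cluster valuations V_i, here taken proportional to the benign-node count.
V = {c: float(F_ben[c]) for c in clusters}

# Utility gained by the adversary when controlling the whole network.
U_total = sum(V.values())
print(f"U_total = {U_total}")  # U_total = 43.0
```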

  13. System and adversarial Model (System Model) • Nodes are equipped with a set of cryptographic keys used to ensure the confidentiality, the integrity, and the authenticity of the communications among nodes and the sink.

  14. System and adversarial Model (Adversarial Model) • We assume that adversarial resources fall into two categories: • physical devices (R_Phy) • cryptographic keys (R_Crypto) • R_Phy is the number of sensor nodes the adversary controls, either by having introduced them into the system or by having compromised formerly deployed benign nodes. • R_Phy ≪ n, i.e., not a large fraction of the total number of sensor nodes.

  15. System and adversarial Model (Adversarial Model) • R_Crypto is the number of cryptographic keys that the attacker possesses, obtained from compromised benign nodes. • R_Phy ≤ R_Crypto (which means that a compromised node can hold more than one cryptographic key). • We require that a single key cannot be used by more than one node simultaneously, so as not to trigger Sybil-detection schemes.

  16. System and adversarial Model (Adversarial Model) • In our model, the adversary is aware of the allocation of benign nodes within each cluster (i.e., she knows F_ben(C_i)). • The function F_val : [0, n] → ℝ+ maps cluster C_i to its value. • The attacker is aware of F_val and, as a result, can quantify the utility gained by controlling each cluster.

  17. System and adversarial Model (Adversarial Model) • We consider data manipulation, so that the view of the data the WSN user gets is not the actual one but the one the adversary wishes for. • This attack can be launched against any of the clusters. If the manipulation takes place, either arbitrarily or within a level wished by the adversary, we say the adversary has won over the cluster. • We term the utility of the adversary U_mal.

  18. System and adversarial Model (Adversarial Model) • We consider two generic types of attack to control a cluster and manipulate the produced measurements: • Local Majority Attack • Stealthy Data Attack • Local Majority Attack: • The adversary controls the majority of the nodes in the cluster and can impair or affect any data collection process. • Even with a smaller fraction of nodes, an attacker can still affect data collection. • Deviations can be the product of false measurements injected by the malicious nodes.

  19. System and adversarial Model (Adversarial Model) • Stealthy Data Attack: • To remain undetected when misbehavior-detection mechanisms are in place, adversary-controlled nodes report data that differ by no more than δ from the measurements reported by benign cluster members. (A worked example follows below.)
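
As a worked illustration (an assumption, not from the paper), suppose the cluster aggregate is a plain average: the sketch below shows how adversary-controlled nodes, each reporting a value shifted by δ from the benign mean, pull the aggregate by some Δ.

```python
# Illustrative sketch of a stealthy data attack on an averaging aggregate
# (the averaging aggregate is an assumption; the slides only bound each
# malicious report by delta to evade misbehavior detection).

def aggregate_deviation(n_benign, n_malicious, benign_mean, delta):
    """Deviation Delta of the cluster average when each malicious node
    reports benign_mean * (1 + delta), i.e., stays within the delta bound."""
    malicious_report = benign_mean * (1 + delta)
    total = n_benign * benign_mean + n_malicious * malicious_report
    average = total / (n_benign + n_malicious)
    return average - benign_mean

# Example: 10 benign nodes reporting around 20.0, 4 malicious nodes, delta = 10%.
print(aggregate_deviation(10, 4, 20.0, 0.10))  # ≈ 0.57 deviation of the aggregate
```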

  20. System and adversarial Model (Problem Statement) • The adversary can choose which clusters to attack; in our model, this means choosing the clusters in which she will deploy adversarial nodes (out of the available R_Phy). • Then, for each of the allocated nodes, the adversary can choose how many keys to equip it with (out of the R_Crypto). • Our goal is to identify the optimal allocation of resources that maximizes her U_mal, given the deployment of the benign nodes of the WSN.

  21. Agenda • Introduction • System and adversarial model • Adversarial tactics • Analysis • Conclusion and future work

  22. Adversarial tactics • First, the adversary decides on the subset M of clusters which, when controlled, will maximize U_mal. • In order to attack a cluster, she must allocate at least one physical device to each of the clusters in M, hence |M| ≤ R_Phy.

  23. Adversarial tactics • Since each cluster has a value, the problem becomes finding the subset of clusters that yields a U_mal as close as possible to U_total without exceeding a predefined value termed the target. • This is equivalent to the Subset Sum problem S(V, U_total), which is an NP-complete combinatorial optimization problem. (A brute-force illustration follows below.)
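
A brute-force sketch of the subset-sum view of cluster selection (illustrative helper, not the paper's algorithm): it enumerates every subset of cluster values to find the one closest to the target without exceeding it, which is exactly what becomes intractable as the number of clusters grows and what motivates the heuristic.

```python
# Brute-force subset sum over cluster values (illustrative only; exponential in
# the number of clusters, which is why a GA-based heuristic is used instead).
from itertools import combinations

def best_subset(values, target):
    """Return the subset of cluster values whose sum is closest to `target`
    without exceeding it, together with that sum."""
    best, best_sum = (), 0
    for r in range(1, len(values) + 1):
        for subset in combinations(values, r):
            s = sum(subset)
            if best_sum < s <= target:
                best, best_sum = subset, s
    return best, best_sum

# Example with the illustrative valuations from the earlier sketch and a target below U_total.
print(best_subset([10.0, 7.0, 12.0, 5.0, 9.0], target=30.0))  # ((10.0, 7.0, 12.0), 29.0)
```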

  24. Adversarial tactics • To calculate the U_mal for a given subset M, the attacker must determine the optimal distribution of R_Crypto among the clusters in the subset. • Moreover, when R_Crypto does not allow the attacker to control all of the clusters of a subset M, the problem is equivalent to the 0-1 Knapsack Problem of combinatorial optimization.

  25. Adversarial tactics • Adversarial tactics • Cluster Selection • Resource Allocation per Selected Clusters

  26. Adversarial tactics (Cluster Selection) • In this paper, we use Genetic Algorithms (GAs), whose main algorithmic structure is termed a chromosome, to solve the subset sum problem. • A chromosome is a candidate solution to the optimization problem. • The two basic operators of genetic algorithms are: • Mutation • Cross-over

  27. Adversarial tactics (Cluster Selection) • We apply this genetic algorithm as a heuristic for the cluster selection problem. • Chromosomes have a number of genes equal to R_Phy (the adversary is physically constrained to R_Phy devices).

  28. Adversarial tactics (Cluster Selection) • Each gene holds an integer value that represents the index or identifier of some cluster. • For example, consider the chromosome {C_1, C_5, C_8, C_20}: • This candidate solution (chromosome) describes a scenario with an adversary constrained to R_Phy = 4, • attacking clusters C_1, C_5, C_8, and C_20. (A sketch of this encoding follows below.)
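
A rough sketch of the chromosome encoding and the two GA operators, with assumed names and parameters; it is not the JGAP-based implementation used in the paper, only an illustration of how a candidate cluster selection can be mutated and crossed over.

```python
# Illustrative chromosome encoding for cluster selection (assumed names, not the paper's code).
# A chromosome has exactly R_Phy genes; each gene is the index of an attacked cluster.
import random

R_PHY = 4          # adversary's budget of physical devices (illustrative)
NUM_CLUSTERS = 25  # clusters C_0 .. C_24 (illustrative)

def random_chromosome():
    """Random candidate solution: R_Phy distinct cluster indices."""
    return random.sample(range(NUM_CLUSTERS), R_PHY)

def mutate(chromosome, rate=0.1):
    """Replace each gene, with probability `rate`, by a random cluster index."""
    return [random.randrange(NUM_CLUSTERS) if random.random() < rate else g
            for g in chromosome]

def crossover(parent_a, parent_b):
    """Single-point cross-over of two parent chromosomes.
    (A real implementation would also repair duplicate genes.)"""
    point = random.randint(1, R_PHY - 1)
    return parent_a[:point] + parent_b[point:]

# Example: the chromosome [1, 5, 8, 20] encodes an attack on clusters C_1, C_5, C_8, C_20.
print(crossover([1, 5, 8, 20], random_chromosome()))
```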

  29. Adversarial tactics (Cluster Selection) • In each evolution of the genetic algorithm, chromosomes are evaluated based on the maximum utility they can achieve. • After a defined number of iterations (evolutions of the algorithm), the GA converges to an optimal (or, in the case of large-scale networks, near-optimal) allocation of both R_Crypto and R_Phy.

  30. Adversarial tactics (Resource Allocation per Selected Clusters) • F_cost : [0, Max] → ℕ is the function that assigns a cost to each cluster. • To launch a majority attack against a cluster C_i: • the cost function is F_cost(C_i) = F_ben(C_i) + 1. • To launch attacks against data aggregation: • the cost function is F_cost(C_i, Δ, δ) → ℕ. • This function gives the number of malicious nodes required to produce a deviation Δ from the average aggregate by reporting values that deviate by a percentage δ from the average value produced by the rest of the nodes in the cluster. (A sketch of both cost functions follows below.)
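
A hedged sketch of the two cost functions: the majority-attack cost follows the slide's formula directly, while the data-aggregation cost assumes a plain averaging aggregate with each malicious node reporting at the δ bound (an assumption, since the slides do not give its closed form).

```python
# Sketch of the per-cluster cost functions (the aggregation cost assumes a plain
# averaging aggregate; the slides only describe it abstractly as F_cost(C_i, Delta, delta)).
import math

def cost_majority(f_ben_ci):
    """Nodes needed for a local majority attack: F_cost(C_i) = F_ben(C_i) + 1."""
    return f_ben_ci + 1

def cost_aggregation(f_ben_ci, Delta, delta, benign_mean):
    """Malicious nodes needed to shift an averaging aggregate by Delta when each
    malicious node reports benign_mean * (1 + delta)  [assumed aggregate model]."""
    shift_per_node = delta * benign_mean           # offset contributed by one malicious report
    if shift_per_node <= Delta:
        return math.inf                            # delta too small to ever reach Delta
    # From Delta * (n_ben + m) <= m * delta * mean, the smallest integer m satisfies
    # m * (shift_per_node - Delta) >= Delta * n_ben.
    return math.ceil(Delta * f_ben_ci / (shift_per_node - Delta))

print(cost_majority(10))                      # 11
print(cost_aggregation(10, 0.5, 0.10, 20.0))  # 4
```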

  31. Adversarial tactics (Resource Allocation per Selected Clusters) • For a chromosome with N genes, the problem of the optimal allocation of R_Crypto is formulated as follows: maximize Σ_{i=1..N} x_i · V_i subject to Σ_{i=1..N} x_i · F_cost(C_i) ≤ R_Crypto, with x_i ∈ {0, 1} indicating whether cluster C_i of the chromosome is attacked.

  32. Adversarial tactics (Resource Allocation per Selected Clusters) • This maximization program is in fact the formal definition of the 0-1 Knapsack Problem. • In the paper, we use dynamic programming to solve the 0-1 Knapsack Problem. • The output of this algorithm is a vector that defines which clusters of the chromosome under evaluation should be selected in order to achieve maximum utility within the resource constraint set by R_Crypto. (A dynamic-programming sketch follows below.)
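
A compact dynamic-programming sketch of the 0-1 knapsack step, with assumed helper names and illustrative numbers; it returns both the best achievable utility and the selection vector for the clusters of a chromosome, analogous to the output described above.

```python
# 0-1 knapsack by dynamic programming over the R_Crypto budget (illustrative sketch).
# values[i] : V_i of the i-th cluster in the chromosome
# costs[i]  : F_cost(C_i), i.e., keys needed to win that cluster
# budget    : R_Crypto

def knapsack(values, costs, budget):
    n = len(values)
    # dp[i][b] = best utility using the first i clusters with b keys available.
    dp = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            dp[i][b] = dp[i - 1][b]                      # skip cluster i-1
            if costs[i - 1] <= b:                        # or attack it, if affordable
                dp[i][b] = max(dp[i][b], dp[i - 1][b - costs[i - 1]] + values[i - 1])
    # Backtrack to recover the selection vector x_i in {0, 1}.
    selection, b = [0] * n, budget
    for i in range(n, 0, -1):
        if dp[i][b] != dp[i - 1][b]:
            selection[i - 1] = 1
            b -= costs[i - 1]
    return dp[n][budget], selection

# Example: chromosome {C_1, C_5, C_8, C_20} with illustrative values/costs, R_Crypto = 20.
print(knapsack([10.0, 9.0, 12.0, 5.0], [11, 10, 13, 6], budget=20))  # (17.0, [0, 0, 1, 1])
```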

  33. Agenda • Introduction • System and adversarial model • Adversarial tactics • Analysis • Conclusion and future work

  34. Analysis • Simulation Setup • Results

  35. Analysis (simulation setup) • We implemented our model using the JGAP [5] genetic algorithm package. • For the dynamic programming part of the model, we implemented a basic dynamic programming algorithm. • The setup of the experiments included configurations with clusters assigned varying numbers of benign sensor nodes. • In every simulation, the attacker was provided with a number of compromised nodes and a number of cryptographic resources.

  36. Analysis (simulation setup)

  37. Analysis (Results)

  38. Analysis (Results)

  39. Analysis (Results)

  40. Analysis (Results)

  41. Agenda • Introduction • System and adversarial model • Adversarial tactics • Analysis • Conclusion and future work

  42. Conclusion and future work • We consider resilience in the presence of strong and intelligent adversaries which, in principle, cannot have overwhelming power. • To maximize the impact of her exploit, the attacker must know where and how to attack the victim network.

  43. Conclusion and future work • We develop an efficient and effective heuristic that can guide the adversary, notably the allocation of her resources, in solving this computationally hard problem, and we find that our approximate solution is near-optimal. • We will expand our investigation to make it two-sided, covering both the attacker and the system security designer; the latter is in fact our ultimate target.

  44. Thank you for listening
