
Network Economics -- Lecture 3: Incentives and games in security


Presentation Transcript


  1. Network Economics -- Lecture 3: Incentives and games in security Patrick Loiseau, EURECOM, Fall 2012

  2. References • J. Walrand, “Economic Models of Communication Networks”, in Performance Modeling and Engineering, Zhen Liu and Cathy H. Xia (Eds.), Springer, 2008 (tutorial given at SIGMETRICS 2008) • Available online: http://robotics.eecs.berkeley.edu/~wlr/Papers/EconomicModels_Sigmetrics.pdf • N. Nisan, T. Roughgarden, E. Tardos and V. Vazirani (Eds.), “Algorithmic Game Theory”, CUP, 2007. Chapters 17, 18, 19, etc. • Available online: http://www.cambridge.org/journals/nisan/downloads/Nisan_Non-printable.pdf

  3. Outline • Interdependence: investment and free riding • Information asymmetry • Attacker versus defender games

  4. Outline • Interdependence: investment and free riding • Information asymmetry • Attacker versus defender games

  5. Incentive issues in security • Plenty of security solutions… • Cryptographic tools • Key distribution mechanisms • etc. • …useless if users do not install them • Examples: • Software not patched • Private data not encrypted • The actions of one user affect the others! → game

  6. A model of investment • Jiang, Anantharam and Walrand, “How bad are selfish investments in network security”, IEEE/ACM ToN 2011 • Set of users N = {1, …, n} • User i invests xi ≥ 0 in security • Utility and assumptions: the slide’s formulas are not captured in the transcript; a hedged sketch follows below
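A minimal sketch of the kind of utility such interdependence models use, assuming a linear interdependence structure — the coefficients b_ij and the exact form are illustrative assumptions, not the paper's verbatim notation:

```latex
% Assumed illustrative form: user i pays x_i and enjoys protection
% g_i of a weighted sum of everyone's investments.
u_i(x) \;=\; g_i\!\Big(\sum_{j=1}^{n} b_{ij}\, x_j\Big) \;-\; x_i,
\qquad b_{ij} \ge 0,\quad g_i \text{ nondecreasing and concave.}
```

The positive cross terms b_ij (j ≠ i) are what create the externality discussed on the next slide.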

  7. Free-riding • Positive externality → we expect free-riding • Nash equilibrium xNE • Social optimum xSO • We look at the ratio of the social welfare at xSO to the social welfare at xNE (spelled out below) • Characterizes the ‘price of anarchy’
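Spelling out the ratio under one standard definition (the slide's formula is omitted; the welfare-ratio convention is an assumption):

```latex
% Assumed standard welfare-ratio definition of the price of anarchy:
\mathrm{PoA} \;=\; \frac{W(x^{SO})}{W(x^{NE})},
\qquad W(x) \;=\; \sum_{i=1}^{n} u_i(x).
```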

  8. Remarks • Interdependence of security investments • Examples: • DoS attacks • Virus infection • Asymmetry of investment importance • Simpler model in Varian, “System reliability and free riding”, in Economics of Information Security, 2004

  9. Price of anarchy • Theorem: the PoA admits an upper bound (formula omitted from the transcript; a hedged reading follows slide 10), and the bound is tight

  10. Comments • The quantity in the bound is player j’s importance to the society • The PoA is bounded by the importance of the most important player, regardless of gi(.)
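Combining the theorem statement with this comment, one plausible reading — a hedged reconstruction, since the slide's formula is not in the transcript, with the importance measure I_j an assumption written in the sketch notation above:

```latex
% Hedged reconstruction: PoA bounded by the largest player importance,
% with the importance of j illustratively taken as the total weight of
% j's investment in everyone's protection.
\mathrm{PoA} \;\le\; \max_{j \in N} I_j,
\qquad I_j \;=\; \sum_{i=1}^{n} b_{ij}.
```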

  11. Examples

  12. Bound tightness

  13. Investment costs • Modify the utility to include an explicit cost of investment (the modified utility is omitted from the transcript) • The bound in the theorem changes accordingly (formula omitted)

  14. Outline • Interdependence: investment and free riding • Information asymmetry • Attacker versus defender games

  15. Information asymmetry • Hidden actions • See previous lecture • Hidden information • Market for lemons • Example: software security

  16. Market for lemons • Akerlof, 1970 • Nobel prize in 2001 • 100 car sellers • 50 have bad cars (lemons), willing to sell at $1k • 50 have good cars, willing to sell at $2k • Each knows their own car’s quality • 100 car buyers • Willing to buy bad cars for $1.2k • Willing to buy good cars for $2.4k • Cannot observe the car quality

  17. Market for lemons (2) • What happens? What is the clearing price? • Buyers only know the average quality • Willing to pay $1.8k • But at that price, no good car seller sells • Therefore, buyers know they will buy a lemon • Pay at most $1.2k • No good car is sold (see the sketch below)
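The unraveling can be checked mechanically; here is a minimal sketch in Python using the slide's numbers (variable names are ours):

```python
# Lemons unraveling with the slide's numbers.
GOOD_ASK, BAD_ASK = 2000, 1000   # sellers' reservation prices
GOOD_BID, BAD_BID = 2400, 1200   # buyers' valuations

sellers = {"good": 50, "bad": 50}
while True:
    total = sellers["good"] + sellers["bad"]
    # Buyers cannot observe quality, so they bid the expected value
    # of a car drawn at random from those currently on the market.
    price = (sellers["good"] * GOOD_BID + sellers["bad"] * BAD_BID) / total
    print(f"on offer: {sellers}, buyers bid {price:.0f}")
    if price < GOOD_ASK and sellers["good"] > 0:
        sellers["good"] = 0      # good sellers exit: adverse selection
    else:
        break
print(f"only lemons trade, at {price:.0f}")
```

The first round reproduces the $1.8k bid; once the good sellers exit, the bid drops to $1.2k and only lemons are sold.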

  18. Market for lemons (3) • This is a market failure • Created by externalities: bad car sellers impose an externality on good car sellers by decreasing the average quality of cars on the market • Software security: • Vendors may know their product’s security • Buyers have no reason to trust them • So they won’t pay a premium • Another example: insurance for older people

  19. Outline • Interdependence: investment and free riding • Information asymmetry • Attacker versus defender games

  20. Network security [Symantec 2011] • Security threats increase due to technology evolution • Mobile devices, social networks, virtualization • Cyberattacks are the top risk businesses face • 71% had at least one in the last year • Top 3 losses due to cyberattacks: downtime, employee identity theft, theft of intellectual property • Losses are substantial • 20% of businesses lost > $195k • Hence a tendency to start using analytical models to optimize the response to security threats

  21. Attacker-defender games • Attackers learn the defense strategies and adapt • Toy example: • You observe that every time a thief breaks into your house, it is a weekday • You pay two guards to stay on weekdays • Next time, the thief will break into your house at the weekend! • Same situation in many applications • Spam detection, intrusion detection → Strategic players

  22. Intrusion detection systems (IDS) • Detect unauthorized use of a network • Monitor traffic • Signature based (store signatures of known attacks): Snort, Bro • Anomaly based (compare to “normal” behavior) • Monitoring has a cost • CPU (e.g., for real-time analysis) • [Alpcan, Basar 2011]

  23. The simplest game • Attacker: {attack, no attack} ({a, na}) • Defender: {monitoring, no monitoring} ({m, nm}) • Payoffs: a 2×2 matrix over (a, na) × (m, nm) (values omitted from the transcript) • “Safe strategy” (or min-max): • Attacker: na • Defender: m if αs > αf, nm if αs < αf

  24. The simplest game: Nash equilibrium • Payoffs: same 2×2 matrix over (a, na) × (m, nm) • Non-zero-sum game • There is no pure-strategy NE • Mixed-strategy NE: • Neutralize the opponent (make him indifferent) • The opposite of optimizing one’s own payoff (each mix is independent of the player’s own payoffs) — see the sketch below
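To make the indifference logic concrete, here is a minimal sketch with hypothetical payoff numbers (the slide's actual matrix is not reproduced in the transcript):

```python
import numpy as np

# Hypothetical payoffs, chosen so that no pure-strategy NE exists.
# Rows: attacker {a, na}; columns: defender {m, nm}.
A = np.array([[-1.0, 2.0],    # attack: caught vs. successful
              [ 0.0, 0.0]])   # no attack
D = np.array([[ 1.0, -2.0],   # attack row: detected vs. missed
              [-0.5,  0.0]])  # no-attack row: false alarm vs. quiet

# Defender monitors with probability q chosen to make the ATTACKER
# indifferent between a and na:
q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[1, 0] - A[0, 1] + A[1, 1])
# Attacker attacks with probability r chosen to make the DEFENDER
# indifferent between m and nm:
r = (D[1, 1] - D[1, 0]) / (D[0, 0] - D[1, 0] - D[0, 1] + D[1, 1])
print(f"defender monitors w.p. {q:.3f}, attacker attacks w.p. {r:.3f}")
```

Note that q depends only on the attacker's payoffs and r only on the defender's: each player's mix neutralizes the opponent, which is exactly the "opposite of own optimization" point above.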

  25. A Bayesian game formulation • [Liu et al 2006]: the defender’s opponent is of unknown type, malicious or regular

  26. A Bayesian game formulation (cont’d) • ca: cost of attack • cm: cost of monitoring • w: value of the protected asset • α: detection rate • β: false alarm rate • Attacker’s payoff when attacking a monitoring defender: −αw + (1−α)w − ca • [Liu et al 2006]

  27. Bayesian Nash equilibrium • If the prior probability of the malicious type is small enough (exact condition omitted from the transcript), there is a pure-strategy equilibrium: • Attack if malicious • Do not monitor • Otherwise, there is no pure-strategy equilibrium: • The attacker attacks with a probability that makes the defender indifferent • The defender monitors with a probability that makes the attacker indifferent (sketched below)
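A sketch of where the defender's monitoring probability comes from, using the attacker payoff on the previous slide and assuming (the transcript omits the full payoff table) that an unmonitored attack yields w − ca and that not attacking yields 0:

```latex
% Attacker indifference when the defender monitors with probability p:
p\,\big[-\alpha w + (1-\alpha)w - c_a\big] \;+\; (1-p)\,\big[w - c_a\big] \;=\; 0
\;\;\Longrightarrow\;\;
p^{*} \;=\; \frac{w - c_a}{2\alpha w}.
```

The attacker's equilibrium attack probability follows from the symmetric indifference condition on the defender's (omitted) payoffs.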

  28. Classification games • Defender: observes the # of FS and MS attacks over N periods • Spammer: non-strategic, hits FS and MS at random • Spy: selects the # of FS hits H over the N periods • [Dritsoula, L., Musacchio 2012] • (Game tree: Nature draws a Spy with probability p or a Spammer with probability 1−p; hits target the File Server (FS) or the Mail Server (MS).)

  29. Thief or fox? • (Illustration: a “poor” shepherd must decide whether the animal thief after his livestock is a real thief targeting the precious goats or just a fox after the cheaper chickens.)

  30. Formulation • Spy picks the # of times H ∈ {0, …, N} to hit FS • Defender picks a threshold T ∈ {0, …, N+1} • Classifies as spy if H ≥ T • Classifies as spammer if H < T • RV S: # of times a spammer hits FS • Spy cost: J_S = c_d 1_{T≤H} − c_a H • Defender reward: U_D = p (c_d 1_{T≤H} − c_a H) − (1−p) c_fa P(S ≥ T) • Rescaled: Ũ_D = c_d 1_{T≤H} − c_a H − (1/p − 1) c_fa P(S ≥ T)

  31. Nash equilibrium is in mixed strategies • Spy seeks to attack just below T • Defender seeks to set T just equal to H • → No pure-strategy NE; both players mix • (Figure: the spy’s distribution over the # of FS hits, 0 to N, and the defender’s distribution over thresholds, 0 to N+1.)

  32. Payoff formulation in matrix form • Spy’s cost: J_S = c_d 1_{T≤H} − c_a H = α′Λβ, where α and β are the spy’s and defender’s mixed strategies • Λ: (N+1)×(N+2) matrix with entries Λ(H, T) = c_d 1_{T≤H} − c_a H • Technicality: add a constant so that Λ > 0 • Simple shift • Equilibrium unchanged

  33. Payoff formulation in matrix form (cont’d) • Defender’s payoff: U_D = c_d 1_{T≤H} − c_a H − (1/p − 1) c_fa P(S ≥ T) = α′Λβ − μ′β, where μ(T) = (1/p − 1) c_fa P(S ≥ T) is the false alarm penalty • Remark: a general bimatrix game J_S = α′Λβ, U_D = α′Πβ is computationally hard • Here: almost zero-sum game [Gueye et al. 2011]: J_S = α′Λβ, U_D = α′Λβ − μ′β

  34. Main theorem • In any NE, the defender’s strategy β maximizes the defendability θ(β) = min_H (Λβ)_H − μ′β • A maximizing β exists among strategies of one of two specific forms, parameterized by some s (the forms are omitted from the transcript) • If there is a unique maximizing β → unique NE • [Dritsoula, L., Musacchio 2012]

  35. Main theorem (consequences) • Computation of the NE in polynomial time (one concrete route is sketched below)
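Maximizing the defendability θ(β) = min(Λβ) − μ′β over the simplex is itself a linear program, which gives one concrete polynomial-time route. The sketch below takes that LP shortcut rather than the paper's own search over the two structured forms; it uses the first parameter set from slide 37 and assumes the spammer's FS hits are Binomial(N, θ0):

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import binom

# Parameters from the first slide-37 simulation.
N, p, theta0 = 7, 0.2, 0.1
cd, ca, cfa = 15.0, 23.0, 10.0

H = np.arange(N + 1)        # spy's possible # of FS hits
T = np.arange(N + 2)        # defender's possible thresholds

# Lambda(H, T) = cd * 1{T <= H} - ca * H, from the spy cost J_S.
Lam = cd * (T[None, :] <= H[:, None]) - ca * H[:, None]
Lam += abs(Lam.min()) + 1.0  # shift so Lambda > 0 (equilibrium unchanged)

# mu(T) = (1/p - 1) * cfa * P(S >= T), with S ~ Binomial(N, theta0)
# (binomial spammer behavior is an assumption of this sketch).
mu = (1 / p - 1) * cfa * binom.sf(T - 1, N, theta0)

# LP over x = [beta, t]: maximize t - mu@beta
# subject to (Lam @ beta) >= t componentwise and beta in the simplex.
nT = N + 2
c = np.concatenate([mu, [-1.0]])                  # minimize mu@beta - t
A_ub = np.hstack([-Lam, np.ones((N + 1, 1))])     # t - Lam@beta <= 0
b_ub = np.zeros(N + 1)
A_eq = np.concatenate([np.ones(nT), [0.0]]).reshape(1, -1)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * nT + [(None, None)])
beta = res.x[:nT]
print("defender's threshold mix:", np.round(beta, 3))
print("defendability:", -res.fun)
```

The attacker's equilibrium mix can then be recovered on the support of β by inverting a submatrix of Λ, as slide 36 notes.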

  36. Conclusion: NE • The defender’s strategy β is of one of the two forms above, or possibly a mix of these • Search over all of these to find the best defendability • Invert a submatrix of Λ to find the attacker’s mix • (Figure: candidate defender threshold mixes, with levels set by ca/cd.)

  37. Simulation results coincide with theory • Players’ NE strategies for N = 7, in two parameter settings: • θ0 = 0.1, cd = 15, ca = 23, cfa = 10, p = 0.2 • θ0 = 0.1, cd = 10, ca = 1, cfa = 10, p = 0.8

  38. Spy’s distribution • The attacker must keep the defender “indifferent” among the thresholds in the defender’s support • When the defender increases the threshold from T to T+1: • The false alarm penalty decreases by an amount ∝ P(S = T) • Thus the spy sets P(H = T) ∝ P(S = T), so that the matching increase in the missed-detection penalty balances it (derivation below)
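Making the balancing explicit with the rescaled defender payoff from slide 30 (a short derivation consistent with the bullets above, not verbatim from the slides):

```latex
% Defender's payoff change when raising the threshold from T to T+1:
\tilde U_D(T{+}1) - \tilde U_D(T)
 \;=\; -\,c_d\,P(H = T) \;+\; \Big(\tfrac{1}{p}-1\Big)c_{fa}\,P(S = T).
% Indifference across the thresholds in the defender's support forces
P(H = T) \;=\; \Big(\tfrac{1}{p}-1\Big)\frac{c_{fa}}{c_d}\,P(S = T)
 \;\propto\; P(S = T).
```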

  39. Spy’s strategy • The spy’s NE strategy is a truncated version of the spammer’s distribution, plus a “max” attack • (Figure: N = 100, θ0 = 0.1, cd = cfa = 142, ca = 1, p = 0.1)

  40. Concluding remarks • Attackers are not dumb when there is big money at stake • The defender must take this into account • The interaction between statistical learning and game theory is still largely under-explored • Application to spam filtering: [Nelson et al. 2009] showed that a spammer who knows your spam filter’s learning rules and controls only 1% of your training set can shape spams to pass through the filter • Thousands of other applications • An exciting research problem

  41. References • [Symantec 2011] “2011 State of Security Survey”, Symantec, 2011 • [Alpcan, Basar 2011] “Network Security: A Decision and Game Theoretic Approach”, Alpcan and Basar, CUP, 2011 • [Liu et al 2006] “A Bayesian Game Approach for Intrusion Detection in Wireless Ad Hoc Networks”, Liu, Comaniciu and Man, Valuetools 2006 • [Gueye et al. 2011] “A Network Topology Design Game: How to Choose Communication Links in an Adversarial Environment?”, Gueye, Walrand and Anantharam, GameNets 2011 • [Nelson et al. 2009] “Misleading Learners: Co-opting Your Spam Filter”, Nelson, Barreno, Chi, Joseph, Rubinstein, Saini, Sutton, Tygar and Xia, in Machine Learning in Cyber Trust: Security, Privacy, Reliability, Springer, 2009 • [Dritsoula, L., Musacchio 2012] “Computing the Nash Equilibria of Intruder Classification Games”, Lemonia Dritsoula, Patrick Loiseau and John Musacchio, in Proceedings of GameSec 2012 • [Dritsoula, L., Musacchio 2012] “A Game-Theoretical Approach for Finding Optimal Strategies in an Intruder Classification Game”, Lemonia Dritsoula, Patrick Loiseau and John Musacchio, in Proceedings of IEEE CDC 2012
