Game Theory for Security: Lessons Learned from Deployed Applications

    Presentation Transcript
    1. Game Theory for Security: Lessons Learned from Deployed Applications. Milind Tambe, Chris Kiekintveld, Richard John & teamcore.usc.edu

    2. Motivation: Infrastructure Security
    • Limited security resources: selective checking
    • Adversary monitors defenses, exploits patterns
    [Figure: map of the area around LAX, 50-mile scale]

    3. Game Theory: Bayesian Stackelberg Games
    • Security allocation depends on (i) target weights and (ii) the opponent's reaction
    • Stackelberg: security forces commit first
    • Bayesian: uncertain adversary types
    • Optimal security allocation: weighted random
    • Strong Stackelberg Equilibrium (Bayesian): NP-hard
    [Figure: police defender vs. adversary payoff matrix]
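The commit-then-respond structure can be sketched on a toy two-target game: grid-search the defender's mixed strategy, let the attacker best-respond, and break attacker ties in the defender's favor (the SSE convention). All payoff numbers and the grid step are made up for illustration; the deployed systems solve this with mathematical programs, not search.

```python
# Toy Strong Stackelberg Equilibrium (SSE) for a 2-target security game
# with one defender resource, via grid search over coverage probability.
# Illustrative payoffs only, not from any deployed application.

targets = {
    # entries: defender payoff if covered/uncovered, attacker payoff if covered/uncovered
    "terminal":   {"d_cov": 1.0, "d_unc": -5.0, "a_cov": -1.0, "a_unc": 5.0},
    "checkpoint": {"d_cov": 0.5, "d_unc": -2.0, "a_cov": -0.5, "a_unc": 2.0},
}

def best_sse(step=0.001):
    best_cov, best_util, best_target = None, float("-inf"), None
    c = 0.0
    while c <= 1.0:
        cov = {"terminal": c, "checkpoint": 1.0 - c}  # one resource, randomized
        a_util = lambda t: cov[t] * targets[t]["a_cov"] + (1 - cov[t]) * targets[t]["a_unc"]
        d_util = lambda t: cov[t] * targets[t]["d_cov"] + (1 - cov[t]) * targets[t]["d_unc"]
        # Attacker best-responds to the observed mixed strategy; SSE
        # tie-breaking favours the defender, hence the tuple key.
        attacked = max(targets, key=lambda t: (a_util(t), d_util(t)))
        if d_util(attacked) > best_util:
            best_cov, best_util, best_target = c, d_util(attacked), attacked
        c = round(c + step, 6)
    return best_cov, best_util, best_target

cov, util, attacked = best_sse()
print(f"cover 'terminal' with p={cov:.3f}, defender utility {util:.3f}, attack on {attacked}")
```

The optimum lands at the point where the attacker is indifferent between the two targets (p ≈ 0.647 here), the typical shape of SSE solutions in security games.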

    4. Game Theory for Infrastructure Security: Outline
    • Deployed real-world applications
    • Research challenges
      • Efficient algorithms: scale-up to real-world problems
      • Human adversary: bounded rationality & observations
      • Observability: adversary surveillance uncertainty
      • Payoff uncertainty: ..
    • Evaluation
    • Algorithms: AAMAS (06, 07, 08, 09); AAAI (08, 10); ...under review...
    • Algorithmic & experimental GT: AAMAS'09, AI Journal (2010), ...
    • Observability: AAMAS'10, ...
    • Applications: AAMAS Industry track (08, 09), AI Magazine (09), Journal ITM (09), Interfaces (10), Informatica (10), ...

    5. Deployed Applications: ARMOR & IRIS
    • IRIS: randomized allocation of air marshals to flights (October 2009)
    • ARMOR: randomized checkpoints & canine at LAX (August 2007)

    6. In Progress
    • TSA (GUARDS): towards national deployment Spring 2011; currently in evaluation and testing
    • Coast Guard (PROTECT): pilot demonstration, Boston, March 2011
    • Los Angeles Sheriff's Dept: ticketless travelers…

    7. Outline of Presentation
    • Deployed real-world applications
    • Research challenges
      • Efficient algorithms: scale-up to real-world problems
      • Observability: adversary surveillance uncertainty
      • Human adversary: bounded rationality, observation power
      • Payoff uncertainty: payoffs are distributions, not point values
    • Evaluation

    8. Efficient Algorithms
    Challenge: combinatorial explosions due to
    • Defender strategies: allocations of resources to targets (e.g., 100 flights, 10 FAMS)
    • Attacker strategies: attack paths (e.g., multiple attack paths to targets in a city)
    • Adversary types: combinations of adversary strategies

    9. SCALE-UP
    [Figure: scale-up progression from ARMOR to IRIS-I, IRIS-II, IRIS-III]

    10. ARMOR: Multiple Adversary Types
    • NP-hard
    • Previous work: linear programs using the Harsanyi transformation
    [Figure: adversary types with probabilities P=0.3, P=0.5, P=0.2]

    11. Multiple Adversary Types: Decomposition for Bayesian Stackelberg Games
    • Mixed-integer programs
    • No Harsanyi transformation
    • Significantly more efficient than previous approaches at the time

    12. SCALE-UP
    [Figure: scale-up progression from ARMOR to IRIS-I, IRIS-II, IRIS-III]

    13. Federal Air Marshals Service
    • International flights from Chicago O'Hare
    • Flights each day: ~27,000 domestic, ~2,000 international
    • Estimated 3,000-4,000 air marshals
    • Massive scheduling problem: how to assign marshals to flights?

    14. IRIS Scheduling Tool

    15. IRIS Scheduling Tool
    [Diagram: flight information, resources, and risk information feed the game model; the solution algorithm produces a randomized deployment schedule]

    16. IRIS: Large Numbers of Defender Strategies
    • FAMS joint strategies: 4 flight tours, 2 air marshals → 6 schedules
    • 100 flight tours, 10 air marshals → 1.73 x 10^13 schedules: ARMOR runs out of memory
    [Figure: joint-strategy matrices, 784 x 512 and beyond]
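The blow-up the slide quotes follows from a one-line binomial count, assuming marshals are interchangeable and each covers one tour; it reproduces both the 6-schedule toy case here and the 1.73 x 10^13 figure, and the 120 joint strategies mentioned for ARMOR on slide 18:

```python
import math

# Counting joint defender strategies: assigning r interchangeable
# resources to distinct tours gives C(n, r) joint strategies.
def joint_strategies(n_tours, n_resources):
    return math.comb(n_tours, n_resources)

print(joint_strategies(4, 2))     # toy FAMS case: 6 schedules
print(joint_strategies(10, 3))    # ARMOR scale: 120
print(joint_strategies(100, 10))  # IRIS scale: 17,310,309,456,440 ≈ 1.73 x 10^13
```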

    17. Addressing Scale-up in Defender Strategies
    • Security game: payoffs depend only on whether the attacked target is covered
      • Target independence
    • Avoid enumeration of all joint strategies:
      • Marginals: probabilities on individual strategies/schedules
      • Sample required joint strategies: IRIS I and IRIS II
        • But sampling may be difficult if schedules conflict
        • IRIS I (single target/flight), IRIS II (pairs of targets)
      • Branch & Price: probabilities on joint strategies
        • Enumerates required joint strategies, handles conflicts
        • IRIS III (arbitrary schedules over targets)

    18. Explosion in Defender Strategies: Marginals for Compact Representation
    • ARMOR on 10 tours, 3 air marshals: many payoff duplicates, since the payoff depends only on which targets are covered
    • IRIS MILP similar to ARMOR's, but with 10 marginal variables instead of 120: y1 + y2 + y3 + … + y10 = 3
    • Construct samples over tour combinations from the marginals
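One way to realize the "construct samples over tour combos" step is systematic ("comb") sampling: if the marginals sum to the number of marshals k and each marginal is at most 1, a single shared uniform offset selects exactly k tours with per-tour inclusion probability equal to the marginal. This is a standard sampling technique offered as a sketch, not necessarily the exact IRIS procedure:

```python
import random

def comb_sample(marginals, rng=random):
    """Systematic sampling: pick exactly k = sum(marginals) tour indices,
    including tour i with probability marginals[i] (each must be <= 1)."""
    k = round(sum(marginals))
    u = rng.random()                   # one uniform offset shared by all k "teeth"
    teeth = [u + j for j in range(k)]  # points spaced 1 apart in [0, k)
    chosen, cum = [], 0.0
    for i, y in enumerate(marginals):
        lo, cum = cum, cum + y
        # An interval of length y <= 1 catches at most one tooth.
        if any(lo <= t < cum for t in teeth):
            chosen.append(i)
    return chosen

# Example: 5 tours, 3 air marshals; marginals sum to 3.
marginals = [0.5, 0.5, 1.0, 0.25, 0.75]
random.seed(7)
print(comb_sample(marginals))  # exactly 3 tour indices; tour 2 (marginal 1.0) always included
```

Because the k teeth are one unit apart and the cumulative intervals partition [0, k), every draw contains exactly k tours, so the sample is always a feasible allocation.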

    19. IRIS Speedups: Efficient Algorithms II

                    ARMOR Actions   ARMOR Runtime   IRIS Runtime
    FAMS Ireland    6,048           4.74s           0.09s
    FAMS London     85,275          ----            1.57s

    20. IRIS III
    • Next generation of IRIS
    • General scheduling constraints
      • Schedules can be any subset of targets
      • Resources can be constrained to any subset of schedules
    • Branch and Price framework
      • Techniques for large-scale optimization
      • Not an "out of the box" solution

    21. IRIS III: Branch and Price (Branch & Bound + Column Generation)
    [Search-tree figure:
      first node: all a_i ∈ [0,1];
      second node: a_1 = 0, rest ∈ [0,1]; lower bound 1: a_1 = 1, rest = 0;
      third node: a_1, a_2 = 0, rest ∈ [0,1]; lower bound 2: a_2 = 1, rest = 0;
      …; last lower bound: a_k = 1, rest = 0]
    • Not "out of the box"
    • Bounds and branching: from IRIS I
    • Column generation at leaf nodes: network flow

    22. Column Generation
    • "Master" problem (mixed-integer program): solution over a restricted set of N joint schedules
    • "Slave" problem: returns the "best" (N+1)th joint schedule to add
      • Minimum-cost network flow identifies the joint schedule to add
      [Diagram: flow network from resource node through target nodes (e.g., Target 3, Target 7) to sink, capacity 1 on all links]
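The master/slave interaction can be illustrated on a toy instance. Here a brute-force enumeration stands in for the minimum-cost network flow slave; the schedules, targets, and dual prices below are all invented for the example:

```python
from itertools import combinations

# Toy column-generation "slave": given dual prices on targets from the
# master LP, return the joint schedule (one schedule per resource) with
# the highest total dual value of covered targets. The deployed system
# solves this as a minimum-cost network flow; brute force suffices here.

schedules = {  # hypothetical schedules -> targets they cover
    "s1": {"t1", "t2"},
    "s2": {"t2", "t3"},
    "s3": {"t3", "t4"},
    "s4": {"t2", "t4"},
}

def best_joint_schedule(duals, n_resources):
    best_combo, best_value = None, float("-inf")
    for combo in combinations(schedules, n_resources):
        covered = set().union(*(schedules[s] for s in combo))
        value = sum(duals.get(t, 0.0) for t in covered)
        if value > best_value:
            best_combo, best_value = combo, value
    return best_combo, best_value

duals = {"t1": 0.875, "t2": 0.125, "t3": 0.5, "t4": 0.25}
combo, value = best_joint_schedule(duals, 2)
print(combo, value)  # {s1, s3} covers every target on this instance
```

The master would then re-solve with the returned column added, and the loop repeats until no schedule with positive reduced value remains.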

    23. Results: IRIS III
    [Chart: runtime comparison of IRIS II vs. Branch & Price (IRIS III)]

    24. Modeling Complex Security Games
    • Approach: domain experts supply the model
    • Experts must understand the necessary game inputs
      • What information is available? Sensitive?
      • Number of inputs must be reasonable (tens, not thousands)
    • What models can we solve computationally?

    25. Robustness
    • We do not know exactly:
      • Strategies/capabilities
      • Payoffs/preferences
      • Observation capabilities
      • …
    • How can we take this into account?
      • Richer models
      • Faster algorithms to solve these models
      • Sensitivity analysis

    26. Approximating Infinite Bayesian Games Idea: replace point estimates for attacker payoffs with Gaussian distributions
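One way to operationalize this idea is plain Monte Carlo: draw attacker "types" from the Gaussian payoff distributions and evaluate a candidate defender strategy against the resulting finite Bayesian game. Everything below (the two targets, the payoffs, the simple caught-penalty attacker model) is an assumed toy setup for illustration, not the algorithm from the slides:

```python
import random

random.seed(42)

mean_attacker_reward = {"t1": 5.0, "t2": 2.0}   # point estimates, now treated as means
sigma = 1.0                                      # assumed payoff uncertainty
attacker_penalty = -1.0                          # attacker payoff if caught at a covered target
coverage = {"t1": 0.7, "t2": 0.3}                # some fixed defender mixed strategy
defender_payoff = {"t1": (1.0, -5.0), "t2": (0.5, -2.0)}  # (covered, uncovered)

def sampled_defender_utility(n_types=10_000):
    total = 0.0
    for _ in range(n_types):
        # One attacker type = one Gaussian draw of the reward at each target.
        reward = {t: random.gauss(mu, sigma) for t, mu in mean_attacker_reward.items()}
        # The sampled type best-responds to the defender's coverage.
        attacked = max(reward, key=lambda t: coverage[t] * attacker_penalty
                                             + (1 - coverage[t]) * reward[t])
        d_cov, d_unc = defender_payoff[attacked]
        total += coverage[attacked] * d_cov + (1 - coverage[attacked]) * d_unc
    return total / n_types

print(round(sampled_defender_utility(), 3))
```

With these numbers the estimate hovers around -1.1, since the sampled types split their attacks between the two targets; shrinking sigma toward 0 recovers the original point-estimate game.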

    27. Attacker Surveillance: Stackelberg vs. Nash
    • Defender commits first; attacker conducts surveillance: Stackelberg (SSE)
    • Simultaneous-move game; attacker conducts no surveillance: mixed-strategy Nash (NE)
    • How should a defender compute her strategy?
    • For security games with SSAS: NE strategies = Minimax strategies, and the SSE strategy is among them
    [Venn diagram: set of defender strategies]

    28. Evaluation of Real-World Applications
    • "Application track" papers: beyond run-time and optimality
    • Difficult, and not "solved"!

    29. So how can we evaluate?
    • No 100% security; are we better off than previous approaches?
      • Models and simulations
      • Human adversaries in the lab
      • Expert evaluation
    • Systems in use for a number of years: internal evaluations
    • Future:
      • Newer domains that allow validation in the real world
      • Criminologists (TSA)

    30. Models & Simulations I

    31. Human Adversaries In the Lab

    32. Human Adversaries in the Lab
    • ARMOR: outperforms uninformed random, but not Maximin
    • COBRA: anchoring bias, "epsilon-optimal"

    33. Expert Evaluation I
    [News clippings, April 2008 and February 2009]
    LAX spokesperson, CNN.com, July 14, 2010: "Randomization and unpredictability is a key factor in keeping the terrorists unbalanced…. It is so effective that airports across the United States are adopting this method."

    34. Expert Evaluation II
    • Federal Air Marshals Service (May 2010): "We…have continued to expand the number of flights scheduled using IRIS….we are satisfied with IRIS and confident in using this scheduling approach."
    James B. Curren, Special Assistant, Office of Flight Operations, Federal Air Marshals Service

    35. What Happened at Checkpoints Before and After ARMOR (Not a Scientific Result!)
    Arrest data, January 2009:
    • January 3rd: loaded 9mm pistol
    • January 9th: 16 handguns, 4 rifles, 1 assault rifle; 1,000 rounds of ammo
    • January 10th: two unloaded shotguns
    • January 12th: loaded .22 cal rifle
    • January 17th: loaded 9mm pistol
    • January 22nd: unloaded 9mm pistol

    36. Deployed Applications: ARMOR, IRIS, GUARDS
    • Research challenges
      • Efficient algorithms: scale-up to real-world problems
      • Observability: adversary surveillance uncertainty
      • Human adversary: bounded rationality, observation power
      • …

    37. Future Work I
    • Adversary uncertainty (under review)
      • Payoffs/types
      • Surveillance
      • …
    • Protect networks (AAAI'10, under review)

    38. Future Work II:Game Theory and Human Behavior

    39. Experiment Results
    • PT = Prospect Theory
    • QRE = Quantal Response Equilibrium

    40. THANK YOU! tambe@usc.edu http://teamcore.usc.edu/security