
Presentation Transcript


Adversary/Defender Model of Risk of Terrorist Attacks using Belief

SAND 2006-1470C

Risk Symposium 2006

Risk Analysis for Homeland Security and Defense:

Theory and Application

March 20, 2006

John Darby

Sandia National Laboratories, 505-284-7862


Acknowledgments


  • Linguistic Model for Adversary

    • Logic Evolved Decision (LED) Methodology developed at Los Alamos National Laboratory by Terry Bott and Steve Eisenhawer

    • Belief/Plausibility measure of uncertainty added

  • Numerical Model for Defender

    • Uses work and suggestions from Jon Helton, Arizona State University

  • Belief/Plausibility with Fuzzy Sets

    • 1986 paper by Ronald Yager



  • Apply Belief/Plausibility Measure of Uncertainty to Evaluating Risk from Acts of Terrorism

  • Why Belief/Plausibility?

    • We have considerable epistemic (state of knowledge) uncertainty

      • Terrorist acts are not random events, but we have considerable epistemic uncertainty for evaluating them


Random vs. Intentional

  • Random event: Earthquake

    • Magnitude of earthquake independent of structures exposed to earthquake

    • Event is a “dumb” failure

  • Intentional Event: Terrorists Blow Up a Building

    • Are the consequences worth the effort to gather the resources needed to defeat any security systems in place and destroy the building?

    • Event is a choice by a thinking, malevolent adversary with significant resources

Safety Risk vs. Terrorist Risk

Figure: comparison of Safety Risk (Random) and Terrorist Risk (Intentional), with the Maximum Risk marked for each.


Belief/Plausibility for Epistemic Uncertainty

  • Toss a fair coin

    • Uncertainty is aleatory (random)

    • Probability Heads is ½

    • Probability Tails is ½

  • But if we do not know coin is fair

    • May be two-headed or two-tailed

    • Epistemic (state of knowledge) uncertainty

    • Insufficient information to assign Probability to Heads and Tails

    • Belief/Plausibility for Heads is 0/1

    • Belief/Plausibility for Tails is 0/1

  • With more information (actually tossing the coin) we can reduce Epistemic Uncertainty

  • For Fair Coin we cannot reduce aleatory uncertainty
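The coin example above can be sketched directly. A body of evidence assigns mass to focal elements (sets of outcomes); Belief of a set sums the mass of focal elements wholly contained in it, and Plausibility sums the mass of focal elements that intersect it. A minimal Python sketch (not the author's BeliefConvolution code):

```python
def bel(mass, a):
    """Belief in set a: total mass of focal elements wholly contained in a."""
    return sum(m for fe, m in mass.items() if fe <= frozenset(a))

def pl(mass, a):
    """Plausibility of set a: total mass of focal elements that intersect a."""
    return sum(m for fe, m in mass.items() if fe & frozenset(a))

# Total ignorance about the coin: all evidence on the whole set {H, T}
ignorant = {frozenset({"H", "T"}): 1.0}
print(bel(ignorant, {"H"}), pl(ignorant, {"H"}))   # 0 and 1, as on the slide

# Specific evidence (a known fair coin): Belief = Plausibility = Probability
fair = {frozenset({"H"}): 0.5, frozenset({"T"}): 0.5}
print(bel(fair, {"H"}), pl(fair, {"H"}))           # both 0.5
```

With specific evidence the interval collapses, illustrating the slide's point that Belief/Plausibility reduce to Probability.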


Belief and Plausibility

  • Belief / Plausibility form a Lower / Upper Bound for Probability

  • Belief is the least the probability can be, given the evidence

  • Plausibility is the most the probability can be, given the evidence

  • Similar to a confidence interval for a parameter of a probability distribution: there is a confidence measure that the parameter is in the interval, but exactly where in the interval is not known

  • Belief/Plausibility both reduce to Probability if Evidence is Specific


Probability is somewhere in the [Belief, Plausibility] interval


Fuzzy Sets for Vagueness

  • Consequences (Deaths) are “Major”

    • “Major” is fuzzy: between about 500 and about 5000 deaths
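A fuzzy set like “Major” is commonly represented with a trapezoidal membership function. In this sketch the core [500, 5000] comes from the slide, while the shoulder points 300 and 7000 are illustrative assumptions:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear ramps between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def major(deaths):
    # "Major": fully true between 500 and 5000 deaths, fading out at the edges
    # (300 and 7000 are assumed shoulder values, not from the presentation)
    return trapezoid(deaths, 300, 500, 5000, 7000)

print(major(1000))   # 1.0 (clearly Major)
print(major(400))    # 0.5 (partially Major)
```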

Adversary/Defender


  • Adversary (them)

  • Defender (us)

  • Adversary and Defender each have different goals and different states of knowledge

  • Risk = Threat x Vulnerability x Consequence

    • Defender goal: Minimize Risk with available resources

    • Adversary goal: Maximize Consequence with available resources (working assumption)

  • Adversary is the Threat

    • Epistemic Uncertainty for Vulnerability and Consequence

  • Defender knows Vulnerability and Consequence

    • Epistemic Uncertainty for Threat


Scenario and Dependence

  • Scenario defined as: Specific Target, Adversary Resources and Attack Plan

    • Resources are: attributes (number of weapons, etc.) and knowledge

  • Risk for a Scenario

    • Risk = f x P x C

    • f is frequency of scenario (number per year)

    • P is probability scenario is successful

    • C is consequence

  • P is conditional on scenario (adversary resources and security in place, both physical security and intelligence gathering)

  • C is conditional on scenario (target)

  • Risk is scenario dependent

    • Adversary has choice (not random event)

    • Risk must consider millions of scenarios


Defender Model for a Scenario

  • Risk = f x P x C

  • f, P, and C are random variables with uncertainty

  • Degrees of evidence assigned to f, P, and C based on state of knowledge

  • Convolution using Belief/Plausibility Measure of Uncertainty
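The convolution Risk = f x P x C can be sketched at the focal-element level: with interval-valued focal elements and noninteracting evidence, masses multiply and (since all quantities are nonnegative) intervals multiply endpoint-wise. The evidence values below are illustrative, not from the presentation:

```python
from itertools import product

def convolve(*variables):
    """Convolve interval-valued focal elements under multiplication.
    Each variable is a list of ((lo, hi), mass) pairs. Evidence is
    noninteracting, so masses multiply; intervals multiply endpoint-wise."""
    result = []
    for combo in product(*variables):
        lo = hi = mass = 1.0
        for (l, h), m in combo:
            lo *= l
            hi *= h
            mass *= m
        result.append(((lo, hi), mass))
    return result

# Illustrative evidence (assumed, for the sketch only):
f_ev = [((0.01, 0.1), 0.6), ((0.1, 1.0), 0.4)]   # scenario frequency, per year
p_ev = [((0.2, 0.5), 1.0)]                        # probability scenario succeeds
c_ev = [((500.0, 5000.0), 1.0)]                   # consequence, deaths
risk_ev = convolve(f_ev, p_ev, c_ev)              # 2 x 1 x 1 = 2 focal elements
```

Belief/Plausibility of any risk interval can then be read off the convolved focal elements with the usual subset/intersection sums.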


Example Result from Defender Model

Probability assumes evidence for an interval is uniformly distributed over the interval

Defender Ranking of Scenarios

Figure: scenarios ranked by decreasing expected value of deaths per year (f x P x C).

  • For Belief/Plausibility the Expected Value is an interval [Elow, Ehigh]; it reduces to a point (the Mean) for Probability

  • Rank by Ehigh, subrank by Elow
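The ranking rule can be sketched directly: Elow and Ehigh are mass-weighted sums of the interval endpoints, and scenarios sort by Ehigh with Elow as tiebreaker. The scenario evidence below is made up for illustration:

```python
def expected_interval(focal):
    """Expected value interval [Elow, Ehigh] from focal elements ((lo, hi), mass)."""
    elow = sum(m * lo for (lo, _), m in focal)
    ehigh = sum(m * hi for (_, hi), m in focal)
    return elow, ehigh

def rank_scenarios(scenarios):
    """Rank by Ehigh (descending), subrank by Elow (descending)."""
    def key(name):
        elow, ehigh = expected_interval(scenarios[name])
        return (ehigh, elow)
    return sorted(scenarios, key=key, reverse=True)

# Hypothetical expected-deaths-per-year evidence for three scenarios:
scenarios = {
    "A": [((1.0, 10.0), 1.0)],
    "B": [((5.0, 10.0), 1.0)],   # same Ehigh as A, larger Elow: ranks above A
    "C": [((0.0, 2.0), 1.0)],
}
print(rank_scenarios(scenarios))   # ['B', 'A', 'C']
```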

Next Level of Detail for Defender Ranking

Figure: expected value of likelihood (f) plotted against expected value of deaths (P x C).


Adversary Model

  • Use surrogate Adversary (Special Forces)

  • Adversary has Choice

    • All Variables of concern must be “OK” or

      we will pick another scenario

    • Recruit Insider? Not unless already placed

    • Large Team? Concern about being detected by Intelligence

    • Uncertainty?

      • Door was green yesterday, is red today…What else changed?

  • Variables for Adversary Decision are Not all Numeric

    • Consequence = Deaths x Economic Damage x Fear in Populace x Damage to National Security x Religious Significance x …..

    • Deaths and Economic Damage are numeric

    • Fear in Populace, Damage to National Security, and Religious Significance are not numeric


Adversary Model

  • Linguistic Model

  • Develop Fuzzy Sets for Each Variable

  • Develop Approximate Reasoning Rule Base for Linguistic Convolution of Variables to Reflect Scenario Selection Decision Process (LANL LED process)

  • We are not the Adversary, we try to think like the Adversary

    • Considerable Epistemic Uncertainty

    • Use Belief/Plausibility Measure of Uncertainty Propagated up the Rule Base


Adversary Model

  • Assume Adversary Goal is Maximize Expected Consequence

    • Expected Consequence ≡ P x C

    • Expected Consequence is Adversary estimate of Consequence, C, weighted by Adversary estimate of Probability of Success, P


Example of Adversary Model

  • Rule Base and Variables

    • Expected Consequence = Probability Of (Adversary) Success x Consequence

    • Probability Of Success = Probability Resources Required Gathered Without Detection x Probability Information Required Can Be Obtained x

      Probability Physical Security System can be Defeated

    • Consequence = Deaths x Damage To National Security

  • Fuzzy Sets

    • Expected Consequence = {No, Maybe, Yes}

    • Probability Of Success = {Low, Medium, High}

    • Consequence = {Small, Medium, Large}

    • Probability Resources Required Gathered Without Detection = {Low, Medium, High}

    • Probability Information Required Can Be Obtained = {Low, Medium, High}

    • Probability Physical Security System can be Defeated = {Low, Medium, High}

    • Deaths = {Minor, Moderate, Major, Catastrophic}

    • Damage To National Security = {Insignificant, Significant, Very Significant}


Example of Adversary Model

  • Part of Example Rule Base


Example of Adversary Model

  • Focal Elements (Evidence) for a Particular Scenario

    • Deaths: 0.8 to {Major, Catastrophic}, 0.2 to {Moderate, Major}

    • Damage To National Security: 0.1 to {Insignificant, Significant}, 0.9 to {Significant, Very Significant}

    • Probability Resources Required Obtained Without Detection: 0.7 to {Medium}, 0.3 to {Medium, High}

    • Probability Information Required Can Be Obtained: 0.15 to {Medium}, 0.85 to {Medium, High}

    • Probability Physical Security System Can Be Defeated: 1.0 to {Medium, High}
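Propagating this evidence through an approximate-reasoning rule base can be sketched as follows, restricted to the consequence side (Deaths x Damage To National Security) to stay small. The input evidence matches the slide, but the rule base here is a hypothetical fragment (the actual rule base is shown only in a figure). Note how the 2 x 2 = 4 input combinations aggregate onto just 3 output focal elements:

```python
from collections import defaultdict
from itertools import product

# Hypothetical rule base: (Deaths, Damage To National Security) -> Consequence
rules = {
    ("Moderate", "Insignificant"): "Small",
    ("Moderate", "Significant"): "Medium",
    ("Moderate", "Very Significant"): "Large",
    ("Major", "Insignificant"): "Medium",
    ("Major", "Significant"): "Large",
    ("Major", "Very Significant"): "Large",
    ("Catastrophic", "Insignificant"): "Large",
    ("Catastrophic", "Significant"): "Large",
    ("Catastrophic", "Very Significant"): "Large",
}

def propagate(deaths_ev, damage_ev):
    """Push evidence through the rule base. Evidence: list of (frozenset, mass).
    Each input combination maps to the set of rule outputs its members can reach;
    masses multiply (noninteracting) and merge on identical output sets."""
    out = defaultdict(float)
    for (dset, m1), (sset, m2) in product(deaths_ev, damage_ev):
        image = frozenset(rules[(d, s)] for d in dset for s in sset)
        out[image] += m1 * m2
    return dict(out)

# Evidence from the slide:
deaths_ev = [(frozenset({"Major", "Catastrophic"}), 0.8),
             (frozenset({"Moderate", "Major"}), 0.2)]
damage_ev = [(frozenset({"Insignificant", "Significant"}), 0.1),
             (frozenset({"Significant", "Very Significant"}), 0.9)]
consequence_ev = propagate(deaths_ev, damage_ev)   # 4 combinations -> 3 focal elements
```

Belief and Plausibility of, say, Consequence = Large then follow from the usual subset/intersection sums over `consequence_ev`.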

Example of Adversary Model

Figure: example adversary model results (three figure-only slides).


Adversary Ranking of Scenarios

  • Defender, thinking like the Adversary, Ranks by Plausibility

    • Rank scenarios based on the plausibility for the worst fuzzy set for expected consequence, “Yes” in the prior example, sub-ranked by plausibility of the next-worst fuzzy sets, “Maybe” and “No” in the prior example

  • Note: Actual Adversary using the Model would Rank by Belief

    • “We will not attempt a scenario unless we believe it will succeed”… Osama


Software Tools

  • Numerical Evaluation of Risk for Defender

    • BeliefConvolution Java code (written by author)

    • RAMAS RiskCalc

  • Linguistic Evaluation for Adversary

    • LinguisticBelief Java code (written by author)

    • LANL LEDTools


“Combinatorics” Issue for Convolution with Belief/Plausibility

  • Convolution using Belief/Plausibility must be done at the focal element level

    • Convolution of 20 variables, each with 3 focal elements, results in a variable with 3^20 focal elements

  • Need to Condense or Aggregate Degrees of Evidence for Large Scale Problems

    • Must “condense” or “aggregate” focal elements with repeated convolution of variables to reduce number of focal elements to manageable size
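One possible condensation scheme can be sketched as follows: widen each interval focal element to the smallest bin-aligned interval containing it, then merge identical intervals. Widening only loosens the [Belief, Plausibility] bounds, so it is conservative. This is an illustration under assumed bin edges, not the actual BeliefConvolution algorithm:

```python
from collections import defaultdict

def condense(focal, edges):
    """Condense interval focal elements onto bin edges: widen each interval to
    the smallest edge-aligned interval containing it, then merge duplicates."""
    out = defaultdict(float)
    for (lo, hi), m in focal:
        lo_bin = max(e for e in edges if e <= lo)
        hi_bin = min(e for e in edges if e >= hi)
        out[(lo_bin, hi_bin)] += m
    return dict(out)

edges = [0, 1, 10, 100, 1000]   # log10-style bins (assumed)
focal = [((0.3, 0.6), 0.25), ((0.2, 0.9), 0.25),
         ((2.0, 9.0), 0.25), ((50.0, 500.0), 0.25)]
condensed = condense(focal, edges)   # 4 focal elements merge down to 3
```

Applying this after each pairwise convolution keeps the focal-element count bounded by the number of bins, at the cost of somewhat wider Belief/Plausibility intervals.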

Aggregation in BeliefConvolution Code

Figure: evidence aggregated into bins (linear or log10).


Aggregation in LinguisticBelief Code

  • Aggregation is “Automatic” per Rule Base

    • Focal Elements for Inputs are on Fuzzy Sets For Input variables

    • Focal Elements for Output are on Fuzzy Sets of Output Variable

  • Happiness = Health x Wealth x Outlook on Life

    • Assume Health, Wealth, and Outlook on Life each have 3 focal elements: implies Health x Wealth x Outlook on Life has 27 focal elements

    • Happiness = {Not So Good, OK, Good}

    • Evidence on Happiness is aggregated on subsets of {Not So Good, OK, Good}, so Happiness has at most 2^3 = 8 focal elements

Assumptions


  • Focal Elements for Different Variables are Noninteracting for Convolution with Belief/Plausibility (independent for Probability)

    • SAND2004-3072, “Dependence in Probabilistic Modeling, Dempster-Shafer Theory, and Probability Boxes,” Ferson et al., Oct. 2004

  • Adversary Goal is to Maximize Expected Consequence

    • Don’t like this goal? Change the rule base!


Future Work

  • How to Consider Millions of Scenarios?

    • Binning and Screening Criteria

    • Can reason at any level if honest about uncertainty

      • What are # deaths from a terrorist attack? [0, 6x10^9]

    • Use binning/filtering process at successively finer levels of detail, but capture uncertainty at each step

  • Dependence? Risk will always be dependent on scenarios

  • Implement Monte-Carlo Sampling with Belief/Plausibility (work by Helton et al)

    • Eliminates problem of dependence from a repeated variable that cannot be factored

    • Automate evaluation of large scale problems

Summary


  • Adversary and Defender have Different States of Knowledge

  • Defender wants to Minimize Risk

  • Adversary wants to Maximize Expected Consequence (working assumption)

  • Adversary selection of Scenario(s) is not random

  • Large Epistemic Uncertainty for Defender

    • Use Belief/Plausibility Measure instead of Probability Measure

  • Adversary selects Scenario(s) with “high” ranking

    • Defender thinks like adversary using linguistics

    • Linguistic variables modeled with Fuzzy Sets

  • Adversary/Defender Models Developed

    • Implemented in Java software
