
Algorithms for Port of Entry Inspection: Finding Optimal Binary Decision Trees

Fred S. Roberts

Rutgers University


Happy 40th Birthday, Southeastern Conference

Happy 38th Southeastern Conference, Fred

Happy 40th Birthday, Southeastern Conference

In light of yesterday’s reminiscences, I wanted to see what would happen if I put “Southeastern Conference in Combinatorics, Graph Theory, and Computing” into Google.

In light of yesterday’s reminiscences, I wanted to see what would happen if I put “Fred Hoffman” into Google.


Port of Entry Inspection Algorithms

  • Goal: Find ways to intercept illicit nuclear materials and weapons destined for the U.S. via the maritime transportation system

  • Goal: “inspect all containers arriving at ports”

  • Even carefully inspecting 8% of containers in the Port of NY/NJ might bring international trade to a halt (Larrabbee 2002)


Port of Entry Inspection Algorithms

  • Aim: Develop decision support algorithms that will help us to “optimally” intercept illicit materials and weapons subject to limits on delays, manpower, and equipment

  • Find inspection schemes that minimize total “cost,” including the “cost” of false alarms (“false positives”) and failed alarms (“false negatives”)

Mobile Vacis: truck-mounted gamma ray imaging system


Port of Entry Inspection Algorithms

Me on a Coast Guard boat in a tour of the harbor in Philadelphia. Thanks to Capt. David Scott, Captain of Port, for taking us on the tour.


Sequential Decision Making Problem

  • Stream of containers arrives at a port.

  • The Decision Maker’s Problem:

    • Which to inspect?

    • Which inspections next based on previous results?

  • Approach:

    • “decision logics” – Boolean methods

    • combinatorial optimization methods

    • Builds on ideas of Stroud and Saeger at Los Alamos National Laboratory

    • Need for new models and methods


Sequential Diagnosis Problem

  • Such sequential diagnosis problems arise in many areas:

    • Communication networks (testing connectivity, paging cellular customers, sequencing tasks, …)

    • Manufacturing (testing machines, fault diagnosis, routing customer service calls, …)

    • Medicine (diagnosing patients, sequencing treatments, …)


Sequential Decision Making Problem

  • Containers arrive to be classified into categories.

  • Simple case: 0 = “ok”, 1 = “suspicious”

  • Inspection scheme: specifies which inspections are to be made based on previous observations


Sequential Decision Making Problem for Container Inspection

  • Containers have attributes, each in a number of states

  • Sample attributes:

    • Levels of certain kinds of chemicals or biological materials

    • Whether or not there are items of a certain kind in the cargo list

    • Whether cargo was picked up in a certain port


Sequential Decision Making Problem

  • Currently used attributes:

    • Does ship’s manifest set off an “alarm”?

    • What is the neutron or Gamma emission count? Is it above threshold?

    • Does a radiograph image come up positive?

    • Does an induced fission test come up positive?

Gamma ray detector


Sequential Decision Making Problem

  • We can imagine many other attributes.

  • The project I have worked on is concerned with general algorithmic approaches.

  • We seek a methodology not tied to today’s technology.

  • Detectors are evolving quickly.



Sequential Decision Making Problem

011001

F(011001)

If attributes 2, 3, and 6 are present, assign container to category F(011001).


Sequential Decision Making Problem

  • If there are two categories, 0 and 1 (“safe” or “suspicious”), the decision function F is a Boolean function.

  • Example:

  • F(000) = F(111) = 1, F(abc) = 0 otherwise

  • This classifies a container as positive iff it has none of the attributes or all of them.
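For concreteness, this two-category decision function is easy to transcribe into code. A minimal Python sketch (the bit-string encoding is ours, used only for illustration):

```python
def F(bits):
    """Decision function for the example above: classify a container
    from its attribute bit string.  Returns 1 ("suspicious") iff the
    container has none of the attributes or all of them."""
    return 1 if bits in ("000", "111") else 0

print(F("000"))  # 1
print(F("101"))  # 0
```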


Sequential Decision Making Problem

  • What if there are three categories, 0, ½, and 1?

  • Example:

  • F(000) = 0, F(111) = 1, F(abc) = 1/2 otherwise

  • This classifies a container as positive if it has all of the attributes, negative if it has none of the attributes, and uncertain if it has some but not all of the attributes.

  • I won’t discuss this case.


Sequential Decision Making Problem

  • Given a container, test its attributes until we know enough to calculate the value of F.

  • An inspection scheme tells us in which order to test the attributes to minimize cost.

  • Even this simplified problem is hard computationally.


Sequential Decision Making Problem

  • This assumes F is known.

  • Simplifying assumption: Attributes are independent.

  • At any point we stop inspecting and output the value of F based on outcomes of inspections so far.

  • Complications: May be precedence relations in the components (e.g., can’t test attribute a4 before testing a6).

  • Or: cost may depend on attributes tested before.

  • F may depend on variables that cannot be directly tested or for which tests are too costly.


Sequential Decision Making Problem

  • Such problems are hard computationally.

  • There are many possible Boolean functions F.

  • Even if F is fixed, the problem of finding a good classification scheme (to be defined precisely below) is NP-complete.

  • Several classes of Boolean functions F allow for efficient inspection schemes:

    • k-out-of-n systems

    • Certain series-parallel systems

    • Read-once systems

    • “regular” systems

    • Horn systems


Sensors and Inspection Lanes

  • n types of sensors measure presence or absence of the n attributes.

  • Many copies of each sensor.

  • Complication: different characteristics of sensors.

  • Entities come for inspection.

  • Which sensor of a given type to use?

  • Think of inspection lanes and waiting on line for inspection.

  • Besides efficient inspection schemes, could decrease costs by:

    • Buying more sensors

    • Changing allocation of containers to sensor lanes


Binary Decision Tree Approach

  • Sensors measure presence/absence of attributes: so 0 or 1

  • Use two categories: 0, 1 (safe or suspicious)

  • Binary Decision Tree:

    • Nodes are sensors or categories

    • Two arcs exit from each sensor node, labeled left and right.

    • Take the right arc when the sensor says the attribute is present, the left arc otherwise
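For illustration, a binary decision tree of this kind can be encoded in a few lines of Python (the `Sensor` class and `classify` helper are our own encoding, not from the talk). The example tree tests attribute a1 at the root and attribute a0 on its right arc, so a container is classified in category 1 iff both attributes are present:

```python
class Sensor:
    """Internal node of a binary decision tree: tests one attribute."""
    def __init__(self, attribute, left, right):
        self.attribute = attribute  # index of the attribute tested
        self.left = left            # subtree taken when the attribute is absent
        self.right = right          # subtree taken when the attribute is present

def classify(node, attributes):
    """Walk the tree, taking the right arc when the sensor reports the
    attribute present, the left arc otherwise; leaves are categories 0/1."""
    while isinstance(node, Sensor):
        node = node.right if attributes[node.attribute] else node.left
    return node

# Root tests a1; its right child tests a0; category 1 requires both present.
tree = Sensor(1, 0, Sensor(0, 0, 1))
print(classify(tree, (1, 1)))  # 1
print(classify(tree, (1, 0)))  # 0
```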



Binary Decision Tree Approach

Figure 1


Binary Decision Tree Approach

  • Reach category 1 from the root only through the path a1 to a0 to 1.

  • Container is classified in category 1 iff it has both attributes a0 and a1.

  • Corresponding Boolean function: F(11) = 1, F(10) = F(01) = F(00) = 0.

  • Note: Different tree, same function

Figure 1


Binary Decision Tree Approach

  • Reach category 1 from the root only through the path a0 to 1 or a0 to a1 to 1.

  • Container is classified in category 1 iff it has attribute a0 or attribute a1.

  • Corresponding Boolean function: F(11) = F(10) = F(01) = 1, F(00) = 0.

Figure 1


Binary Decision Tree Approach

  • Reach category 1 from the root by: a0 L to a1 R to a2 R to 1, or a0 R to a2 R to 1.

  • Container classified in category 1 iff it has a1 and a2 and not a0, or a0 and a2 (and possibly a1).

  • Corresponding Boolean function: F(111) = F(101) = F(011) = 1, F(abc) = 0 otherwise.

Figure 2



Binary Decision Tree Approach

Figure 3



Binary Decision Tree Approach


Binary Decision Tree Approach

  • Even if the Boolean function F is fixed, the problem of finding the “least cost” binary decision tree for it is very hard (NP-complete).

  • For small n = number of attributes, can try to solve it by trying all possible binary decision trees corresponding to the Boolean function F.

  • Even for n = 4, not practical. (n = 4 at Port of Long Beach-Los Angeles)

Port of Long Beach


Binary Decision Tree Approach

  • Promising Approaches:

  • Heuristic algorithms, approximations to optimal.

  • Special assumptions about the Boolean function F.

  • For “monotone” Boolean functions, integer programming formulations give promising heuristics.

  • Stroud and Saeger (Los Alamos National Lab) enumerate all “complete, monotone” Boolean functions and calculate the least expensive corresponding binary decision trees.

  • Their method practical for n up to 4, not n = 5.


Binary Decision Tree Approach

  • Monotone Boolean Functions:

  • Given two bit strings x1x2…xn, y1y2…yn.

  • Suppose that xi ≤ yi for all i implies that F(x1x2…xn) ≤ F(y1y2…yn).

  • Then we say that F is monotone.

  • Then 11…1 has the highest probability of being in category 1.


Binary Decision Tree Approach

  • Monotone Boolean Functions:

  • Given two bit strings x1x2…xn, y1y2…yn.

  • Suppose that xi ≤ yi for all i implies that F(x1x2…xn) ≤ F(y1y2…yn).

  • Then we say that F is monotone.

  • Example:

  • n = 4, F(x) = 1 iff x has at least two 1’s.

  • F(1100) = F(0101) = F(1011) = 1, F(1000) = 0, etc.


Binary Decision Tree Approach

  • Incomplete Boolean Functions:

  • Boolean function F is incomplete if F can be calculated by finding at most n-1 attributes and knowing the value of the input string on those attributes.

  • Example: F(111) = F(110) = F(101) = F(100) = 1, F(000) = F(001) = F(010) = F(011) = 0.

  • F(abc) is determined without knowing b (or c).

  • F is incomplete.


Binary Decision Tree Approach

  • Complete, Monotone Boolean Functions:

  • Stroud and Saeger: algorithm for enumerating binary decision trees implementing complete, monotone Boolean functions.

  • Feasible to implement up to n = 4.

  • Then you can find the least cost tree by enumerating all binary decision trees corresponding to a given complete, monotone Boolean function and repeating this for all complete, monotone Boolean functions.


Binary Decision Tree Approach

  • Complete, Monotone Boolean Functions:

  • Stroud and Saeger: algorithm for enumerating binary decision trees implementing complete, monotone Boolean functions.

  • n = 2:

    • There are 6 monotone Boolean functions.

    • Only 2 of them are complete, monotone.

    • There are 4 binary decision trees for calculating these 2 complete, monotone Boolean functions.
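The n = 2 counts are small enough to reproduce by brute force over all 16 truth tables. The enumeration below is our own check, not Stroud and Saeger's algorithm:

```python
from itertools import product

def enumerate_monotone(n):
    """Brute-force all 2**(2**n) truth tables and keep the monotone ones."""
    points = list(product((0, 1), repeat=n))
    pairs = [(i, j)
             for i, x in enumerate(points)
             for j, y in enumerate(points)
             if all(a <= b for a, b in zip(x, y))]
    return [table for table in product((0, 1), repeat=len(points))
            if all(table[i] <= table[j] for i, j in pairs)]

def is_complete(table, n):
    """Complete means the function depends on every one of the n variables."""
    points = list(product((0, 1), repeat=n))
    index = {x: i for i, x in enumerate(points)}
    for k in range(n):
        flip = lambda x: x[:k] + (1 - x[k],) + x[k + 1:]
        if all(table[index[x]] == table[index[flip(x)]] for x in points):
            return False  # variable k never changes the output
    return True

mono = enumerate_monotone(2)
complete = [t for t in mono if is_complete(t, 2)]
print(len(mono), len(complete))  # 6 2
```

The two complete, monotone functions found this way are AND and OR, matching the slide's count.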



Binary Decision Tree Approach


Binary Decision Tree Approach

  • Complete, Monotone Boolean Functions:

  • n = 4:

    • 114 complete, monotone Boolean functions.

    • 11,808 distinct binary decision trees for calculating them.

    • (Compare 1,079,779,602 BDTs for all Boolean functions.)


Binary Decision Tree Approach

  • Complete, Monotone Boolean Functions:

  • n = 5:

    • 6894 complete, monotone Boolean functions.

    • 263,515,920 corresponding binary decision trees.

  • Combinatorial explosion!

  • Need alternative approaches; enumeration not feasible!

  • (Even worse: compare 5 × 10^18 BDTs corresponding to all Boolean functions.)


Cost Functions

  • So far, we have figured one binary decision tree is cheaper than another if it has fewer nodes.

  • This is oversimplified.

  • There are more complex costs involved than number of sensors in a tree.


Cost Functions

  • Stroud-Saeger method applies to more sophisticated cost models, not just cost = number of sensors in the BDT.

  • Using a sensor has a cost:

    • Unit cost of inspecting one item with it

    • Fixed cost of purchasing and deploying it

    • Delay cost from queuing up at the sensor station

  • Preliminary problem: disregard fixed and delay costs. Minimize unit costs.


Cost Functions: Delay Costs

  • Tradeoff between fixed costs and delay costs: Adding more sensors cuts down on delays.

  • More sophisticated models describe the process of containers arriving.

  • There are differing delay times for inspections.

  • Use “queuing theory” to find average delay times under different models.


Cost Functions

  • Unit Cost Complication: How many nodes of the decision tree are actually visited during average container’s inspection? Depends on “distribution” of containers.

  • Answer can also depend on probability of sensor errors and probability of “bomb” in a container.


Cost Functions: Unit Costs, Tree Utilization

  • In our early models, we assume we are given probability of sensor errors and probability of bomb in a container.

  • This allows us to calculate “expected” cost of utilization of the tree Cutil.


Cost Functions

  • OTHER COSTS:

  • Cost of false positive: Cost of additional tests.

    • If it means opening the container, it’s expensive.

  • Cost of false negative:

    • Complex issue.

    • What is cost of a bomb going off in Manhattan?


Cost Functions: Sensor Errors

  • One Approach to False Positives/Negatives: Assume there can be Sensor Errors.

  • Simplest model: assume that all sensors checking for attribute ai have the same fixed probability of saying ai is 0 if in fact it is 1, and similarly saying it is 1 if in fact it is 0.

  • More sophisticated analysis later describes a model for determining probabilities of sensor errors.

  • Notation: X = state of nature (bomb or no bomb); Y = outcome (of sensor or entire inspection process).


Probability of Error for the Entire Tree

[Figure: binary decision tree with sensor nodes A, B, C and leaf categories 0 and 1]

State of nature is one (X = 1): presence of a bomb.

State of nature is zero (X = 0): absence of a bomb.

Probability of false positive (P(Y=1|X=0)) for this tree is given by:

P(Y=1|X=0) = P(YA=1|X=0) * P(YB=1|X=0) + P(YA=1|X=0) * P(YB=0|X=0) * P(YC=1|X=0)

Probability of false negative (P(Y=0|X=1)) for this tree is given by:

P(Y=0|X=1) = P(YA=0|X=1) + P(YA=1|X=1) * P(YB=0|X=1) * P(YC=0|X=1)
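The two formulas can be evaluated directly. A sketch for this particular tree shape (A at the root, B on A's right arc, C on B's left arc); the dictionary encoding and the numbers in the example are illustrative assumptions:

```python
def tree_error_probs(p):
    """Error probabilities for the tree above.
    p[(s, x)] = P(Y_s = 1 | X = x) for sensor s and state of nature x."""
    def p_positive(x):
        pa, pb, pc = p[('A', x)], p[('B', x)], p[('C', x)]
        # Y = 1 iff A says 1, and then either B says 1, or B says 0 but C says 1
        return pa * pb + pa * (1 - pb) * pc
    false_positive = p_positive(0)      # P(Y=1 | X=0)
    false_negative = 1 - p_positive(1)  # P(Y=0 | X=1)
    return false_positive, false_negative

p = {('A', 0): .10, ('A', 1): .90,
     ('B', 0): .01, ('B', 1): .99,
     ('C', 0): .001, ('C', 1): .999}
fp, fn = tree_error_probs(p)  # fp = 0.001099, fn = 0.100009 (up to rounding)
```

Note that 1 - p_positive(1) expands to exactly the false-negative formula above: the container is missed when A says 0, or when A says 1 but both B and C say 0.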


Cost Function Used for Evaluating the Decision Trees

CTot = CFalsePositive * PFalsePositive + CFalseNegative * PFalseNegative + Cutil

CFalsePositive is the cost of false positive (Type I error)

CFalseNegative is the cost of false negative (Type II error)

PFalsePositive is the probability of a false positive occurring

PFalseNegative is the probability of a false negative occurring

Cutil is the expected cost of utilization of the tree.


Cost Function Used for Evaluating the Decision Trees

CFalsePositive is the cost of false positive (Type I error)

CFalseNegative is the cost of false negative (Type II error)

PFalsePositive is the probability of a false positive occurring

PFalseNegative is the probability of a false negative occurring

Cutil is the expected cost of utilization of the tree.

PFalsePositive and PFalseNegative are calculated from the tree.

Cutil is calculated from the tree, the probability of a bomb in a container, and the probabilities of sensor errors.

CFalsePositive and CFalseNegative are input – given information.
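Put together, the evaluation is a one-line formula. A direct transcription (the numbers in the example call are illustrative placeholders, not values from the talk):

```python
def total_cost(c_false_positive, c_false_negative,
               p_false_positive, p_false_negative, c_util):
    """CTot = CFalsePositive*PFalsePositive + CFalseNegative*PFalseNegative + Cutil."""
    return (c_false_positive * p_false_positive
            + c_false_negative * p_false_negative
            + c_util)

# Illustrative inputs: a $450 false-positive cost, a $5B false-negative cost,
# error probabilities of the sort computed from a tree, and a small Cutil.
cost = total_cost(c_false_positive=450, c_false_negative=5e9,
                  p_false_positive=0.001, p_false_negative=1e-4, c_util=0.5)
```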


Stroud Saeger Experiments

  • Stroud-Saeger ranked all trees formed from 3 or 4 sensors A, B, C, and D according to increasing tree costs.

  • Used cost function defined above.

  • Values used in their experiments:

    • CA = .25; P(YA=1|X=1) = .90; P(YA=1|X=0) = .10;

    • CB = 10; P(YB=1|X=1) = .99; P(YB=1|X=0) = .01;

    • CC = 30; P(YC=1|X=1) = .999; P(YC=1|X=0) = .001;

    • CD = 1; P(YD=1|X=1) = .95; P(YD=1|X=0) = .05;

    • Here, Ci = unit cost of utilization of sensor i.

  • Also fixed were: CFalseNegative, CFalsePositive, P(X=1).


Sensitivity Analysis

  • When parameters in a model are not known exactly, the results of a mathematical analysis can change depending on the values of the parameters.

  • It is important to do a sensitivity analysis: let the parameter values vary and see if the results change.

  • So, do the least cost trees change if we change values like probability of a bomb, cost of a false positive, etc?


Stroud Saeger Experiments: Our Sensitivity Analysis

  • We have explored sensitivity of the Stroud-Saeger conclusions to variations in values of the three parameters:

    CFalseNegative, CFalsePositive, P(X=1)

  • Extensive computer experimentation.

  • Fascinating results.

  • To start, we estimated high and low values for the parameters.


Stroud Saeger Experiments: Our Sensitivity Analysis

  • CFalseNegative was varied between 25 million and 10 billion dollars

    • Low and high estimates of direct and indirect costs incurred due to a false negative.

  • CFalsePositive was varied between $180 and $720

    • Cost incurred due to a false positive (4 men * (3–6 hrs) * (15–30 $/hr))

  • P(X=1) was varied between 1/10,000,000 and 1/100,000


Stroud Saeger Experiments: Our Sensitivity Analysis

n = 3 (use sensors A, B, C)

  • Varied the parameters

    CFalseNegative, CFalsePositive, P(X=1)

  • We chose the value of one of these parameters from the interval of values

  • Then explored the highest ranked tree as the other two parameters were chosen at random in the interval of values.

  • 10,000 experiments for each fixed value.

  • We looked for the variation in the top-ranked tree and how the top-rank related to choice of parameter values.

  • Very surprising results.
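The structure of these randomized experiments can be sketched as a Monte Carlo loop. Everything below is a toy stand-in (two made-up cost functions instead of the real ranked trees), meant only to show the shape of the computation:

```python
import random

def sensitivity_experiment(trees, fixed, low, high, runs=10_000, seed=0):
    """Monte Carlo sketch of the randomized experiments: hold some parameters
    fixed, draw the rest uniformly from their ranges, and count how often
    each candidate tree attains rank 1 (lowest total cost)."""
    rng = random.Random(seed)
    wins = {name: 0 for name in trees}
    for _ in range(runs):
        params = dict(fixed)
        for k in low:
            if k not in params:
                params[k] = rng.uniform(low[k], high[k])
        best = min(trees, key=lambda name: trees[name](**params))
        wins[best] += 1
    return wins

# Toy stand-ins for the total-cost functions of two candidate trees.
trees = {
    'T7':  lambda c_fn, c_fp, p_bomb: c_fn * p_bomb * 0.10 + c_fp * 0.001 + 1.0,
    'T55': lambda c_fn, c_fp, p_bomb: c_fn * p_bomb * 0.05 + c_fp * 0.010 + 2.0,
}
wins = sensitivity_experiment(
    trees,
    fixed={'p_bomb': 1e-6},           # hold P(X=1) fixed at one value
    low={'c_fn': 25e6, 'c_fp': 180},  # ranges from the slides above
    high={'c_fn': 10e9, 'c_fp': 720},
)
```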


Frequency of Top-ranked Trees when CFalseNegative and CFalsePositive are Varied

  • 10,000 randomized experiments (randomly selected values of CFalseNegative and CFalsePositive from the specified range of values) for the median value of P(X=1).

  • The above graph has frequency counts of the number of experiments when a particular tree was ranked first or second or third and so on.

  • Only three trees (7, 55 and 1) ever came first. 6 trees came second, 10 came third, 13 came fourth.


Frequency of Top-ranked Trees when CFalseNegative and P(X=1) are Varied

  • 10,000 randomized experiments for the median value of CFalsePositive.

  • Only 2 trees (7 and 55) ever came first. 4 trees came second. 7 trees came third. 10 and 13 trees came 4th and 5th respectively.


Frequency of Top-ranked Trees when P(X=1) and CFalsePositive are Varied

  • 10,000 randomized experiments for the median value of CFalseNegative.

  • Only 3 trees (7, 55 and 1) ever came first. 6 trees came second. 10 trees came third. 13 and 16 trees came 4th and 5th respectively.


Most Frequent Tree Groups Attaining the Top Three Ranks

[Figure: three binary decision trees over sensors A, B, C]

  • Trees 7, 9 and 10

All the three decision trees have been generated from the same Boolean function: 00000111 representing F(000)F(001)…F(111)

Both Tree 9 and Tree 10 are ranked second and third more than 99% of the times when Tree 7 is ranked first.


Most Frequent Tree Groups Attaining the Top Three Ranks

[Figure: three binary decision trees over sensors A, B, C]

  • Trees 55, 57 and 58

All three trees correspond to the same Boolean function: 01111111

Tree 57 is ranked second 96% of the time and tree 58 third 79% of the time when tree 55 is ranked first.


Most Frequent Tree Groups Attaining the Top Three Ranks

[Figure: three binary decision trees over sensors A, B, C]

  • Trees 1, 3, and 2

All three trees correspond to the same Boolean function: 00000001

Tree 3 is ranked second 98% of the time and tree 2 is ranked third 80% of the time when tree 1 is ranked first.


Most Frequent Tree Groups Attaining the Top Three Ranks

  • Challenge: Why so few trees?

  • Why these trees?

  • Why so few Boolean functions?

  • Why these Boolean functions?


Stroud Saeger Experiments: Sensitivity Analysis: 4 Sensors

  • Second set of computer experiments: n = 4

    (use sensors, A, B, C, D).

  • Same values as before.

  • Experiment 1: Fix values of two of CFalseNegative, CFalsePositive, P(X=1) and vary the third through its interval of possible values.

  • Experiment 2: Fix a value of one of CFalseNegative, CFalsePositive, P(X=1) and vary the other two.

  • Do 10,000 experiments each time.

  • Look for the variation in the highest ranked tree.


Stroud Saeger Experiments: Our Sensitivity Analysis: 4 Sensors

  • Experiment 1: Fix values of two of CFalseNegative, CFalsePositive, P(X=1) and vary the third.


CTot vs CFalseNegative for Ranked 1 Trees (Trees 11485 (9651) and 10129 (349))

Only two trees ever were ranked first, and one, tree 11485, was ranked first in 9651 out of 10,000 runs.


CTot vs CFalsePositive for Ranked 1 Trees (Tree no. 11485 (10000))

One tree, number 11485, was ranked first every time.


CTot vs P(X=1) for Ranked 1 Trees (Trees no. 11485 (8372), 10129 (488), 11521 (1056))

Three trees dominated first place. Trees 10201(60), 10225(17) and 10153(7) also achieved first rank but with relatively low frequency.


Tree Structure for Top Trees

[Figure: the two top-ranked binary decision trees over sensors a, b, c, d]

Tree number 11485

Boolean Expr: 0101011101111111

Tree number 10129

Boolean Expr: 0001011101111111

Note how close the Boolean expressions are


Most Frequent Tree Groups Attaining the Top Three Ranks

  • Same challenge as before: Why so few trees?

  • Why these trees?

  • Why so few Boolean functions?

  • Why these Boolean functions?


Stroud Saeger Experiments: Our Sensitivity Analysis: 4 Sensors

  • Experiment 2: Fix the values of one of CFalseNegative, CFalsePositive, P(X=1) and vary the others.


Stroud Saeger Experiments: Our Sensitivity Analysis: 4 Sensors

  • Experiment 2: Fix the values of one of CFalseNegative, CFalsePositive, P(X=1) and vary the others.

  • Similar results.


Conclusions from Sensitivity Analysis

  • Considerable lack of sensitivity to modification in parameters for trees using 3 or 4 sensors.

  • Very few optimal trees.

  • Very few Boolean functions arise among optimal and near-optimal trees.

  • Surprising results.


New Idea: Searching through a Generalized Tree Space

  • Sometimes adding more possibilities results in being able to do more efficient searches.

  • We expand the space of trees from those corresponding to Stroud and Saeger’s “Complete and Monotonic” Boolean Functions to “Complete and Monotonic” BDTs.

  • Advantages:

    • Unlike Boolean functions, BDTs may not have to consider all sensor inputs to give a final decision.

    • Allow more potentially useful trees to participate in the analysis

    • Help define an irreducible tree space for search operations


Revisiting Monotonicity

  • Monotonic Decision Trees

    • A binary decision tree will be called monotonic if all the left leaves are class “0” and all the right leaves are class “1”.

  • Example:

  a b c  F(abc)
  0 0 0  0
  0 0 1  0
  0 1 0  1
  0 1 1  1
  1 0 0  0
  1 0 1  1
  1 1 0  0
  1 1 1  1

[Figure: six binary decision trees over sensors a, b, c, all computing this Boolean function]

All these trees correspond to the same monotonic Boolean function.

Only one is a monotonic BDT.


Revisiting Completeness

  • Complete Decision Trees

    • A binary decision tree will be called complete if every sensor occurs at least once in the tree and, at any non-leaf node in the tree, its left and right sub-trees are not identical.

  • Example:

  a b c  F(abc)
  0 0 0  0
  0 0 1  1
  0 1 0  1
  0 1 1  1
  1 0 0  0
  1 0 1  1
  1 1 0  1
  1 1 1  1

[Figure: four binary decision trees over sensors a, b, c, all computing this Boolean function]


The CM Tree Space

complete, monotonic BDTs


Tree Neighborhood and Tree Space

  • Define tree neighborhood by giving operations for moving from one tree in CM Tree Space to another.

  • We have developed an algorithm for finding low-cost BDTs by searching through CM Tree Space from a tree to one of its neighbors.


Search Operations in Tree Space

[Figure: the SPLIT operation, replacing a leaf with a new sensor node whose arcs lead to 0 and 1]

  • Split

    Pick a leaf node and replace it with a sensor that is not already present in that branch, and then insert arcs from that sensor to 0 and to 1.


Search Operations

[Figure: the SWAP operation, exchanging a non-leaf node with its parent]

  • Swap

    Pick a non-leaf node in the tree and swap it with its parent node such that the new tree is still monotonic and complete and no sensor occurs more than once in any branch.


Search Operations

  • Merge

    Pick a parent node of two leaf nodes and make it a leaf node by collapsing the two leaf nodes below it, or pick a parent node with one leaf node, collapse both the parent node and its one leaf node, and shift the sub-tree up in the tree by one level.

[Figure: the MERGE operation, collapsing a parent of leaf nodes into a single leaf]


Search Operations

[Figure: the REPLACE operation, substituting a different sensor at a node whose sensor is repeated]

  • Replace

    Pick a node with a sensor occurring more than once in the tree and replace it with any other sensor such that no sensor occurs more than once in any branch.


Tree Neighborhood and Tree Space

  • Define tree neighborhood by using these four operations for moving from one tree in CM Tree Space to another.

  • Irreducibility

    • Theorem: Any tree in the CM tree space can be reached from any other tree by using these neighborhood operations repetitively

    • An irreducible CM tree space helps “search” for the cheapest trees using neighborhood operations


Tree Neighborhood and Tree Space

Sketch of Proof of the Theorem:

  • Simple Tree:

    – A simple tree is defined as a CM tree in which every sensor occurs exactly once in such a way that there is exactly one path in the tree with all sensors in it.


Tree Neighborhood and Tree Space

Sketch of Proof of the Theorem:

  • To Prove: Given any two trees τ1, τ2 in CM tree space, τ2 can be reached from τ1 by a sequence of neighborhood operations

  • We prove this in three different steps:

    • 1. Any tree τ1 can be converted to a simple tree τs1

    • 2. Any simple tree τs1 can be converted to any other simple tree τs2

    • 3. Any simple tree τs2 can be converted to any tree τ2


Tree Space Traversal

  • Naïve Idea: Greedy Search

    • Randomly start at any tree in the CM tree space

    • Find its neighboring trees using the above operations

    • Move to the neighbor with the lowest cost

    • Iterate until we find a minimum

  • Problem: The CM Tree space is highly multi-modal (more than one local minimum)!

  • Therefore, we implement a stochastic search algorithm with simulated annealing to find the best tree


Tree Space Traversal

  • Stochastic Search

    • Randomly start at any tree in CM space

    • Find its neighboring trees, and evaluate each one for its total cost

    • Select next move according to a probability distribution over the neighboring trees

  • To deal with the multimodality of the tree space, we introduce Simulated Annealing:

    • Make more random jumps initially, gradually decrease the randomness and finally converge at the overall minimum
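The stochastic search with simulated annealing can be sketched generically. Here the tree cost and neighbor generation are abstracted as callables, and the cooling-schedule values are illustrative defaults, not the parameters used in the experiments.

```python
import math
import random

def anneal(start, neighbors, cost, t0=1.0, cooling=0.95, steps=2000, seed=0):
    """Generic simulated-annealing search over a neighborhood structure.
    `neighbors(x)` returns a list of neighboring states, `cost(x)` a float.
    Schedule parameters are illustrative defaults."""
    rng = random.Random(seed)
    current, temp = start, t0
    best = current
    for _ in range(steps):
        cand = rng.choice(neighbors(current))
        delta = cost(cand) - cost(current)
        # Always accept downhill moves; accept uphill moves with
        # probability exp(-delta/temp), so early (hot) iterations jump
        # around freely and late (cold) iterations settle into a minimum.
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            current = cand
        if cost(current) < cost(best):
            best = current
        temp *= cooling  # gradually decrease the randomness
    return best

# Toy usage: minimize a multimodal function on the integers 0..99,
# standing in for trees in CM tree space with a +/-1 neighborhood.
f = lambda x: (x - 70) ** 2 / 100 + 3 * math.cos(x)
nbrs = lambda x: [max(0, x - 1), min(99, x + 1)]
print(anneal(50, nbrs, f))
```

For the tree search, `neighbors` would apply the Split, Swap, Merge, and Replace operations and `cost` would evaluate the total inspection cost of a tree.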


Results: Searching CM Tree Space

  • We successfully performed experiments for 3, 4, and 5 sensors

  • Results show improvement compared to the extensive search method. E.g., for 4 sensors (66,936 trees)

    • 100 different experiments were performed

    • Each experiment was started 10 times at random trees, and chains were formed by making stochastic moves in the neighborhood until a local minimum was found

    • Only 4890 trees were examined on average for every experiment

    • Global minimum was found 82 out of 100 times while the second best tree was found 10 times

    • The method found trees that were less costly than those found by earlier searches of BDTs corresponding to complete, monotonic Boolean functions.


Genetic Algorithms-based Approach

  • Structure-based neighborhood moves allow very short moves only. Therefore,…

  • Techniques like Genetic Algorithms and Evolutionary Techniques may suggest ways for getting more efficiently to better trees, given a population of good trees


Genetic Algorithms-based Approach

  • Started implementing genetic algorithms-based techniques for tree space traversal

  • Basically, we try to get “better” trees from the current population of “good” trees using the basic genetic operations on them:

    • Selection

    • Crossover

    • Mutation

  • Here, “better” decision trees correspond to lower cost decision trees than the ones in the current population (“good”).


Genetic Algorithms-based Approach

  • Selection:

    • Select a random initial population of N trees from CM tree space

  • Crossover:

    – Performed k times between every pair of trees in the current best population, bestPop


Genetic Algorithms-based Approach

  • For each crossover operation between two trees, we randomly select a node in each tree and exchange their subtrees

  • However, we impose certain restrictions on the selection of nodes, so that the resulting trees still lie in CM tree space
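Subtree crossover can be sketched as follows, again on dict-based trees. The path-selection heuristic and helpers are illustrative assumptions; a real implementation would additionally reject any swap whose offspring leave CM tree space (a repeated sensor on a branch, or loss of completeness or monotonicity).

```python
import random

# Trees are nested dicts {"sensor": s, "0": subtree, "1": subtree};
# leaves are the ints 0 and 1. Representation is an illustrative assumption.

def random_path(tree, rng):
    """Random path from the root down to some internal node."""
    path = []
    while isinstance(tree, dict) and rng.random() < 0.5:
        b = rng.choice(["0", "1"])
        if not isinstance(tree[b], dict):
            break  # don't descend into a leaf
        path.append(b)
        tree = tree[b]
    return tuple(path)

def get(tree, path):
    for b in path:
        tree = tree[b]
    return tree

def put(tree, path, subtree):
    for b in path[:-1]:
        tree = tree[b]
    tree[path[-1]] = subtree

def crossover(t1, t2, rng):
    """Exchange the subtrees rooted at randomly chosen nodes of t1 and t2.
    Validity checks against CM tree space are omitted in this sketch."""
    p1, p2 = random_path(t1, rng), random_path(t2, rng)
    if not p1 or not p2:
        return t1, t2  # a root was chosen: skip this swap
    s1, s2 = get(t1, p1), get(t2, p2)
    put(t1, p1, s2)
    put(t2, p2, s1)
    return t1, t2

rng = random.Random(1)
t1 = {"sensor": "a", "0": {"sensor": "b", "0": 0, "1": 1}, "1": 1}
t2 = {"sensor": "c", "0": 0, "1": {"sensor": "d", "0": 0, "1": 1}}
crossover(t1, t2, rng)
```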


Genetic Algorithms-based Approach

  • Mutation:

    • Performed after every m generations of the algorithm

    • We do two types of mutations:

      • 1. Generate all neighbors of the current best population and put them into the gene pool

      • 2. Replace a fraction of the trees of bestPop with random trees from the CM tree space


Genetic Algorithms-based Approach

  • Only ~1600 trees had to be examined to obtain the 10 best trees for 4 sensors!



Modeling Sensor Errors


Modeling Sensor Errors

  • Threshold Model:

    • Sensor i has discriminating power Ki and threshold Ti

    • Attribute present if counts exceed Ti

    • Seek threshold values that minimize the overall cost function, including costs of inspection and of false positives/negatives

    • Assume readings of category 0 containers follow a Gaussian distribution, and similarly for category 1 containers

    • Simulation approach


Probability of Error for Individual Sensors

[Figure: characteristics of a typical sensor — two Gaussian curves, P(Yi|X=0) and P(Yi|X=1), whose means are separated by the discriminating power Ki, with the threshold Ti marked between them.]

  • For the ith sensor, the type 1 (P(Yi=1|X=0)) and type 2 (P(Yi=0|X=1)) errors are modeled using Gaussian distributions.

    • State of nature X=0 represents absence of a bomb.

    • State of nature X=1 represents presence of a bomb.

    • Yi represents the outcome (count) of sensor i.

    • Σi is the variance of the distributions.

    • PD = prob. of detection, PF = prob. of false pos.


Modeling Sensor Errors

The probability of a false positive for the ith sensor is computed as:

P(Yi=1|X=0) = 0.5 erfc[Ti/(Σi√2)]

The probability of detection for the ith sensor is computed as:

P(Yi=1|X=1) = 0.5 erfc[(Ti−Ki)/(Σi√2)]

erfc = complementary error function: erfc(x) = Γ(1/2, x²)/√π

The following experiments were done using sensors A, B, and C with:

KA = 4.37; ΣA = 1

KB = 2.9; ΣB = 1

KC = 4.6; ΣC = 1

We then varied the individual sensor thresholds TA, TB, and TC from -4.0 to +4.0 in steps of 0.4. These values were chosen because they gave us an “ROC curve” for the individual sensors over a complete range of P(Yi=1|X=0) and P(Yi=1|X=1).
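With this Gaussian model, the per-sensor false-positive and detection probabilities follow directly from the complementary error function. A small sketch using sensor A's discriminating power as quoted above (Σi = 1); the function names are illustrative:

```python
import math

def p_false_positive(T, sigma=1.0):
    """P(Yi=1 | X=0): threshold exceeded although no bomb is present."""
    return 0.5 * math.erfc(T / (sigma * math.sqrt(2)))

def p_detection(T, K, sigma=1.0):
    """P(Yi=1 | X=1): threshold exceeded with a bomb present;
    the signal mean is shifted by the discriminating power K."""
    return 0.5 * math.erfc((T - K) / (sigma * math.sqrt(2)))

# Sensor A from the slide: K_A = 4.37, Sigma_A = 1.
K_A = 4.37
for T in [-4.0, 0.0, 4.0]:  # endpoints and midpoint of the threshold sweep
    print(T, p_false_positive(T), p_detection(T, K_A))
```

Sweeping T from -4.0 to +4.0 traces out the sensor's ROC curve: at low thresholds both probabilities approach 1, at high thresholds both approach 0.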


Frequency of First Ranked Trees for Variations in Sensor Thresholds

  • 68,921 experiments were conducted, as each Ti was varied through its entire range. (n = 3)

  • The above graph has frequency counts of the number of experiments when a particular tree was ranked first. There are 15 such trees. Tree 37 had the highest frequency of attaining rank one.


Modeling Sensor Errors

  • A number of trees ranking first in other experiments also ranked first here.

  • Similar results in case of n = 4.

  • 4,194,481 experiments.

  • 244 different trees were ranked first in at least one experiment.

  • Trees ranked first in other experiments also frequently appeared first here.

  • Conclusion: considerable insensitivity to change of threshold.


New Approaches to Optimum Threshold Computation

  • Extensive search over a range of thresholds has some practical drawbacks:

    • Large number of threshold values for every sensor

    • Large step size

    • Grows exponentially with the number of sensors (computationally infeasible for n > 4)

  • A non-linear optimization approach proves more satisfactory:

    • A combination of Gradient Descent and modified Newton’s methods


Problems with Standard Approaches

  • Gradient Descent Method:

    • Too small a step size results in a large number of iterations to reach the minimum

    • Too large a step size results in skipping over the minimum

  • Newton’s Method:

    • The convergence depends largely on the starting point. This method occasionally drifts in the wrong direction and hence fails to converge.

  • Solution: combination of gradient descent and Newton’s methods

  • This works well.
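One common way to combine the two methods can be sketched in one dimension: take a Newton step when the curvature is well-behaved and the step actually decreases the cost, and otherwise fall back to a gradient-descent step. This is a generic sketch under those assumptions, not the authors' code.

```python
def hybrid_minimize(f, df, d2f, x0, lr=0.1, tol=1e-8, max_iter=200):
    """Minimize a 1-D cost f given its first two derivatives.
    Newton steps are taken when safe (positive curvature and the step
    reduces f); otherwise plain gradient descent is used. Sketch only."""
    x = x0
    for _ in range(max_iter):
        g = df(x)
        if abs(g) < tol:
            break  # gradient (near) zero: at a stationary point
        h = d2f(x)
        if h > tol:
            cand = x - g / h       # Newton step
            if f(cand) < f(x):     # accept only if it actually helps
                x = cand
                continue
        x = x - lr * g             # gradient-descent fallback
    return x

# Toy cost with a single minimum at x = 2, standing in for the
# cost as a function of one sensor threshold.
f = lambda x: (x - 2.0) ** 4 + (x - 2.0) ** 2
df = lambda x: 4 * (x - 2.0) ** 3 + 2 * (x - 2.0)
d2f = lambda x: 12 * (x - 2.0) ** 2 + 2.0
print(hybrid_minimize(f, df, d2f, x0=-5.0))  # ≈ 2.0
```

The guard on the Newton step addresses the drift problem noted above: a Newton move in the wrong direction is rejected and replaced by a safe gradient step.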


Results: Threshold Optimization

  • Costs of false positives (CFalsePositive) and false negatives (CFalseNegative) and the prior probability of occurrence of a bad container, P(X=1), were fixed as medians of the min and max values given by Stroud and Saeger (the same as we used in earlier experiments)

  • We were able to converge to a (hopefully-close-to-minimum) cost every time with a modest number of iterations changing thresholds.


Results: Threshold Optimization

  • We were able to converge to a (hopefully-close-to-minimum) cost every time with a modest number of iterations changing thresholds. For example:

    • For 3 sensors, it took an average of 0.081 seconds (as opposed to 0.387 seconds using extensive search) to converge to a cost for all 114 trees studied

    • For 4 sensors, it took an average of 0.196 seconds (as opposed to more than 2 seconds using extensive search) to converge to a cost for all 66,936 trees studied

  • In each case, min cost attained with new algorithm was lower, and often much lower, than that attained with extensive search.


Results: Threshold Optimization

Many times the minimum obtained using the optimization method was considerably less than the one from the extensive search technique.


Closing Comments

  • Very few optimal trees; optimality insensitive to changes in parameters.

  • Extensive search techniques become practically infeasible beyond a very small number of sensors

  • Studying an irreducible tree space helps us to “search” for the best trees rather than evaluating all the trees for their cost

  • A new stochastic search algorithm allows us to search for optimum inspection schemes beyond 4 sensors successfully

  • Our new threshold optimization algorithms provide faster ways to arrive at a low tree cost; cost is lower and often much lower than in extensive search


Discussion and Future Work

  • Future Work: Explain why conclusions are so insensitive to variation in parameter values.

  • Future Work: Explore the structure of the optimal trees and compare the different optimal trees.

  • Future Work: Develop methods for approximating the optimal tree.

Pallet VACIS


Discussion and Future Work

  • Future work: In the Boolean function model, inferring the Boolean function from observations (partially defined Boolean functions)


    Discussion and Future Work


    Discussion and Future Work

    • Future work: Because of the rapid growth in number of trees in CM Tree Space when the number of sensors grows, it is necessary to try to reduce the number of trees we need to search through.

    • A notion of tree equivalence could be incorporated when the number of sensors goes beyond 5 or 6

    • We hope that incorporating this into our model will enable us to extend our model to a large number of sensors


    • Collaborators on this Work:

    • Saket Anand

    • David Madigan

    • Richard Mammone

    • Sushil Mittal

    • Saumitr Pathak

    • Research Support:

    • Dept. of Homeland Security University Programs

    • Domestic Nuclear Detection Office

    • Office of Naval Research

    • National Science Foundation

    • Los Alamos National Laboratory:

    • Rick Picard

    • Kevin Saeger

    • Phil Stroud


    This work has gotten me places I never thought I’d go.


    This work has also taken me to places to which I very much enjoy going.

