Ontology and Human Intelligences in Optimization and Fusion
Moises Sudit
October 28, 2013

Presentation Transcript
Gärdenfors Conceptual Spaces

Consider a situation where you are walking through the woods:

Associationist: travel through one small part at a time, recognize features (rocks, rivers, trees, etc.), learn as we go, and clear a path for the next time we travel…

Conceptual: an "overhead view" understanding of the geometry of the paths as the features come together (N, S, E, W)…

Symbolic: semantic street names and directions (left, right, etc.) are assigned to the paths, so we gain independence from the features…

Conceptual Spaces

[Diagram: a conceptual space consisting of domains D1, D2, …, DK, each containing property regions]

  • Overview
    • A conceptual space consists of a set of geometric domains and their associated metrics and corresponding similarity measures
    • A concept is a collection of property regions within these domains, the correlations (i.e., co-occurrences) between these properties, and their salience weights
    • Each concept is additionally characterized by a set of forbidden domain-property pairs
    • A query is a set of points, one in each domain, describing the observed object's attributes (a code sketch of these pieces follows below)
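A minimal sketch of these pieces in code, as an illustration of the formalism above rather than the deck's implementation (the class and function names are made up for this sketch, and property correlations are omitted):

```python
# Illustrative sketch: domains carry a similarity measure, concepts are
# salience-weighted property regions plus forbidden (domain, property) pairs,
# and a query is one point per domain.
from dataclasses import dataclass, field
from typing import Callable, Dict, Set, Tuple

@dataclass
class Domain:
    name: str
    similarity: Callable[[object, object], float]  # similarity measure on this domain

@dataclass
class Concept:
    name: str
    regions: Dict[str, Set[object]]        # domain name -> property region
    salience: Dict[str, float]             # domain name -> salience weight
    forbidden: Set[Tuple[str, object]] = field(default_factory=set)  # disallowed (domain, property) pairs

def score(concept: Concept, query: Dict[str, object], domains: Dict[str, Domain]) -> float:
    """Salience-weighted similarity of a query (one point per domain) to a concept."""
    total = 0.0
    for d, point in query.items():
        if (d, point) in concept.forbidden:
            return float("-inf")           # a forbidden pair rules the concept out
        if d in concept.regions and concept.regions[d]:
            best = max(domains[d].similarity(point, p) for p in concept.regions[d])
            total += concept.salience.get(d, 1.0) * best
    return total
```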
Introduction to Conceptual Spaces
  • Domains and Properties
    • # Wheels: 0, 4-6, >6
    • Armor Level: Light, Medium, Heavy
    • Amphibious: Yes, No
  • Concepts (# Wheels, Armor Level, Amphibious)
    • Tank (0, Heavy, No)
    • LAV (>6, Light, Yes)
    • Truck (4-6, Light, No)
    • Jeep (>6, Light, No)

[Diagram: the four concepts plotted against the three domain axes: # Wheels (0, 4-6, >6), Armor Level (Light, Medium, Heavy), Amphibious (Yes, No)]

  • Benefits
    • Allows similarities between objects to be calculated
    • More flexible than First-Order Logic
    • Transparent

Introduction To Conceptual Spaces

Observations (Wheels, Armor, Amphibious):

  • (4, Light, Yes)
  • (0, Heavy, No)
  • (4, Light, No)

[Diagram: the three observations plotted in the # Wheels x Armor Level x Amphibious space, with inter-observation distances d1 and d2 marked]

These observations are matched against the vehicle concepts in the sketch below.
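Worked in code, the observations above can be compared with the vehicle concepts from the previous slide; a crude 0/1 per-domain similarity is assumed here purely for illustration:

```python
# Match each observation to the concept with the most agreeing domains.
concepts = {
    "Tank":  {"# Wheels": "0",   "Armor Level": "Heavy", "Amphibious": "No"},
    "LAV":   {"# Wheels": ">6",  "Armor Level": "Light", "Amphibious": "Yes"},
    "Truck": {"# Wheels": "4-6", "Armor Level": "Light", "Amphibious": "No"},
    "Jeep":  {"# Wheels": ">6",  "Armor Level": "Light", "Amphibious": "No"},
}
observations = [
    {"# Wheels": "4-6", "Armor Level": "Light", "Amphibious": "Yes"},  # (4, Light, Yes)
    {"# Wheels": "0",   "Armor Level": "Heavy", "Amphibious": "No"},   # (0, Heavy, No)
    {"# Wheels": "4-6", "Armor Level": "Light", "Amphibious": "No"},   # (4, Light, No)
]
for obs in observations:
    # Count matching domains; ties are broken arbitrarily by dictionary order
    best = max(concepts, key=lambda c: sum(concepts[c][d] == v for d, v in obs.items()))
    print(obs, "->", best)
```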

Two Models for Conceptual Spaces
  • Single Observation Mathematical Model
    • Only one observation is made on one object
    • This object is compared to each individual concept in the library (world) to determine which it is most similar to
  • Multiple Observation Mathematical Model
    • Multiple observations are made from either a single sensor or multiple sensors
    • Observations may not necessarily be of the same object
      • This handles “The Association Problem” in Data Fusion
    • Each observation is compared to each concept to determine which it is most similar to
Single Observation Model

We prove a lemma showing that any finite set of mutually exclusive properties, regardless of size, can be broken into pairs.

Concept → encoded in the set of constraints

Observed object → appears only in the objective function (see the sketch below)
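A minimal sketch of how such an integer program might look, using assumed notation consistent with the bullets above (this is an illustration, not necessarily the exact formulation): binary variables x_dp select one property region p per domain d of the concept, the observed object enters only through the objective, and forbidden pairs are excluded by constraints.

```latex
\begin{align*}
\max_{x}\quad & \sum_{d \in D} \sum_{p \in P_d} w_{dp}\, s_d(o_d, p)\, x_{dp}
  && \text{similarity of the observed point } o_d \text{ to the selected regions}\\
\text{s.t.}\quad & \sum_{p \in P_d} x_{dp} = 1 \quad \forall d \in D
  && \text{one property region per domain of the concept}\\
& x_{dp} + x_{d'p'} \le 1 \quad \forall \bigl((d,p),(d',p')\bigr) \in F
  && \text{forbidden cross-domain property pairs}\\
& x_{dp} \in \{0,1\} \quad \forall d \in D,\ p \in P_d
\end{align*}
```

Here P_d is the concept's set of property regions in domain d, w_dp its salience weight, s_d the domain's similarity measure, and F the set of forbidden pairs. Note that the observation o appears only in the objective, while the concept is encoded entirely in the constraint data, matching the bullets above.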

Example: Decision Variables

We set up a library of 4 concepts (Bomb, Auto, Human, Gas Tank). Each utilizes some of the same domains/properties and some different ones.

We run them against 4 observed objects and see how our model works.

Example: Concepts

Auto:
  • Color: black, yellow
  • Shape: rectangular, short & round
  • Sound: humming
  • Relative Size: small
  • Motion: drives

Bomb:
  • Color: red, white, brown, black
  • Shape: rectangular, short & round
  • Sound: "boom", explosion
  • Relative Size: small

Example: Concepts (cont.)

Gas Tank:
  • Color: black, grey
  • Relative Size: medium
  • Smell: gaseous

Human:
  • Color: white, black
  • Shape: short & round, tall & thin
  • Relative Size: large
  • Motion: walks

(These four concepts are encoded as a small library in the sketch below.)
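For reference, the example concept library from these two slides encoded as plain property sets per domain; domains a concept does not use are simply absent:

```python
# Example concept library from the slides, as property sets per domain.
concept_library = {
    "Auto": {
        "Color": {"black", "yellow"},
        "Shape": {"rectangular", "short & round"},
        "Sound": {"humming"},
        "Relative Size": {"small"},
        "Motion": {"drives"},
    },
    "Bomb": {
        "Color": {"red", "white", "brown", "black"},
        "Shape": {"rectangular", "short & round"},
        "Sound": {"boom", "explosion"},
        "Relative Size": {"small"},
    },
    "Gas Tank": {
        "Color": {"black", "grey"},
        "Relative Size": {"medium"},
        "Smell": {"gaseous"},
    },
    "Human": {
        "Color": {"white", "black"},
        "Shape": {"short & round", "tall & thin"},
        "Relative Size": {"large"},
        "Motion": {"walks"},
    },
}
```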

Multiple-Observation Model (cont.)

  • Maximizes property similarities based on sensor reports
  • Constrains the number of properties selected in each domain
  • Forbids disallowed cross-domain property pairings
  • Allows only one concept to be selected for each observation
  • Constrains the number of objects being observed by the sensory system (see the sketch below)
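A minimal sketch of the association side of this model, using the open-source PuLP modeler in place of CPLEX (any MILP solver accepts the same structure); the similarity scores are placeholders, and the property-selection and forbidden-pair constraints are omitted for brevity:

```python
# Sketch: assign each observation to one concept, capping the number of
# distinct objects at m, while maximizing total property similarity.
import pulp

observations = ["obs1", "obs2", "obs3"]        # sensor reports (placeholders)
concepts = ["Tank", "LAV", "Truck", "Jeep"]    # concept library
m = 2                                          # assumed cap on distinct objects observed

# sim[o][c]: precomputed property similarity of observation o to concept c (placeholder values)
sim = {o: {c: 1.0 for c in concepts} for o in observations}

prob = pulp.LpProblem("multi_observation_conceptual_spaces", pulp.LpMaximize)
y = pulp.LpVariable.dicts("y", (observations, concepts), cat="Binary")  # obs o explained by concept c
z = pulp.LpVariable.dicts("z", concepts, cat="Binary")                  # concept c used at all

# Maximize total property similarity across all sensor reports
prob += pulp.lpSum(sim[o][c] * y[o][c] for o in observations for c in concepts)

# Exactly one concept selected for each observation
for o in observations:
    prob += pulp.lpSum(y[o][c] for c in concepts) == 1

# Link usage indicators and cap the number of distinct observed objects at m
for o in observations:
    for c in concepts:
        prob += y[o][c] <= z[c]
prob += pulp.lpSum(z[c] for c in concepts) <= m

prob.solve()
```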

What do we have?
  • A hybrid Conceptual Space / Integer Programming model that can:
    • Consider multiple observations by multiple sensors
    • Account for the pedigree of each sensor according to its ability to sense each specific property/domain
    • Change the number of allowed objects being observed (m)
  • All of these capabilities are captured within a single mathematical model using proven optimization techniques
  • How well does it work?
    • Emotion Recognition (compared against Support Vector Machine)
    • Automatic ICON Identification for CPOF
Emotion Recognition through Conceptual Spaces

[Diagram: example face images for Fear, Sadness, Anger, and Enjoyment, grouped into True Emotions and False Emotions]

We take the BB3 data and classify pictures into one of 8 concepts: 4 true emotions and 4 false emotions (attempted deceit).

Emotion Recognition
  • Process
    • Images are obtained and analyzed automatically in terms of facial features
    • Facial features are considered in classification of images into emotions, both true emotions and falsified emotions
  • Parts of the Process
Emotion Recognition

Several Major Components (MCs) combine to form Action Units (AUs).

The presence of several Action Units at the same time defines an emotion.

Measurable features are calculated from distances between certain facial points, as determined by automated systems.

[Diagram: Major Components (e.g., wrinkles, crow's feet, compressed lips) map to Action Units (AUi, AUj, AUk, AUl), which in turn map to the emotions Anger, Enjoyment, Fear, and Sadness; a toy version of this lookup is sketched below]
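A toy version of that MC → AU → emotion lookup; the specific MC names and AU/emotion mappings below are illustrative placeholders, not the coding used with the BB3 data:

```python
# Hypothetical sketch: MCs detected in an image activate AUs, and the
# simultaneous presence of AUs suggests candidate emotions.
observed_mcs = {"crows_feet", "lips_compressed"}   # MCs detected in one image (example)

# Each AU is suggested by a small set of MCs; here an AU fires if all its MCs are present
au_definitions = {
    "AU_i": {"wrinkles", "lips_compressed"},
    "AU_j": {"crows_feet"},
}

# Emotions are defined by the simultaneous presence of several AUs
emotion_definitions = {
    "Anger":     {"AU_i"},
    "Enjoyment": {"AU_j"},
}

active_aus = {au for au, mcs in au_definitions.items() if mcs <= observed_mcs}
candidate_emotions = [e for e, aus in emotion_definitions.items() if aus <= active_aus]
print(active_aus, candidate_emotions)   # {'AU_j'} ['Enjoyment']
```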

Conceptual Spaces – Classification Model

The table below shows which properties exist in which concepts. Some properties cannot exist together; the constraints handle these cases.

Observations & Model Results
  • Observations
    • Taken from the BB3 Dataset (CUBS) – 344 images analyzed
    • Since many images produced the same MC values, they were consolidated into 49 observations
    • MCs either occur or they do not {0, 1}
    • Each AU contains anywhere from 1 to 4 MCs that suggest the AU is occurring
  • Model Results
    • Of the 49 observations, 7 were conflicting and were removed
    • The remaining 42 observations were run against the 8 concept definitions as an IP through CPLEX
    • Objective Value = 291.50
    • Solution Time = 0.13 seconds
    • Of the 42 observations, all 42 were classified correctly!
Multi-Class SVM – Classification Model

The 42 observations are split into a Training Set of 27 observations and an Experiment Set of 15 observations.

Using "SVM Light" (Joachims, T., SVM-Light Multi-Class, 2007, Cornell University)

  • Output in Table:
  • Value = # correct (of 15)
  • Time = training time (if > 1.0 sec)

(A scikit-learn stand-in for this setup is sketched below.)
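For orientation, a sketch of such an SVM baseline using scikit-learn's SVC as a stand-in for SVM-Light Multi-Class, with the 3rd-degree polynomial kernel and C = 1000 reported later in the deck; the feature matrix is a random placeholder, not the BB3 observations:

```python
# Multi-class SVM baseline sketch (scikit-learn in place of SVM-Light).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(42, 20)).astype(float)  # 42 observations of binary MC features (placeholder)
y = rng.integers(0, 8, size=42)                       # labels for the 8 emotion concepts (placeholder)

# 27 training observations / 15 experiment observations, as in the slide
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=27, random_state=0)

clf = SVC(kernel="poly", degree=3, C=1000)
clf.fit(X_train, y_train)
print("correct (of 15):", int((clf.predict(X_test) == y_test).sum()))
```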
Conceptual Spaces – Classification Model

Used the averages of the x_i1 and x_i2 values to "train" the concepts below.

SVM’s v. Conceptual Spaces
  • These should be used under different conditions.
    • SVM’s – No a priori knowledge, but trainable data is available
    • Conceptual Spaces – A priori knowledge available, no need to train
Multi-Class SVM – Classification Model

The 42 observations are used as both the Training Set (42 observations) and the Experiment Set (42 observations).

Using "SVM Light" (Joachims, T., SVM-Light Multi-Class, 2007, Cornell University)

  • Output in Table:
  • Value = # correct (of 42)
  • Time = training time (if > 1.0 sec)
SVM’s v. Conceptual Spaces
  • These should be used under different conditions.
    • SVM’s – No a priori knowledge, but trainable data is available
    • Conceptual Spaces – A priori knowledge available, no need to train
CS v. SVM Testing (Observation Dimensionality)

For SVMs, a 3rd-degree polynomial kernel and C = 1000 were used.

CS v. SVM Testing (Concept Dimensionality)

For SVMs, a 3rd-degree polynomial kernel and C = 1000 were used.

CPOF ICON Example: Project Overview (Command Post of the Future)

[Flow diagram: a Field Soldier's spoken report passes through Speech Recognition Software to text, then AeroText/Java Class Creation, INFERD (flagging known vs. unknown events), and the Conceptual Spaces Algorithm with a Filter and Domain Library, producing an Event Report and an INCIDENT ICON for the TOC Operator/Field Soldier. Intermediate feature strings shown include [A B C D E] 40%, A B2 C D3 E, B2 D2, D3, and [A B2 C D3 E]; a key distinguishes inputs from outputs.]

CPOF Event Icons

1. Bomb
2. Drive-by Shooting
3. Explosion
4. Grenade Attack
5. IED (Improvised Explosive Device)
6. Mortar Attack
7. Murder
8. Point of Impact
9. RPG (Rocket-Propelled Grenade)
10. Sniping
11. VBIED (Vehicle-Borne IED)
12. PBIED (Person-Borne IED)

Process Flow

Representative speech-to-text output, including confidence score:

"Shark 6, this is Oscar Two Delta. Contact Left. Over."
"Oscar Two Delta, this is Shark 6, over."
"Location - Mike Romeo 05742371, over."
"Roger, over."
"One WIA from pistol shot, estimate enemy force of 5, in pursuit, over."
"Heading south from CP1 on route 7 at high speed."
"Roger, 1 WIA. Over."
"Request QRF to location 38 SMB xxxxxyyyyy."
"Roger."
"Request immediate medevac at Checkpoint 2."
"Roger, deploying medical personnel. Over."

Fuzzy Context Search
  • In situ database
    • Communications Electronics Operations Instructions (CEOI)
    • Patrol Orders
    • Intelligence Preparation of the Battlefield
    • Call signs/code names
    • Channels
    • Location
    • Organizational constructs

Infuse implicit information
  • Shark 6 = Fallujah TOC
  • Oscar 2 D = CINC ACF A (Lt. Wayne Demerol's unit, 5 men)
  • Mike Romeo 05742371 = Grid 38SMB4284890215
  • 38 SMB xxxxxyyyyy = lat/long surface marker buoy
  • CP1 = Grid abcdefg, temporary checkpoint building
  • WIA = wounded in action
  • QRF = quick reaction force

Remove extraneous terms (over, roger, swear words)

Extract entities, events, relationships, and context; fuzzy matching of attributes and events against the mission background (in situ DB), past Spot/SIGACT reports, and the ICON DB; assign a confidence level; create the icon.

Created SPOT report:
  • date = 091349
  • event = direct fire
  • icon =
  • Attributes
    1. affiliation = hostile
    2. target = US
    3. weapon class = light
    4. WIA = 1
    5. confidence = .85
    6. translated text
  • Extracted entities and events
    • people = Wayne Demarol; people = enemy; number = 5
    • place = 38SMB4284890215; place = 38.889556, -77.0352546
    • organization = cinc acf a; organization = fallujah toc
    • event = injury (number = 1); event = pursuit; event = shooting
    • record = firearm; item = pistol
    • event = deploy medical personnel
  • Context: time = 1349; date = 05092005

(The implicit-information lookup is sketched in code below.)
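A minimal sketch of the "infuse implicit information" step as a lookup against the in situ database entries listed above (the dictionary values come from the slide; the function itself is illustrative):

```python
# Annotate call signs, grids, and acronyms in a transmission with their
# expansions from the in situ database.
in_situ_db = {
    "Shark 6": "Fallujah TOC",
    "Oscar Two Delta": "CINC ACF A (Lt. Wayne Demerol's unit, 5 men)",
    "Mike Romeo 05742371": "Grid 38SMB4284890215",
    "CP1": "Grid abcdefg, temporary checkpoint building",
    "WIA": "wounded in action",
    "QRF": "quick reaction force",
}

def infuse(utterance: str) -> str:
    """Annotate known terms with their expansions; unknown terms pass through."""
    for term, meaning in in_situ_db.items():
        utterance = utterance.replace(term, f"{term} [{meaning}]")
    return utterance

print(infuse("One WIA from pistol shot, estimate enemy force of 5, in pursuit"))
```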

Finishing the Example (cont.)

Most Likely Icon at End of Time Four (assigned by the Conceptual Spaces Algorithm):

  • EVENT REPORT (Weapon: Gun, Personnel: Group, Event: Ambush) → Sniping
  • EVENT REPORT (Weapon: Bomb, Personnel: None, Event: Explosion) → Bomb
  • EVENT REPORT (Weapon: Gun, Personnel: Group, Event: Skirmish) → Sniping
  • EVENT REPORT (Weapon: not given, Personnel: Vehicle, Event: not given) → VBIED

(A toy scoring sketch follows.)
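A toy sketch of scoring such event reports against icon concepts; the icon property sets below are hypothetical placeholders, not the project's actual Domain Library:

```python
# Toy sketch: count how many reported attributes fall inside each icon's
# property regions. Icon definitions are ILLUSTRATIVE, not from the project.
icon_concepts = {
    "Sniping": {"Weapon": {"Gun"}, "Personnel": {"Group"}, "Event": {"Ambush", "Skirmish"}},
    "Bomb":    {"Weapon": {"Bomb"}, "Personnel": {"None"}, "Event": {"Explosion"}},
    "VBIED":   {"Weapon": {"Bomb"}, "Personnel": {"Vehicle"}, "Event": {"Explosion"}},
}

def best_icon(report: dict) -> str:
    def hits(icon: str) -> int:
        return sum(value in icon_concepts[icon].get(domain, set())
                   for domain, value in report.items() if value is not None)
    return max(icon_concepts, key=hits)

print(best_icon({"Weapon": "Gun", "Personnel": "Group", "Event": "Ambush"}))  # -> Sniping
print(best_icon({"Weapon": None, "Personnel": "Vehicle", "Event": None}))     # -> VBIED
```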