
Moving Object Detection and Tracking for Intelligent Outdoor Surveillance

Assoc. Prof. Dr. Kannappan Palaniappan palaniappank@missouri.edu

Dr. Filiz Bunyak bunyak@missouri.edu

Dr. Sumit Nath naths@missouri.edu

Department of Computer Science

University of Missouri-Columbia

Visual Surveillance and Monitoring
  • Mounting video cameras is cheap, but finding available human resources to observe the output is expensive.

According to a study by the US National Institute of Justice:

      • A person cannot pay attention to more than 4 cameras.
      • After only 20 minutes of watching and evaluating monitor screens, the attention of most individuals falls below acceptable levels.
  • Although surveillance cameras are already prevalent in banks, stores, and parking lots, video data currently is used only "after the fact".

What is needed

  • Continuous 24-hour monitoring of surveillance video to alert security officers to a burglary in progress, or to a suspicious individual loitering in the parking lot, while there is still time to prevent the crime.
Intelligent Surveillance

A visual surveillance system combined with visual event detection methods to analyze movements, activities, and high-level events occurring in an environment.

  • The event recognition module detects unusual activities, behaviors, and events based on visual cues.
  • It sends an alarm to operators when a suspicious activity is detected.

Visual Event Detection Applications
Surveillance and Monitoring:

Security (parking lots, airports, subway stations, banks, lobbies etc.)

Traffic (track vehicle movements and annotate actions in traffic scenarios with natural-language verbs)

Commercial (understanding customer behavior in stores)

Long-Term Analysis (statistics gathering for infrastructure change, e.g., crowding measurement)

Broadcast Video Indexing: Sports video indexing for newscasters and coaches.

Interactive Environments: environments that respond to the activity of occupants

Robotic Collaboration: robots that can effectively navigate their environment and interact with people and other robots.

Medical:

Event based analysis of cell motility

Gait analysis, etc.

Event Types
Real Time Alarms

Low-level alarms:

Movement detectors, long-term change detectors, etc.

Feature-based spatial alarms: specific object detection in monitored areas

Behavior-related alarms: Anomalous trajectories, agitated behaviors, etc.

Complex event alarms: Detection of scenarios related to multiple relational events

Long Term and Large Scale Analysis

Learning the activity patterns of people or vehicles in a given environment over a long period of time can be used to:

retrieve events of interest

make projections

identify security holes

control traffic or crowds

make infrastructure decisions

monitor behavior patterns in urban environments

Issues in High-Level Video Analysis
1-Analysis:

Segmentation of motion blobs (background models, shadow).

Object tracking (prediction, correspondence, occlusion resolution etc.)

2-Representation:

Video object representations (shape, color descriptors, geometric models).

High-level event representations.

3-Access:

Efficient data structures for high-dimensional feature space.

Efficient and expressive query interface for query manipulation.

Visual Event Detection Framework

Processing chain: Motion Analysis → Feature Extraction → Object Classification → Event Detection → Events. Context (object, scene, and event libraries; constraints) informs classification and detection, which operate on objects, relationships, and events. Example events from the framework diagram: a right turn, a crossroad.

Our Current Capabilities

  • Moving Object Detection
  • Moving Object Tracking
  • Sudden Illumination Change Detection
  • Moving Cast Shadow Detection/Elimination
  • Trajectory Filtering and Discontinuity Resolution

Our Current Capabilities
  • Moving Object Detection - Using Mixture of Gaussians method or flux tensors
  • Moving Cast Shadow Elimination (combined photometric invariants)
  • Sudden Illumination Change Detection (combined photometric invariants)
  • Moving Object Tracking - Multi-hypothesis testing using appearance and motion
  • Trajectory filtering - Temporal consistency check, spatio-temporal cluster check
  • Discontinuity resolution - Kalman filter, appearance model (color and spatial layout)

Moving Object Detection

Goal: Segment moving regions from the rest of the image (background).

Rationale: Provide focus of attention for later processes such as tracking, classification, event detection/recognition.

Background Subtraction
  • By comparing incoming frames to a reference image (background model), regions in the incoming frame that have significantly changed are located.

Pipeline: Frames → Preprocessing → Feature Extraction → Comparison against the BG model (maintained by BG Modeling) → BG/FG Classification → Postprocessing → BG/FG masks.

  • Preprocessing: spatial smoothing, temporal smoothing, color space conversions
  • Features: luminance, color, edge maps, albedo (reflectance) image, intrinsic images, region statistics
  • Comparison: differencing, likelihood ratio tests
  • Classification: thresholding, clustering
  • Postprocessing: morphological filtering, connectivity analysis, color analysis, edge analysis, shadow elimination
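As a rough illustration of this pipeline (not the system described in these slides), the sketch below chains the stages in Python with OpenCV: preprocessing by smoothing, comparison against a running-average background model, threshold classification, and morphological postprocessing. The threshold, learning rate, and file handling are illustrative assumptions.

```python
import cv2
import numpy as np

def background_subtraction(video_path, alpha=0.02, thresh=25):
    """Minimal background-subtraction pipeline:
    preprocessing (blur) -> comparison (abs diff) ->
    classification (threshold) -> postprocessing (morphology)."""
    cap = cv2.VideoCapture(video_path)
    bg = None  # running-average background model (float32 grayscale)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Preprocessing: grayscale conversion and spatial smoothing
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (5, 5), 0).astype(np.float32)

        if bg is None:
            bg = gray.copy()          # bootstrap the background model
            continue

        # Comparison: per-pixel deviation from the background model
        diff = cv2.absdiff(gray, bg)

        # Classification: threshold into a binary FG/BG mask
        fg_mask = (diff > thresh).astype(np.uint8) * 255

        # Postprocessing: morphological filtering to remove speckle
        fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)
        fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_CLOSE, kernel)

        # Background modeling: update the model only where no motion was found
        bg = np.where(fg_mask == 0,
                      (1 - alpha) * bg + alpha * gray,
                      bg).astype(np.float32)

        cv2.imshow("foreground mask", fg_mask)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()
```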

Challenging Situations in Moving Object Detection

Moved objects: a background object that moved should not be considered part of the foreground forever after.

Gradual illumination changes alter the appearance of the background (e.g., time of day).

Sudden illumination changes alter the appearance of the background (e.g., cloud movements).

Periodic movement of the background: the background may fluctuate, requiring models that can represent disjoint sets of pixel values (e.g., waving trees).

Camouflage: a foreground object's pixel characteristics are similar to the modeled background.

Bootstrapping: a training period free of foreground objects is not always available.

Foreground aperture: when a homogeneously colored object moves, changes in its interior pixels cannot be detected.

Sleeping person: when a foreground object becomes motionless, it cannot be distinguished from the background.

Waking person: when an object initially in the background moves, both the object and the background appear to change.

Shadows: foreground objects' cast shadows appear different from the modeled background.
Background Model
  • Mixture of Gaussians Model
  • The recent history of each pixel, X(1),...,X(t), is modeled by a mixture of K Gaussian distributions.
  • Each distribution is characterized by its
    • mean μ,
    • variance σ², and
    • weight w (indicates what portion of the previous values were assigned to this distribution).

Figures: intensity history of the specified pixel and the fitted color distribution of the specified pixel.
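For a concrete starting point, OpenCV ships a Mixture-of-Gaussians background subtractor (MOG2) in the same spirit; the snippet below is a generic usage sketch, not the implementation behind these slides, and the parameter values and input filename are illustrative.

```python
import cv2

# Mixture-of-Gaussians background subtractor: each pixel's recent history
# is modeled by a mixture of Gaussians (mean, variance, weight).
mog = cv2.createBackgroundSubtractorMOG2(
    history=500,         # number of recent frames influencing the model
    varThreshold=16,     # squared Mahalanobis distance to match a Gaussian
    detectShadows=True,  # shadow pixels are marked with value 127
)

cap = cv2.VideoCapture("surveillance.avi")  # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = mog.apply(frame)   # 255 = foreground, 127 = shadow, 0 = background
    fg = (mask == 255).astype("uint8") * 255  # drop detected shadow pixels
    cv2.imshow("foreground", fg)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```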

Performance of Mixture of Gaussians Method

Handled (√) vs. not handled (X):

Moved objects √
Gradual illumination changes √
Sudden illumination changes X
Periodic movement of the background √X
Camouflage X
Bootstrapping √
Foreground aperture √
Sleeping person √
Waking person √
Shadows X

Since MoG is adaptive & multi-modal, it is robust to:

Gradual illumination changes

Repetitive motion of the background (such as waving trees)

Slow moving objects

Introduction and removal of scene objects (sleeping person & waking person problems)

When something is allowed to become part of the background, the original background color remains in the mixture until it becomes the least probable and a new color is observed.

Moving Object Detection using Flux Tensors

Figures: moving objects detected using flux tensors on a color image sequence and a thermal image sequence.

Input sequence obtained from OTCBVS Benchmark Dataset Collection

http://www.cse.ohio-state.edu/otcbvs-bench/
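The flux tensor responds to the temporal variation of the spatio-temporal image gradient. The sketch below is a rough NumPy approximation of that idea (squared temporal derivatives of the gradient, averaged over a local window), not the authors' implementation; the window size and the threshold in the usage comment are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def flux_tensor_trace(frames, window=5):
    """Rough sketch of the flux-tensor trace for a short grayscale
    sequence (T x H x W float array): temporal derivatives of the
    spatio-temporal gradient, squared and averaged over a local window.
    Large values indicate moving (fluxing) regions."""
    frames = np.asarray(frames, dtype=np.float32)
    # Spatio-temporal gradient (It along axis 0, Iy along axis 1, Ix along axis 2)
    It, Iy, Ix = np.gradient(frames)
    # Temporal derivatives of the gradient components
    Ixt = np.gradient(Ix, axis=0)
    Iyt = np.gradient(Iy, axis=0)
    Itt = np.gradient(It, axis=0)
    # Trace of the flux tensor, integrated over a local spatial window
    trace = Ixt**2 + Iyt**2 + Itt**2
    return uniform_filter(trace, size=(1, window, window))

# Usage (hypothetical): threshold the trace of the middle frame to get a motion mask.
# frames = np.stack([...])                # T x H x W grayscale stack
# motion_mask = flux_tensor_trace(frames)[frames.shape[0] // 2] > 100.0
```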

Shadow Problem

Figures: shadow artifacts (a cast shadow creates "new" objects, merges separate objects; a static shadow is also shown).

Shadow Detection by Combined Photometric Invariants for Improved Foreground Segmentation

Pipeline: the new frame and the BG model feed Moving Object Detection, which produces an FG mask. Within the FG mask, Identification of Darker Regions selects candidate shadow pixels; these are tested by Normalized Color Comparison and by Reflectance Ratio Comparison, each yielding a shadow mask. Combination and Post Processing merge the two shadow masks and remove them from the FG mask.
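A minimal sketch of the normalized-color branch of such a pipeline, assuming the common formulation that a shadow pixel is darker than the background but keeps roughly the same chromaticity; the function name, thresholds, and channel handling are illustrative, not taken from the slides.

```python
import numpy as np

def shadow_mask_normalized_color(frame, bg, fg_mask,
                                 dark_lo=0.4, dark_hi=0.95, chroma_tol=0.03):
    """Label a foreground pixel as shadow if it is darker than the
    background model but its chromaticity (r, g) = (R, G) / (R+G+B)
    is nearly unchanged. Thresholds are illustrative only."""
    frame = frame.astype(np.float32)
    bg = bg.astype(np.float32)

    # Identification of darker regions: intensity ratio within [dark_lo, dark_hi]
    lum_f = frame.sum(axis=2) + 1e-6
    lum_b = bg.sum(axis=2) + 1e-6
    ratio = lum_f / lum_b
    darker = (ratio > dark_lo) & (ratio < dark_hi)

    # Normalized color comparison: chromaticity change should be small
    chroma_f = frame[..., :2] / lum_f[..., None]
    chroma_b = bg[..., :2] / lum_b[..., None]
    chroma_ok = np.abs(chroma_f - chroma_b).max(axis=2) < chroma_tol

    # Shadow = foreground pixels that are darker with unchanged chromaticity
    return (fg_mask > 0) & darker & chroma_ok
```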
Combine the Masks

Problems with photometric invariants:

  • An invariant expression may not be unique to a particular material.
  • There may be singularities and instabilities for particular values (e.g., normalized color is not reliable around the black vertex).

For a robust result:

  • Combine results from two invariants based on two different properties
    • Normalized color : spectral properties.
    • Reflectance ratio: spatial properties.
    • At shadow boundaries, the same-illuminant assumption fails: neighboring pixels have different reflectance ratios, so shadow pixels are misclassified as foreground. Remedy: dilate the shadow mask.

Example: Intelligent Room Sequence

Figures: input image (frame #100) and MoG models #1 through #4.

Shadow Masks

Figures: reflectance ratio mask, normalized color mask, shadow mask, and post-processed shadow mask.

Foreground & Shadow Masks

Figures: foreground mask, post-processed foreground mask, shadow mask, and post-processed shadow mask.

Example: Walk-in Sequence

Figures: input frame (Walk-in #14) and MoG models 1 through 4.

Shadow Masks

Figures: normalized color mask, reflectance ratio mask, shadow mask, and post-processed shadow mask.

Foreground & Shadow Masks

Figures: foreground mask, post-processed foreground mask, shadow mask, and post-processed shadow mask.

Sudden Illumination Changes (Cloud Movements, Light Switch, etc.)

Sudden illumination changes completely alter the color characteristics of the background, thus increasing the deviation of background pixels from the background model in color- or intensity-based subtraction.

Result:

  • Drastic increase in false detections (in the worst case, the whole image appears as foreground).
  • This makes surveillance on partly cloudy days almost impossible.
Moving Object Tracking

Steps:

  • Predict locations of the current set of objects of interest.
  • Match predictions to actual measurements.
  • Update object states.
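A minimal sketch of this predict / match / update loop, assuming constant-velocity object dynamics and greedy nearest-neighbor association with a simple distance gate; the slides' method uses multi-hypothesis testing with appearance features, which this example does not attempt to reproduce.

```python
import numpy as np

class Track:
    """One tracked object: constant-velocity Kalman filter over (x, y, vx, vy)."""
    F = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                  [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)  # state transition
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)  # only position is measured
    Q = np.eye(4) * 0.1   # process noise (illustrative)
    R = np.eye(2) * 2.0   # measurement noise (illustrative)

    def __init__(self, centroid):
        self.x = np.array([centroid[0], centroid[1], 0.0, 0.0])
        self.P = np.eye(4) * 10.0

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                       # predicted position

    def update(self, z):
        y = z - self.H @ self.x                 # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def track_frame(tracks, detections, gate=30.0):
    """One iteration: predict -> associate (greedy nearest neighbor) -> update."""
    detections = [np.asarray(d, dtype=float) for d in detections]
    unmatched = set(range(len(detections)))
    for t in tracks:
        pred = t.predict()
        if not unmatched:
            continue
        j = min(unmatched, key=lambda i: np.linalg.norm(detections[i] - pred))
        if np.linalg.norm(detections[j] - pred) < gate:   # gating on proximity
            t.update(detections[j])
            unmatched.remove(j)
    # Unmatched detections start new tracks
    tracks.extend(Track(detections[i]) for i in unmatched)
```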

Tracking loop: Moving Object Detection & Feature Extraction → Data Association (Correspondence) → Update Object States → Prediction, which feeds back into data association; Context informs the loop.
Tracking (as a Dynamic State Estimator)

The dynamic system produces the system state; the measurement system produces noisy measurements; the state estimator combines them into a state estimate. Uncertainties enter as system noise and measurement noise.

  • System Error Sources (object or background models are often inadequate or inaccurate)
    • Agile motion
    • Distraction/clutter
    • Occlusion
    • Changes in lighting
    • Changes in pose
    • Shadow
  • Measurement Error Sources
    • Camera noise
    • Grabber noise
    • Compression artifacts
    • Perspective projection
  • States
    • Position
    • Appearance
      • Color
      • Shape
      • Texture, etc.
    • Support map
Our Tracking Method
Detection-based

Probabilistic

Features Used in Data Association:

Proximity

Appearance

Data Association Strategy: Multi-hypothesis testing

Gating Strategies: Absolute and Relative

Discontinuity Resolution:

Prediction (Kalman filter)

Appearance models

Filtering:

Temporal consistency check

Spatio-temporal cluster check

Trajectory Filtering
  • Some artifacts cannot be totally removed by image- or object-level processing.
  • These artifacts produce spurious trajectory segments.
Temporal Consistency Check

Source of the Problem: Segments resulting from

  • Temporarily fragmented parts of an object
  • Un-eliminated cast shadows

Effect: Short segments that split from or merge to a longer segment.

Proposed Solution: Pruning short split or merge segments by temporal consistency check.

Elimination of short disconnected segments is delayed until after discontinuity resolution.

Spatio-Temporal Cluster Check

Source of the Problem:

  • Repetitive motion of the background (e.g., moving branches or their cast shadows).
  • Specular reflections (e.g., reflections from car windshields).

Effect: Temporally consistent and spatially clustered trajectories.

Proposed Solution:

  • Average Displacement to Length Ratio (ADLR)
  • Diagonal to Length Ratio (DLR)
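Since the slides do not spell out the exact formulas, the sketch below shows one plausible reading of the two ratios for a trajectory given as a list of centroids; the thresholds in the usage comment are illustrative.

```python
import numpy as np

def trajectory_cluster_scores(points):
    """Hedged sketch of two cluster-check metrics for a trajectory given
    as an (N, 2) array of centroids. One plausible reading:
      ADLR - average displacement from the start point, over path length
      DLR  - bounding-box diagonal of the trajectory, over path length
    Spatially clustered (jittering) trajectories give small values."""
    pts = np.asarray(points, dtype=float)
    steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    length = steps.sum() + 1e-9                          # total path length
    adlr = np.linalg.norm(pts - pts[0], axis=1).mean() / length
    diag = np.linalg.norm(pts.max(axis=0) - pts.min(axis=0))
    dlr = diag / length
    return adlr, dlr

# Usage (hypothetical thresholds): prune trajectories with small scores,
# e.g. if adlr < 0.05 and dlr < 0.1, mark the trajectory as background clutter.
```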
Discontinuity Resolution

Discontinuities occur especially in low resolution outdoor sequences.

Source of the problem:

  • Temporarily undetected objects due to
    • Low contrast
    • Partial or total occlusions
  • Incorrect pruning in data association due to significant change in appearance or size caused by
    • Partial occlusion
    • Fragmentation
Discontinuity Resolution (continued)
Define source and sink locations where the objects are expected to appear and disappear.

Identify

Segdis: segments disappearing unexpectedly (at a non-sink location) -> possible start of a discontinuity.

Segapp: segments appearing unexpectedly (at a non-source location) -> possible end of a discontinuity.

Identify possible matches based on time constraint.

Use Kalman filter to predict future positions of disappearing and past positions of appearing segments.

Check direction and position consistencies on

Disappearing segment

Appearing segment

Joining segment

Check Color similarity.

Multiple possible matches for a single disappearing segment -> select the appearing segment that starts earliest.

Multiple possible matches for a single appearing segment -> select the disappearing segment that ends latest.

Match -> the appearing segment inherits the disappearing segment's label and propagates this new label to its children.

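A compact sketch of the matching step described above, assuming each segment carries a Kalman-predicted position and a color descriptor; the field names and thresholds are hypothetical, not from the slides.

```python
import numpy as np

def match_discontinuities(seg_dis, seg_app, max_gap=30, max_dist=40.0, max_color=0.2):
    """Match unexpectedly disappearing segments to unexpectedly appearing ones.
    seg_dis / seg_app are lists of dicts with keys 'end_t' / 'start_t' (frame
    index), 'pred_pos' / 'pos' (Kalman-predicted future / observed start
    position), and 'color' (appearance descriptor, e.g. a normalized color
    histogram). All field names and thresholds are illustrative."""
    matches = {}
    for i, d in enumerate(seg_dis):
        candidates = []
        for j, a in enumerate(seg_app):
            gap = a['start_t'] - d['end_t']
            if not (0 < gap <= max_gap):                   # time constraint
                continue
            if np.linalg.norm(np.asarray(a['pos']) - np.asarray(d['pred_pos'])) > max_dist:
                continue                                   # position consistency
            if np.linalg.norm(np.asarray(a['color']) - np.asarray(d['color'])) > max_color:
                continue                                   # color similarity
            candidates.append(j)
        if candidates:
            # Multiple possible matches: pick the appearing segment that starts earliest
            matches[i] = min(candidates, key=lambda j: seg_app[j]['start_t'])
    return matches  # appearing segment inherits the disappearing segment's label
```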
Challenges in Tracking for Visual Event Detection
Shadows

-false detections, shape distortions, merges

Sudden illumination changes (e.g., due to cloud movements)

-difficulty in object detection, especially on partly cloudy days

Glare from specular surfaces (e.g., car windshields)

-spurious detections and trajectory segments

Perspective distortion (objects far from the camera look smaller and appear to move more slowly)

-difficulty in filtering false detections

Occlusion

-discontinuities in trajectories

Poor video quality (low resolution, low color saturation)

-difficulty in moving object detection

-difficulty in appearance modeling


Some Experimental Results-1

Figures: a) all segments, b) pruned segments, c) predictions, d) after discontinuity resolution.

Some Experimental Results-2

Figures: a) all segments, b) pruned segments, c) predictions, d) after occlusion handling. (One tracked object is labeled "UPS".)

Some Experimental Results-3

Figures: a) all segments, b) pruned segments, c) predictions, d) after discontinuity resolution.

Potential Collaborations in Visual Event Detection
New moving object detection methods

Flux tensor (especially in the presence of global motion, clutter and illumination changes)

Weather (e.g., snow, rain, wind)

Trajectory analysis

Trajectory validation

Feature extraction

Trajectory annotation

Extraction of primitive events based on

Trajectory properties

Trajectory to trajectory interactions

Agent types

Complex event detection/recognition through temporal combination of primitive events

Hierarchical approach

Low-level : probabilistic methods

High-level : structural methods

Incorporation of learning into event modeling and recognition.

Video event mining
