
Moving Object Detection and Tracking for Intelligent Outdoor Surveillance




  1. Moving Object Detection and Tracking for Intelligent Outdoor Surveillance Assoc. Prof. Dr. Kannappan Palaniappan palaniappank@missouri.edu Dr. Filiz Bunyak bunyak@missouri.edu Dr. Sumit Nath naths@missouri.edu Department of Computer Science University of Missouri-Columbia

  2. Visual Surveillance and Monitoring
  • Mounting video cameras is cheap, but finding the human resources to observe the output is expensive. According to a study by the US National Institute of Justice:
  • A person cannot pay attention to more than 4 cameras.
  • After only 20 minutes of watching and evaluating monitor screens, the attention of most individuals falls below acceptable levels.
  • Although surveillance cameras are already prevalent in banks, stores, and parking lots, the video data is currently used only "after the fact".
  What is needed: continuous 24-hour monitoring of surveillance video that alerts security officers to a burglary in progress, or to a suspicious individual loitering in the parking lot, while there is still time to prevent the crime.

  3. Intelligent Surveillance
  A visual surveillance system combined with visual event detection methods to analyze movements, activities, and high-level events occurring in an environment.
  • The event recognition module detects unusual activities, behaviors, and events based on visual cues.
  • It sends an alarm to operators when a suspicious activity is detected.

  4. Visual Event Detection Applications
  • Surveillance and Monitoring:
    - Security (parking lots, airports, subway stations, banks, lobbies, etc.)
    - Traffic (track vehicle movements and annotate actions in traffic scenarios with natural-language verbs)
    - Commercial (understanding customer behavior in stores)
    - Long-Term Analysis (statistics gathering for infrastructure change, e.g. crowding measurement)
  • Broadcast Video Indexing: sports video indexing for newscasters and coaches.
  • Interactive Environments: environments that respond to the activity of occupants.
  • Robotic Collaboration: robots that can effectively navigate their environment and interact with other people and robots.
  • Medical: event-based analysis of cell motility, gait analysis, etc.

  5. Event Types
  Real-Time Alarms
  • Low-level alarms: movement detectors, long-term change detectors, etc.
  • Feature-based spatial alarms: specific object detection in monitored areas.
  • Behavior-related alarms: anomalous trajectories, agitated behaviors, etc.
  • Complex event alarms: detection of scenarios related to multiple relational events.
  Long-Term and Large-Scale Analysis
  Learning the activity patterns of people or vehicles in a given environment over a long period of time can be used to:
  • retrieve events of interest
  • make projections
  • identify security holes
  • control traffic or crowds
  • make infrastructure decisions
  • monitor behavior patterns in urban environments

  6. Issues in High-Level Video Analysis
  1. Analysis: segmentation of motion blobs (background models, shadows); object tracking (prediction, correspondence, occlusion resolution, etc.).
  2. Representation: video object representations (shape and color descriptors, geometric models); high-level event representations.
  3. Access: efficient data structures for high-dimensional feature spaces; an efficient and expressive query interface for query manipulation.

  7. Visual Event Detection Framework
  [Block diagram: motion analysis, feature extraction, object classification, and event detection modules, supported by context, constraints, and object/scene/event libraries; outputs include objects, relationships, and events such as "right turn" at a crossroad.]

  8. Controlled Environment versus Far-view Outdoor Surveillance

  9. Our Current Capabilities
  • Moving Object Detection
  • Moving Object Tracking
  • Sudden Illumination Change Detection
  • Moving Cast Shadow Detection/Elimination
  • Trajectory Filtering and Discontinuity Resolution

  10. Our Current Capabilities
  • Moving Object Detection – using the Mixture of Gaussians method or flux tensors
  • Moving Cast Shadow Elimination – combined photometric invariants
  • Sudden Illumination Change Detection – combined photometric invariants
  • Moving Object Tracking – multi-hypothesis testing using appearance and motion
  • Trajectory Filtering – temporal consistency check, spatio-temporal cluster check
  • Discontinuity Resolution – Kalman filter, appearance model (color and spatial layout)

  11. Moving Object Detection Goal: Segment moving regions from the rest of the image (background). Rationale: Provide focus of attention for later processes such as tracking, classification, event detection/recognition.

  12. Background Subtraction
  By comparing incoming frames to a reference image (the background model), regions in the incoming frame that have changed significantly are located.
  Pipeline: frames → preprocessing → feature extraction → comparison against the BG model → BG/FG classification → postprocessing → BG/FG masks, with BG modeling running alongside.
  • Preprocessing: spatial smoothing, temporal smoothing, color space conversions
  • Features: luminance, color, edge maps, albedo (reflectance) image, intrinsic images, region statistics
  • Comparison: differencing, likelihood ratioing
  • Classification: thresholding, clustering
  • Postprocessing: morphological filtering, connectivity analysis, color analysis, edge analysis, shadow elimination
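
To make the generic pipeline above concrete, here is a minimal background-subtraction sketch in Python (OpenCV + NumPy). It uses a simple running-average background model with differencing, thresholding, and morphological cleanup; it illustrates the pipeline on the slide, not the authors' implementation, and the kernel size, learning rate, and threshold are illustrative values.

```python
import cv2
import numpy as np

def bg_subtract_running_average(frames, alpha=0.02, thresh=25):
    """Minimal background subtraction: preprocess, compare to a running-average
    background model, threshold, and clean up with morphology."""
    bg = None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        gray = cv2.GaussianBlur(gray, (5, 5), 0)           # spatial smoothing (preprocessing)
        if bg is None:
            bg = gray.copy()                               # bootstrap the BG model
        diff = cv2.absdiff(gray, bg)                       # comparison (differencing)
        fg = (diff > thresh).astype(np.uint8) * 255        # classification (thresholding)
        kernel = np.ones((3, 3), np.uint8)
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)  # postprocessing (morphology)
        bg = (1 - alpha) * bg + alpha * gray               # adapt the BG model
        yield fg
```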

  13. Problems with Basic Methods

  14. Challenging Situations in Moving Object Detection
  • Moved objects: a background object that moved should not be considered part of the foreground forever after.
  • Gradual illumination changes alter the appearance of the background (time of day).
  • Sudden illumination changes alter the appearance of the background (cloud movements).
  • Periodic movement of the background: the background may fluctuate, requiring models that can represent disjoint sets of pixel values (waving trees).
  • Camouflage: a foreground object's pixel characteristics are similar to the modeled background.
  • Bootstrapping: a training period free of foreground objects is not always available.
  • Foreground aperture: when a homogeneously colored object moves, changes in its interior pixels cannot be detected.
  • Sleeping person: when a foreground object becomes motionless, it cannot be distinguished from the background.
  • Waking person: when an object initially in the background moves, both the object and the background appear to change.
  • Shadows: foreground objects' cast shadows appear different from the modeled background.

  15. Background Model: Mixture of Gaussians
  • The recent history of each pixel, X(1),...,X(t), is modeled by a mixture of K Gaussian distributions.
  • Each distribution is characterized by its mean μ, variance σ², and weight w (which indicates what portion of the previous values was assigned to this distribution).
  [Figure: color/intensity history and color distribution of a selected pixel.]
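
The per-pixel update below is a compact sketch of a Stauffer–Grimson-style Mixture-of-Gaussians background model for a single grayscale pixel. The simplified learning rate, the 2.5σ matching threshold, and the background weight threshold T are standard textbook choices, not values taken from the slides.

```python
import numpy as np

def mog_update(x, w, mu, var, alpha=0.01, match_k=2.5, init_var=225.0, T=0.7):
    """One Mixture-of-Gaussians update for a single grayscale pixel value x.
    w, mu, var are length-K arrays (weights, means, variances) for that pixel."""
    dist = np.abs(x - mu) / np.sqrt(var)
    matched = dist < match_k
    if matched.any():
        k = int(np.argmin(np.where(matched, dist, np.inf)))  # closest matching Gaussian
        w = (1 - alpha) * w
        w[k] += alpha                                         # matched component gains weight
        rho = alpha / max(w[k], 1e-6)                         # simplified learning rate
        mu[k] += rho * (x - mu[k])
        var[k] += rho * ((x - mu[k]) ** 2 - var[k])
    else:
        k = int(np.argmin(w / np.sqrt(var)))                  # replace least probable component
        mu[k], var[k], w[k] = x, init_var, alpha
    w /= w.sum()                                              # renormalize weights
    # The first B components (ranked by w/sigma) whose cumulative weight reaches T
    # model the background; a pixel matching none of them is foreground.
    order = np.argsort(-(w / np.sqrt(var)))
    B = int(np.searchsorted(np.cumsum(w[order]), T)) + 1
    is_foreground = (not matched.any()) or (k not in order[:B])
    return w, mu, var, is_foreground
```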

  16. Performance of the Mixture of Gaussians Method
  Moved objects: √ | Gradual illumination changes: √ | Sudden illumination changes: X | Periodic movement of the background: √/X | Camouflage: X | Bootstrapping: √ | Foreground aperture: √ | Sleeping person: √ | Waking person: √ | Shadows: X
  Since MoG is adaptive and multi-modal, it is robust to:
  • gradual illumination changes
  • repetitive motion of the background (such as waving trees)
  • slow-moving objects
  • introduction and removal of scene objects (the sleeping-person and waking-person problems): when something is allowed to become part of the background, the original background color remains in the mixture until it becomes the least probable and a new color is observed.

  17. Moving Object Detection using Flux Tensors
  [Example results: moving objects detected with flux tensors on a color image sequence and a thermal image sequence. Input sequences from the OTCBVS Benchmark Dataset Collection, http://www.cse.ohio-state.edu/otcbvs-bench/]
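
The flux-tensor idea can be illustrated with a short sketch: the trace of the flux tensor is the spatially averaged squared temporal derivative of the image gradient, which responds to genuinely moving structure rather than to static texture. The derivative filters, window size, and three-frame central difference below are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def flux_tensor_trace(frames, win=7):
    """Motion energy from three consecutive grayscale frames: the locally
    averaged squared temporal derivative of the spatial gradient (the trace
    of the flux tensor), estimated at the middle frame."""
    f_prev, _, f_next = (f.astype(np.float32) for f in frames)
    ix_p, iy_p = sobel(f_prev, axis=1), sobel(f_prev, axis=0)   # gradient at t-1
    ix_n, iy_n = sobel(f_next, axis=1), sobel(f_next, axis=0)   # gradient at t+1
    ixt = (ix_n - ix_p) / 2.0                                   # d/dt of Ix (central difference)
    iyt = (iy_n - iy_p) / 2.0                                   # d/dt of Iy
    return uniform_filter(ixt ** 2 + iyt ** 2, size=win)        # spatial averaging window

# A moving-object mask is then a simple threshold, e.g.:
#   mask = flux_tensor_trace((f0, f1, f2)) > tau
```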

  18. The Shadow Problem
  [Examples: a cast shadow creating a "new" object, a shadow merging separate objects, and a static shadow.]

  19. Shadow Detection by Combined Photometric Invariants for Improved Foreground Segmentation
  [Pipeline diagram: each new frame and the BG model feed moving object detection (FG mask) and identification of darker regions; normalized color comparison and reflectance ratio comparison each produce a shadow mask; the two masks are combined and post-processed into the final shadow mask.]

  20. Combining the Masks
  Problems with photometric invariants:
  • An invariant expression may not be unique to a particular material.
  • There may be singularities and instabilities for particular values (normalized color is not reliable around the black vertex).
  For a robust result, combine results from two invariants based on two different properties:
  • Normalized color: spectral properties.
  • Reflectance ratio: spatial properties.
  At shadow boundaries the same-illuminant assumption fails → different reflectance ratios for neighboring pixels → misclassification of shadow pixels as foreground → dilate the shadow mask.
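
A rough sketch of how the two invariants might be combined is shown below: foreground pixels are labeled as cast shadow only if they are darker than the background model, their chromaticity (normalized color) matches the background's, and the intensity ratio with a neighboring pixel (reflectance ratio) is preserved. The thresholds, the single right-hand-neighbor ratio test, and the conjunctive combination are simplifying assumptions, not the authors' exact formulation; per the slide, the resulting mask would also be dilated to cover shadow boundaries.

```python
import numpy as np

def shadow_mask(frame, bg, fg_mask, chroma_tol=0.02, ratio_tol=0.1,
                dark_lo=0.4, dark_hi=0.95):
    """Label foreground pixels as cast shadow using combined photometric
    invariants. frame/bg are HxWx3 arrays, fg_mask is an HxW boolean mask."""
    frame = frame.astype(np.float32) + 1e-6
    bg = bg.astype(np.float32) + 1e-6
    lum_f, lum_b = frame.sum(axis=2), bg.sum(axis=2)

    brightness = lum_f / lum_b
    darker = (brightness > dark_lo) & (brightness < dark_hi)   # darker than BG, but not black

    # (1) normalized color (spectral invariant): chromaticity preserved under shadow
    chroma_ok = np.abs(frame / lum_f[..., None]
                       - bg / lum_b[..., None]).max(axis=2) < chroma_tol

    # (2) reflectance ratio (spatial invariant): neighbor intensity ratio preserved
    rr_f = lum_f[:, :-1] / lum_f[:, 1:]
    rr_b = lum_b[:, :-1] / lum_b[:, 1:]
    ratio_ok = np.zeros_like(darker)
    ratio_ok[:, :-1] = np.abs(rr_f - rr_b) / rr_b < ratio_tol

    # combine both invariants; only pixels already flagged as foreground qualify
    return np.asarray(fg_mask, dtype=bool) & darker & chroma_ok & ratio_ok
```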

  21. Example: Intelligent Room Sequence
  [Figure: input image (frame #100) and the four MoG model components (models #1–#4).]

  22. Shadow Masks
  [Figure: reflectance ratio mask, normalized color mask, combined shadow mask, and post-processed shadow mask.]

  23. Foreground & Shadow Masks
  [Figure: foreground mask, post-processed foreground mask, shadow mask, and post-processed shadow mask.]

  24. Example: Walk-in Sequence
  [Figure: input frame (Walk-in #14) and the four MoG model components (models 1–4).]

  25. Shadow Masks
  [Figure: normalized color mask, reflectance ratio mask, combined shadow mask, and post-processed shadow mask.]

  26. Foreground & Shadow Masks
  [Figure: foreground mask, post-processed foreground mask, shadow mask, and post-processed shadow mask.]

  27. Sudden Illumination Changes (Cloud Movements, Light Switches, etc.)
  Sudden illumination changes completely alter the color characteristics of the background, thus increasing the deviation of background pixels from the background model in color- or intensity-based subtraction.
  Result:
  • A drastic increase in false detections (in the worst case, the whole image appears as foreground).
  • This makes surveillance on partly cloudy days almost impossible.
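
The slides do not spell out the sudden-illumination-change test, so the following is only a plausible sketch consistent with the combined-photometric-invariants theme: when a large fraction of the frame changes but most of those changed pixels still pass the chromaticity/reflectance-ratio tests (i.e. they look like an illumination effect rather than new objects), the frame is flagged as a sudden illumination change and the background model can be relearned instead of reporting foreground. Both fraction thresholds are illustrative assumptions.

```python
import numpy as np

def sudden_illumination_change(fg_mask, illum_like_mask,
                               min_changed_frac=0.4, illum_frac=0.7):
    """Flag a frame as a sudden global illumination change when much of it is
    'changed' but most changed pixels look like an illumination effect
    (they pass the photometric-invariant tests, cf. the shadow sketch above)."""
    fg = np.asarray(fg_mask, dtype=bool)
    illum = np.asarray(illum_like_mask, dtype=bool)
    if fg.mean() < min_changed_frac:                  # not enough of the frame changed
        return False
    return (illum & fg).sum() / fg.sum() > illum_frac
```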

  28. Moving Object Tracking
  Steps:
  • Predict locations of the current set of objects of interest.
  • Match predictions to actual measurements.
  • Update object states.
  [Tracking loop: moving object detection & feature extraction → data association (correspondence) → update object states → prediction, all informed by context.]

  29. Tracking (as a Dynamic State Estimator)
  [Diagram: the dynamic system produces the system state; the measurement system produces measurements; a state estimator combines them into state estimates with associated uncertainties, subject to system noise and measurement noise.]
  • System error sources (object or background models are often inadequate or inaccurate): agile motion, distraction/clutter, occlusion, changes in lighting, changes in pose, shadows.
  • Measurement error sources: camera noise, grabber noise, compression artifacts, perspective projection.
  • States: position; appearance (color, shape, texture, etc.); support map.

  30. Our Tracking Method
  • Detection-based, probabilistic.
  • Features used in data association: proximity, appearance.
  • Data association strategy: multi-hypothesis testing.
  • Gating strategies: absolute and relative.
  • Discontinuity resolution: prediction (Kalman filter), appearance models.
  • Filtering: temporal consistency check, spatio-temporal cluster check.
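
As a concrete (and much simplified) illustration of detection-based data association with proximity and appearance features, the sketch below gates candidate pairs by an absolute distance threshold and then solves a one-to-one assignment; this stands in for the multi-hypothesis testing named on the slide. The track/detection dictionaries, the histogram-intersection appearance score, and the cost weights are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, detections, gate=50.0, w_pos=1.0, w_app=10.0):
    """Match predicted track positions to detected object centroids using
    proximity and appearance (normalized color histograms), with an absolute
    distance gate. Each track has 'pred' (predicted x, y) and 'hist'; each
    detection has 'pos' and 'hist'."""
    if not tracks or not detections:
        return [], list(range(len(tracks))), list(range(len(detections)))
    cost = np.full((len(tracks), len(detections)), 1e9)
    for i, t in enumerate(tracks):
        for j, d in enumerate(detections):
            dist = np.linalg.norm(np.asarray(t['pred']) - np.asarray(d['pos']))
            if dist > gate:
                continue                                        # absolute gating
            app = 1.0 - np.minimum(t['hist'], d['hist']).sum()  # 1 - histogram intersection
            cost[i, j] = w_pos * dist + w_app * app
    rows, cols = linear_sum_assignment(cost)
    matches = [(i, j) for i, j in zip(rows, cols) if cost[i, j] < 1e9]
    matched_t = {i for i, _ in matches}
    matched_d = {j for _, j in matches}
    unmatched_tracks = [i for i in range(len(tracks)) if i not in matched_t]
    unmatched_dets = [j for j in range(len(detections)) if j not in matched_d]
    return matches, unmatched_tracks, unmatched_dets
```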

  31. Trajectory Filtering
  • Some artifacts cannot be completely removed by image- or object-level processing.
  • These artifacts produce spurious segments.

  32. Temporal Consistency Check
  Source of the problem: segments resulting from
  • temporarily fragmented parts of an object
  • un-eliminated cast shadows
  Effect: short segments that split from or merge into a longer segment.
  Proposed solution: prune short split or merge segments using a temporal consistency check. Elimination of short disconnected segments is delayed until after discontinuity resolution.
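
A minimal version of this pruning step might look like the following; the segment records with 'length', 'split_from', and 'merges_into' fields are an assumed illustrative structure, not the authors' data model.

```python
def prune_split_merge_segments(segments, min_len=10):
    """Remove short trajectory segments that split from or merge into another
    segment (typical of temporary fragmentation and residual cast shadows).
    Short *disconnected* segments are kept, since their removal is delayed
    until after discontinuity resolution."""
    keep = []
    for seg in segments:
        is_short = seg['length'] < min_len
        attached = seg.get('split_from') is not None or seg.get('merges_into') is not None
        if is_short and attached:
            continue                     # spurious split/merge fragment: prune
        keep.append(seg)
    return keep
```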

  33. Spatio-Temporal Cluster Check
  Source of the problem:
  • Repetitive motion of the background (e.g. moving branches or their cast shadows).
  • Specular reflections (e.g. reflections from car windshields).
  Effect: temporally consistent but spatially clustered trajectories.
  Proposed solution:
  • Average Displacement to Length Ratio (ADLR)
  • Diagonal to Length Ratio (DLR)
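
The slide names the two ratios but not their formulas, so the sketch below uses one plausible interpretation: both ratios compare a trajectory's spatial extent to its total path length, and both are small for trajectories that stay clustered in a small area despite accumulating a long path (waving branches, windshield glare). The exact definitions and thresholds are assumptions.

```python
import numpy as np

def spatio_temporal_cluster_check(traj, adlr_min=0.05, dlr_min=0.2):
    """Flag a trajectory as a likely spurious spatial cluster (waving branches,
    windshield glare) when both ratios are small. traj is an (N, 2) array of
    per-frame object centroids."""
    traj = np.asarray(traj, dtype=float)
    path_len = np.linalg.norm(np.diff(traj, axis=0), axis=1).sum() + 1e-9
    adlr = np.linalg.norm(traj - traj.mean(axis=0), axis=1).mean() / path_len
    dlr = np.linalg.norm(traj.max(axis=0) - traj.min(axis=0)) / path_len
    return adlr < adlr_min and dlr < dlr_min     # True => prune the trajectory
```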

  34. Discontinuity Resolution
  Discontinuities occur especially in low-resolution outdoor sequences.
  Source of the problem:
  • Temporarily undetected objects due to low contrast or partial/total occlusions.
  • Incorrect pruning in data association due to significant changes in appearance or size caused by partial occlusion or fragmentation.

  35. Discontinuity Resolution
  • Define source and sink locations where objects are expected to appear and disappear.
  • Identify:
    - Seg_dis: segments disappearing unexpectedly (at a non-sink location) → possible start of a discontinuity.
    - Seg_app: segments appearing unexpectedly (at a non-source location) → possible end of a discontinuity.
  • Identify possible matches based on a time constraint.
  • Use a Kalman filter to predict future positions of disappearing segments and past positions of appearing segments.
  • Check direction and position consistency on the disappearing segment, the appearing segment, and the joining segment.
  • Check color similarity.
  • Multiple possible matches for a single disappearing segment → select the appearing segment that starts earliest.
  • Multiple possible matches for a single appearing segment → select the disappearing segment that ends latest.
  • On a match, the appearing segment inherits the disappearing segment's label and propagates this new label to its children.
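
A simplified sketch of the matching step is given below. It keeps the time gate, the position/direction consistency check, the color-similarity check, and the earliest-appearing-segment rule from the slide, but uses constant-velocity extrapolation in place of the Kalman prediction and omits the latest-ending-disappearing-segment tie-break; the segment dictionaries are an assumed structure.

```python
import numpy as np

def resolve_discontinuities(disappearing, appearing, max_gap=30,
                            max_dist=40.0, min_color_sim=0.5):
    """Link trajectory segments across detection gaps. Each segment is a dict
    with 'start_t'/'end_t', 'start_pos'/'end_pos', 'velocity' (last estimate),
    'hist' (normalized color histogram) and 'label'."""
    matches = {}
    for d in disappearing:                                    # ends at a non-sink location
        best = None
        for a in appearing:                                   # starts at a non-source location
            gap = a['start_t'] - d['end_t']
            if gap <= 0 or gap > max_gap:
                continue                                      # time-constraint gate
            pred = np.asarray(d['end_pos']) + gap * np.asarray(d['velocity'])
            if np.linalg.norm(pred - np.asarray(a['start_pos'])) > max_dist:
                continue                                      # position/direction consistency
            if np.minimum(d['hist'], a['hist']).sum() < min_color_sim:
                continue                                      # color similarity
            if best is None or a['start_t'] < best['start_t']:
                best = a                                      # earliest-starting candidate wins
        if best is not None:
            matches[best['label']] = d['label']               # appearing segment inherits label
    return matches                                            # {appearing label: inherited label}
```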

  36. Challenges in Tracking for Visual Event Detection
  • Shadows – false detections, shape distortions, merges.
  • Sudden illumination changes (e.g. due to cloud movements) – difficulty in object detection, especially on partly cloudy days.
  • Glare from specular surfaces (e.g. car windshields) – spurious detections and trajectory segments.
  • Perspective distortion (objects far from the camera look smaller and appear to move more slowly) – difficulty in filtering false detections.
  • Occlusion – discontinuities in trajectories.
  • Poor video quality (low resolution, low color saturation) – difficulty in moving object detection and in appearance modeling.

  37. Some Experimental Results – 1
  [Figure panels: a) all segments, b) pruned segments, c) predictions, d) after discontinuity resolution.]

  38. Some Experimental Results – 2
  [Figure panels: a) all segments, b) pruned segments, c) predictions, d) after occlusion handling.]

  39. Some Experimental Results – 3
  [Figure panels: a) all segments, b) pruned segments, c) predictions, d) after discontinuity resolution.]

  40. Potential Collaborations in Visual Event Detection
  • New moving object detection methods:
    - flux tensors (especially in the presence of global motion, clutter, and illumination changes)
    - weather (e.g. snow, rain, wind)
  • Trajectory analysis: trajectory validation, feature extraction, trajectory annotation.
  • Extraction of primitive events based on: trajectory properties, trajectory-to-trajectory interactions, agent types.
  • Complex event detection/recognition through temporal combination of primitive events:
    - hierarchical approach: low-level probabilistic methods, high-level structural methods.
  • Incorporation of learning into event modeling and recognition.
  • Video event mining.

  41. Questions?
