
A Self-Organizing Approach to Background Subtraction for Visual Surveillance Applications


Presentation Transcript


  1. A Self-Organizing Approach to Background Subtraction for Visual Surveillance Applications Lucia Maddalena and Alfredo Petrosino, Senior Member, IEEE

  2. Abstract • Detection of moving objects in video streams is the first relevant step of information extraction in many computer vision applications. • Detecting moving objects provides a focus of attention for recognition, classification, and activity analysis. • The proposed approach can handle scenes containing moving backgrounds, gradual illumination variations, and camouflage, and has no bootstrapping limitations.

  3. Abstract • light changes: the background model should adapt to gradual illumination changes. • moving background: the background model should include changing background elements that are not of interest for visual surveillance, such as waving trees. • cast shadows: the background model should include shadows cast by moving objects, so that they are not detected as foreground. • bootstrapping: the background model should be properly set up even in the absence of a complete and static (free of moving objects) training set at the beginning of the sequence. • camouflage: moving objects should be detected even if their chromatic features are similar to those of the background model.

  4. Modeling the background by Self-Organization A. Initial Background Model • In order to represent each weight vector, we choose the HSV color space. This color space allows us to specify colors in a way that is close to the human experience of color. • Let (h, s, v) be the HSV components of the generic pixel (x, y) of the first sequence frame, and let C = (c1, c2, ..., cn²) be the model for pixel (x, y). Each weight vector ci is a 3-D vector initialized as ci = (h, s, v).

  5. Modeling the background by Self-Organization A. Initial Background Model • The complete set of weight vectors for all pixels of an image I with N rows and M columns is represented as a neuronal map A with n*N rows and n*M columns.
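The initialization of slides 4-5 can be sketched in a few lines of NumPy. This is a minimal sketch, assuming n = 3 and a floating-point HSV frame layout; the function name and array shapes are illustrative, not taken from the paper.

```python
import numpy as np

def init_background_model(first_frame_hsv, n=3):
    """Build the initial neuronal map A from the first HSV frame.

    Each pixel (x, y) contributes an n x n block of weight vectors,
    all initialized to that pixel's (h, s, v) triple, so the map has
    n*N rows and n*M columns for an N x M frame.
    """
    N, M, channels = first_frame_hsv.shape
    assert channels == 3
    # Repeat each pixel's HSV triple into an n x n block of weights.
    A = np.repeat(np.repeat(first_frame_hsv, n, axis=0), n, axis=1)
    return A  # shape: (n*N, n*M, 3)
```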

  6. Modeling the background by Self-Organization B. Subtraction and Update of the Background Model • Each incoming pixel pt of the sequence frame is compared to the current pixel model C. If a best-matching weight vector cm is found, it means that pt belongs to the background, and the model is reinforced. • If no match is found, the algorithm checks whether pt lies in the shadow cast by some object: shadow pixels are incorporated into the background model, while the remaining unmatched pixels are labeled as foreground.
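A sketch of the matching step follows, assuming a plain Euclidean distance in HSV space and a threshold eps; the paper's actual color distance and threshold value are not shown on this slide, so both are assumptions here.

```python
import numpy as np

def find_best_match(pixel_hsv, C, eps=0.1):
    """Return the map coordinates of the best-matching weight vector
    in the pixel model C (an n x n x 3 block of weight vectors), or
    None when no weight lies within distance eps of the pixel.
    """
    d = np.linalg.norm(C.reshape(-1, 3) - pixel_hsv, axis=1)
    m = int(np.argmin(d))
    if d[m] > eps:
        return None  # no match: run the shadow test, else foreground
    return np.unravel_index(m, C.shape[:2])
```

Pixels with a match are classified as background and trigger the model update of slide 9; unmatched pixels go through the shadow test of slide 11 before being labeled foreground.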

  7. Modeling the background by Self-Organization B. Subtraction and Update of the Background Model

  8. Modeling the background by Self-Organization

  9. Modeling the background by Self-Organization • If the best match for the current pixel is the weight vector cm, stored at some position of the neuronal map A, then the weight vectors that are updated according to (3) are the weight vectors of A in a neighborhood of cm, so that updating one pixel's model also reinforces the models of spatially close pixels.
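A sketch of this neighborhood update, assuming eq. (3) is the usual self-organizing running-average rule; the learning rate is kept flat here for simplicity, and the neighborhood half-width k and rate alpha are illustrative values.

```python
import numpy as np

def update_neighborhood(A, pt_hsv, ib, jb, k=1, alpha=0.05):
    """Reinforce the background model around the best match A(ib, jb):
    every weight vector in the (2k+1) x (2k+1) neighborhood is pulled
    toward the incoming pixel value p_t, clipped at the map borders.
    """
    rows, cols, _ = A.shape
    for i in range(max(0, ib - k), min(rows, ib + k + 1)):
        for j in range(max(0, jb - k), min(cols, jb + k + 1)):
            # Self-organizing update in the spirit of eq. (3).
            A[i, j] = (1.0 - alpha) * A[i, j] + alpha * pt_hsv
    return A
```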

  10. Modeling the background by Self-Organization

  11. Modeling the background by Self-Organization • The basic idea is that a cast shadow darkens the background: a pixel pt is classified as shadow with respect to a weight vector ci if γ ≤ ptV / ciV ≤ β, |ptS − ciS| ≤ τS, and |ptH − ciH| ≤ τH, where ptH, ptS, and ptV indicate the hue, saturation, and value components of pixel pt (and similarly for ci), and γ, β, τS, τH are fixed thresholds.
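The test above translates directly into code. This is a sketch with illustrative threshold values; gamma, beta, tau_s, and tau_h are not taken from the slide.

```python
def is_shadow(pt, ci, gamma=0.5, beta=0.95, tau_s=0.1, tau_h=0.1):
    """HSV shadow test: pt is a cast shadow of background weight ci
    if it is darker (smaller value V) while hue H and saturation S
    remain close. Threshold values here are illustrative only.
    """
    h_p, s_p, v_p = pt
    h_c, s_c, v_c = ci
    return (v_c > 0.0
            and gamma <= v_p / v_c <= beta
            and abs(s_p - s_c) <= tau_s
            and abs(h_p - h_c) <= tau_h)
```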

  12. Experimental Results 1) Sequence MSA: • Sequence MSA consists of 528 frames of 320×240 spatial resolution, acquired at a frequency of 30 fps.

  13. Experimental Results 2) Sequence Walk1: • Sequence Walk1 consists of 611 frames of 384×288 spatial resolution, captured at a frequency of 25 fps.

  14. Experimental Results

  15. Experimental Results Accuracy Metrics: • For measuring accuracy, we adopted different metrics, namely Recall (detection rate), Precision (positive prediction), F1, and Similarity.
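These metrics have standard definitions in terms of true positives (tp), false positives (fp), and false negatives (fn) at the pixel level; a small helper illustrating them (the function name is ours, not the paper's).

```python
def accuracy_metrics(tp, fp, fn):
    """Pixel-level detection metrics from true positives (tp),
    false positives (fp), and false negatives (fn)."""
    recall = tp / (tp + fn)              # detection rate
    precision = tp / (tp + fp)           # positive prediction
    f1 = 2 * precision * recall / (precision + recall)
    similarity = tp / (tp + fp + fn)     # overlap with ground truth
    return recall, precision, f1, similarity
```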

  16. Experimental Results

  17. Experimental Results
