Algorithms for cooperative multisensor surveillance

Presentation Transcript


  1. Algorithms for cooperative multisensor surveillance Page(1~13/22)

  2. IEEE JNL • Collins, R. T.; Lipton, A. J.; Fujiyoshi, H.; Kanade, T., "Algorithms for cooperative multisensor surveillance," Proceedings of the IEEE, Volume 89, Issue 10, Oct. 2001, Pages 1456–1477. Digital Object Identifier 10.1109/5.959341. Full Text: PDF (624 KB)

  3. Index(1/2) • Introduction • Large projects in the world • Multisensor surveillance challenges • Surveillance Testbed Background • Surveillance Testbed Preliminaries • Architecture • Surveillance Testbed • Control station integrating all sensors • Surveillance site model

  4. Index(2/2) • Moving object detection • Adaptive background subtraction • Layer detection • Pixel analysis • Layer detection result • Object type classification • Human motion analysis • Distance detection • Error rate measurement overview(1/2)

  5. Introduction • The VSAM team at CMU (Carnegie Mellon University) • Older surveillance is an "after-the-fact" system • A new system with the facility to prevent crime • Automatically detect unusual events • Raise an alarm on suspicious events • Detect objects in cluttered environments

  6. Large projects in the world • Large research projects devoted to video surveillance have been conducted in: • US – VSAM (Video Surveillance and Monitoring) • Europe – ESPRIT PASSWORDS, AVS-PV, VIEWS • Japan – the Cooperative Distributed Vision project

  7. Multisensor surveillance challenges • Actively control sensors • Fuse information into scene-level object representations • Monitor events and trigger further processing • GUI for visualization and system tasking • Identify human walking and running

  8. Surveillance Testbed Background • Placement of cameras in the current VSAM testbed system

  9. Surveillance Testbed Preliminaries • OCU (operator control unit) • SPU (sensor processing unit) • VIS (visualization nodes)

  10. Architecture

  11. Surveillance Testbed

  12. Control station integrating all sensors

  13. Surveillance site model • With known background (site) knowledge we can do: • Computation of object location (see the sketch below) • Landmark-based calibration of camera exterior orientation • Improve object tracking by prediction • Trace the object and describe its movement route • Classify object type (described later) • Visualization • Simulation for planning the best sensor placement
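
A minimal sketch of one use of the site model: computing an object's ground location by intersecting the camera viewing ray with the terrain, here simplified to a flat ground plane. The camera position, viewing direction, and flat-ground assumption are illustrative placeholders, not values from the paper.

```python
import numpy as np

def ray_ground_intersection(cam_pos, ray_dir, ground_z=0.0):
    """Intersect a viewing ray with the horizontal plane z = ground_z.

    cam_pos : (3,) camera center in world coordinates.
    ray_dir : (3,) viewing direction toward the object's image footprint.
    Returns the 3-D ground point, or None if no forward intersection exists.
    """
    cam_pos = np.asarray(cam_pos, dtype=float)
    ray_dir = np.asarray(ray_dir, dtype=float)
    if abs(ray_dir[2]) < 1e-9:          # ray (nearly) parallel to the ground
        return None
    t = (ground_z - cam_pos[2]) / ray_dir[2]
    if t <= 0:                          # intersection behind the camera
        return None
    return cam_pos + t * ray_dir

# Hypothetical camera 10 m above the ground, looking down and ahead.
print(ray_ground_intersection([0.0, 0.0, 10.0], [0.0, 1.0, -0.5]))  # ~[0, 20, 0]
```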

  14. Moving object detection • Temporal differencing • Background subtraction • Optical flow
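
A minimal sketch of the first technique, temporal differencing: pixels whose intensity changes between consecutive frames by more than a threshold are flagged as moving. The threshold value and the toy frames are illustrative assumptions.

```python
import numpy as np

def temporal_difference_mask(prev_frame, curr_frame, threshold=15):
    """Flag pixels where |I_t - I_{t-1}| exceeds a threshold.

    prev_frame, curr_frame : 2-D uint8 grayscale images of equal size.
    Returns a boolean mask that is True at potentially moving pixels.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Tiny synthetic example: a bright 2x2 "object" appears in the second frame.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[3:5, 3:5] = 200
print(temporal_difference_mask(prev, curr).sum())  # -> 4 moving pixels
```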

  15. Adaptive background subtraction • Statistical average of intensity at each pixel • Pixels that differ from this average potentially contain a moving object • Problem: objects that stop moving • A car enters a parking area and stops • The car should not be absorbed into the background • Yet its stationary pixels must act as background when detecting the motion of a person stepping out of the car
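
A minimal sketch of adaptive background subtraction using a running average of pixel intensity. The learning rate and threshold are illustrative assumptions; updating only background-like pixels is one simple way to keep a newly stopped car from being absorbed immediately, as described above.

```python
import numpy as np

def update_background(background, frame, alpha=0.05, threshold=25):
    """One step of adaptive background subtraction.

    background : float32 running-average background image.
    frame      : uint8 current grayscale frame.
    Returns (foreground_mask, updated_background).
    """
    frame_f = frame.astype(np.float32)
    foreground = np.abs(frame_f - background) > threshold
    # Blend the current frame into the background only where the pixel
    # looks like background, so a newly stopped object is not absorbed
    # right away.
    updated = np.where(foreground,
                       background,
                       (1.0 - alpha) * background + alpha * frame_f)
    return foreground, updated

# Hypothetical usage on a stream of grayscale frames:
# bg = first_frame.astype(np.float32)
# for frame in frames:
#     mask, bg = update_background(bg, frame)
```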

  16. Layer detection • Two processes • Pixel analysis • Determine whether a pixel is stationary or transient • Observe its intensity over time • A moving object causes much more intensity change than lighting variation in bad weather • Region analysis
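
A minimal sketch of the pixel-analysis step: observe each pixel's intensity over a short history and label it transient while the intensity is still changing, or stationary once it has settled (possibly at a new value, such as a parked car). The window length and thresholds are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def classify_pixel(history, motion_thresh=20, settle_thresh=5):
    """Classify one pixel from its recent intensity history.

    history : 1-D sequence of the pixel's last N intensity values.
    Returns "transient" while the intensity is still changing a lot,
    otherwise "stationary".
    """
    history = np.asarray(history, dtype=float)
    recent_range = history[-5:].max() - history[-5:].min()
    total_range = history.max() - history.min()
    if recent_range > settle_thresh and total_range > motion_thresh:
        return "transient"
    return "stationary"

# Example: intensities as a car drives past vs. drives in and parks.
driving_past = [50, 50, 120, 180, 90, 60, 140, 70]
parked       = [50, 50, 120, 200, 200, 200, 200, 200]
print(classify_pixel(driving_past), classify_pixel(parked))
```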

  17. Pixel analysis

  18. Layer detection result

  19. Object type classification(1/2) • Neural network classifier • Three-layer network • Backpropagation • Input features • Image • Blob area • Camera zoom value • Camera settings • Output classes • Single human • Human group • Vehicle • Clutter
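
A minimal sketch of a three-layer neural network trained with backpropagation, as the slide describes. The layer sizes, activation functions, learning rate, and random toy data are illustrative assumptions, not the paper's actual network or features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a handful of blob/camera features in, four classes out
# (single human, human group, vehicle, clutter).
n_in, n_hidden, n_out = 5, 8, 4
W1 = rng.normal(0, 0.5, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, n_out)); b2 = np.zeros(n_out)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy training data: random feature vectors with random class labels.
X = rng.normal(size=(64, n_in))
y = rng.integers(0, n_out, size=64)
T = np.eye(n_out)[y]                      # one-hot targets

lr = 0.1
for epoch in range(200):
    # Forward pass through the three layers (input, hidden, output).
    H = sigmoid(X @ W1 + b1)
    P = softmax(H @ W2 + b2)
    # Backpropagate the cross-entropy error.
    dZ2 = (P - T) / len(X)
    dW2, db2 = H.T @ dZ2, dZ2.sum(axis=0)
    dH = dZ2 @ W2.T * H * (1 - H)
    dW1, db1 = X.T @ dH, dH.sum(axis=0)
    # Gradient-descent updates.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("training accuracy:", (P.argmax(axis=1) == y).mean())
```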

  20. Object type classification(2/2) • Two submodules • Classify the object shape • Determine the color • LDA (linear discriminant analysis) • Feature vector • Training examples are mapped into shape space as 11-dimensional feature vectors • Color space is 3-dimensional • I1 = 10*(R+G+B)/3 • I2 = 100*(R-B)/2 • I3 = 100*(2*G - R - B)/4
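
A small sketch computing the three color features exactly as written on the slide, here applied to the mean RGB color of a detected blob. The blob pixel values are illustrative.

```python
import numpy as np

def color_features(blob_rgb):
    """Compute the slide's 3-D color features from a blob's RGB pixels.

    blob_rgb : (N, 3) array of R, G, B values for the blob's pixels.
    """
    R, G, B = np.asarray(blob_rgb, dtype=float).mean(axis=0)
    I1 = 10.0 * (R + G + B) / 3.0
    I2 = 100.0 * (R - B) / 2.0
    I3 = 100.0 * (2.0 * G - R - B) / 4.0
    return np.array([I1, I2, I3])

# Hypothetical blob of brownish pixels.
blob = np.array([[90, 60, 30], [100, 70, 40], [95, 65, 35]])
print(color_features(blob))
```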

  21. Object type classification(2/2) • k-NN • Calculate the most likely class in the LDA space • Object types include • UPS car • Campus police car • Mule: a golf-cart-like vehicle • Reasons why the system does not work well • Rain or snow • Early morning or late evening • Backlighting and specular reflection
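
A minimal k-NN sketch: a new object is assigned the most common class among its k nearest training examples in the (LDA-projected) feature space. The 2-D feature vectors, labels, and value of k are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def knn_classify(query, train_X, train_y, k=3):
    """Return the majority class among the k nearest training points."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# Hypothetical projected features for a few labeled training objects.
train_X = np.array([[0.1, 0.2], [0.2, 0.1], [2.0, 2.1], [2.2, 1.9], [4.0, 0.1]])
train_y = ["human", "human", "vehicle", "vehicle", "mule"]
print(knn_classify(np.array([1.9, 2.0]), train_X, train_y))  # -> "vehicle"
```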

  22. Object type classification

  23. Human motion analysis • Most algorithms assume that the person's image is large enough to track individual limbs • "Star" skeletonization procedure (see the sketch below) • Determine the centroid • Traverse the boundary • Is a person walking or running? • Determined from the frequency of the cyclic motion of the leg segments
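
A minimal sketch of the "star" skeletonization idea: compute the distance from the blob centroid to each boundary point while traversing the boundary, smooth that 1-D signal, and take its local maxima as the extremal points (head, hands, feet). The smoothing width and the toy star-shaped contour are illustrative assumptions; the walking/running decision from the cyclic leg motion is not included here.

```python
import numpy as np

def star_extremal_points(boundary, smooth=5):
    """Find extremal points of a blob from its ordered boundary.

    boundary : (N, 2) array of boundary points in traversal order.
    Returns the boundary points whose smoothed centroid distance is a
    local maximum -- the tips of the "star" skeleton.
    """
    boundary = np.asarray(boundary, dtype=float)
    centroid = boundary.mean(axis=0)
    dist = np.linalg.norm(boundary - centroid, axis=1)
    # Circular moving-average smoothing of the distance signal.
    kernel = np.ones(smooth) / smooth
    padded = np.concatenate([dist[-smooth:], dist, dist[:smooth]])
    smoothed = np.convolve(padded, kernel, mode="same")[smooth:-smooth]
    # Local maxima of the smoothed distance signal.
    peaks = (smoothed > np.roll(smoothed, 1)) & (smoothed >= np.roll(smoothed, -1))
    return boundary[peaks]

# Toy example: a star-like blob with five "limbs".
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
r = 10 + 3 * np.cos(5 * t)
contour = np.stack([r * np.cos(t), r * np.sin(t)], axis=1)
print(len(star_extremal_points(contour)))  # -> 5 extremal points
```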

  24. Human motion analysis

  25. Human motion analysis

  26. Human motion analysis

  27. Human motion analysis

  28. Distance detection

  29. Distance detection

  30. Error rate measurement(1/2)

  31. Error rate measurement(1/2)

  32. Error rate measurement(1/2) • Real-world distance is measured with a theodolite • Formula to calculate the error rate (a common formulation is sketched below)
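
The transcript does not reproduce the formula itself; a common formulation for such an error rate, given here only as an assumption, is the relative error between the system's estimated distance and the theodolite ground truth.

```python
def error_rate_percent(estimated_m, ground_truth_m):
    """Relative error (%) of an estimated distance against theodolite truth.

    Note: this is a generic relative-error formula, not necessarily the
    exact expression used in the paper.
    """
    return 100.0 * abs(estimated_m - ground_truth_m) / ground_truth_m

# Hypothetical example: the system estimates 103 m, the theodolite measures 100 m.
print(error_rate_percent(103.0, 100.0))  # -> 3.0
```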

  33. thanks
