Reading Notes: Special Issue on Distributed Smart Cameras, Proceedings of the IEEE


  1. Reading Notes: Special Issue on Distributed Smart Cameras, Proceedings of the IEEE
     Mahmut Karakaya, Graduate Student
     Electrical Engineering and Computer Science, University of Tennessee, Knoxville
     Email: mkarakay@utk.edu

  2. Outline
     • Overview of papers from Proceedings of the IEEE, 96(10), October 2008:
       • "An Introduction to Distributed Smart Cameras" by Rinner and Wolf
       • "Object Detection, Tracking and Recognition for Multiple Smart Cameras" by Sankaranarayanan, Veeraraghavan, and Chellappa
       • "Calibrating Distributed Camera Networks" by Devarajan, Cheng, and Radke
     • Introduction to Visual Sensor Networks (VSNs)
     • Limitations of Visual Sensor Platforms
     • Applications of VSNs
     • Challenges in Visual Sensors
     • Trends in VSNs
     • Basic Concepts for Target Detection, Tracking, and Recognition
     • Camera Calibration with Belief Propagation

  3. Introduction to Visual Sensor Networks
     What is MSP? What is a VSN? A visual sensor node combines:
     • Visual sensing,
     • Computing,
     • Wireless communication,
     • Collaborative visual computing: centralized or distributed.
     Collaboration in a VSN serves:
     • To compensate for the limitations of each sensor node,
     • To improve the accuracy and robustness of the sensor network.
     * Figure courtesy of [1]

  4. Limitations of Visual Sensor Platforms
     Small size, low power, and low cost imply:
     • Low processing speed: microprocessors and FPGAs.
     • Small memory space: huge data volumes (images/video) become the bottleneck of system performance.
     • Low-bandwidth communication: low-level communication protocols such as ZigBee; the cost of communication exceeds the cost of computation by a factor of more than 1000 (a rough budget sketch follows below).
     • Scarce energy sources: powered by battery.
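     A rough back-of-the-envelope sketch of that 1000x gap in Python; the per-bit and per-operation energy figures, frame size, pipeline cost, and summary size below are all assumed illustrative numbers, not measurements from [1]:

        # Illustrative energy budget for one 640x480 grayscale frame.
        # All constants are assumed round numbers chosen only to reflect
        # the ~1000x communication/computation cost gap.
        E_TX_PER_BIT = 1e-6   # J per transmitted bit (assumed)
        E_OP = 1e-9           # J per arithmetic operation (assumed)

        frame_bits = 640 * 480 * 8
        ops_per_pixel = 100   # e.g., a local filtering/detection pipeline

        e_send_raw = frame_bits * E_TX_PER_BIT
        e_process = 640 * 480 * ops_per_pixel * E_OP
        e_send_summary = 1000 * 8 * E_TX_PER_BIT  # 1 kB detection summary

        print(f"transmit raw frame:             {e_send_raw:.3f} J")
        print(f"process locally + send summary: {e_process + e_send_summary:.3f} J")

     Under these assumptions, shipping the raw frame costs roughly 2.5 J while local processing plus a compact summary costs under 0.04 J, which is why VSN nodes compute before they communicate.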

  5. Applications
     • Intelligent video surveillance systems: detect abnormal behaviors in the observed scene.
     • Intelligent transportation systems: traffic monitoring, intelligent automobiles (inside and outside the vehicle).
     • Medicine: monitor patients; view live retinal images during surgery.
     • Entertainment and smart environments: gesture recognition.

  6. Challenges in Visual Sensors
     • Limited field of view (FOV):
       • Directional sensing; the angle of view of a normal lens is 25°–50° (computed from the focal length and sensor width, as sketched below).
     • Visual occlusion:
       • Static and dynamic.
       • A camera captures a target only when the target stands in its field of view and no other targets occlude it.
     * Figure courtesy of [1]
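     A minimal sketch of where such angle-of-view numbers come from, assuming a rectilinear lens; the sensor width and focal length are example values:

        import math

        def angle_of_view_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
            """Horizontal angle of view of a rectilinear lens."""
            return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

        # A 36 mm-wide sensor behind a 50 mm "normal" lens gives ~39.6 degrees,
        # inside the 25-50 degree range quoted above.
        print(angle_of_view_deg(36.0, 50.0))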

  7. Trends in VSNs
     • From static to dynamic and adaptive:
       • Execute adaptive algorithms to better account for changes in the observed scene,
       • Exploit static and mobile cameras, such as PTZ cameras,
       • Change camera functionality to provide increased autonomy.
     • From small to very large camera sets:
       • Consider new types of information to extract from the cameras.
     • From vision-only to multi-sensor systems:
       • Integrate different sensors (e.g., audio, seismic, thermal),
       • Exploit the distinct characteristics of the individual sensors, resulting in an enhanced overall output.

  8. Distributed Computing & Distributed Camera Networks
     • Distributed computing: a network of processors in which nodes have no direct knowledge of the state of other nodes.
     • Distributed algorithms are designed to minimize the number of messages required to complete the algorithm.
     • Processing all the data centrally poses several problems because of the data volume, so each node processes its data locally before transmission (see the sketch below).
     • Data go only to the nodes that need them; partitioning the network uses bandwidth efficiently.
     • Real-time considerations: a distributed system can ensure that only relevant nodes are involved in a given decision.
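     A minimal sketch of the "process locally, transmit only results" idea. The SmartCameraNode class and its message format are hypothetical, and OpenCV's MOG2 background subtractor stands in for whatever detector a real node would run:

        import cv2

        class SmartCameraNode:
            """Hypothetical node: detects motion locally, emits compact messages."""

            def __init__(self, node_id: int):
                self.node_id = node_id
                self.bg = cv2.createBackgroundSubtractorMOG2()

            def process_frame(self, frame):
                # Foreground mask computed on the node itself.
                mask = self.bg.apply(frame)
                contours, _ = cv2.findContours(
                    mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
                boxes = [cv2.boundingRect(c) for c in contours
                         if cv2.contourArea(c) > 500]  # assumed size threshold
                # A few bytes per detection instead of ~300 kB per raw frame.
                return {"node": self.node_id, "detections": boxes}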

  9. Basic Concepts for Target Detection and Tracking in VSNs
     • Central projection: maps targets from the real world to the image plane via the projection matrix.
     • Epipolar geometry: epipolar lines relating the focal points of two cameras and an imaged point are used to obtain correspondences across multiple views.
     • Triangulation: the correspondences between multiple views are used to localize targets (projection and triangulation are sketched below).
     * Figure courtesy of [2]
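     A minimal numpy sketch tying these concepts together: a world point is projected through two assumed camera matrices, and linear (DLT) triangulation recovers it from the resulting correspondence:

        import numpy as np

        def project(P, X):
            """Central projection: map homogeneous world point X through 3x4 matrix P."""
            x = P @ X
            return x[:2] / x[2]

        def triangulate(P1, x1, P2, x2):
            """Linear (DLT) triangulation from one correspondence across two views."""
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]
            return X[:3] / X[3]

        # Two assumed cameras: identity pose, and a 1-unit baseline along x.
        P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
        X = np.array([0.3, -0.2, 5.0, 1.0])  # homogeneous world point
        print(triangulate(P1, project(P1, X), P2, project(P2, X)))  # ~[0.3 -0.2 5.0]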

  10. Feature Extraction for Target Recognition in VSNs
      • Global properties:
        • e.g., shape, color, texture;
        • Very sensitive to external conditions such as lighting, pose, and viewpoint.
      • Local features:
        • e.g., discriminative points;
        • Discriminative points are chosen using feature detectors such as the Harris corner detector and the scale-invariant feature transform (SIFT), as in the sketch below;
        • The object is described by these local descriptors, which are robust to viewpoint and illumination changes.
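      A minimal OpenCV sketch of extracting SIFT local features; the image path is a placeholder:

        import cv2

        img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
        sift = cv2.SIFT_create()
        keypoints, descriptors = sift.detectAndCompute(img, None)
        # Each keypoint is a discriminative local point; each descriptor is a
        # 128-D vector that is robust to viewpoint and illumination changes.
        print(len(keypoints), descriptors.shape)  # N keypoints, (N, 128)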

  11. Geometry of the Camera & Parameters
      • Typically, a camera is described by two sets of parameters.
      • The internal (intrinsic) parameters:
        • The focal length,
        • The position of the principal point,
        • The skew.
      • The external (extrinsic) parameters describe the placement of the camera in a world coordinate system using:
        • The rotation matrix and
        • The translation vector.
      • The two sets combine into a single projection matrix, as sketched below.
      * Figure courtesy of Satya Prakash Mallick
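      The two parameter sets combine into the 3x4 projection matrix P = K[R | t] used in slide 9; a minimal sketch with assumed example values:

        import numpy as np

        # Internal (intrinsic) parameters: focal length f, principal point
        # (cx, cy), and skew s. All values here are assumed examples.
        f, cx, cy, s = 800.0, 320.0, 240.0, 0.0
        K = np.array([[f, s, cx],
                      [0, f, cy],
                      [0, 0, 1.0]])

        # External (extrinsic) parameters: rotation R and translation t
        # placing the camera in the world coordinate system.
        theta = np.deg2rad(10.0)  # assumed small rotation about the y-axis
        R = np.array([[np.cos(theta), 0, np.sin(theta)],
                      [0, 1, 0],
                      [-np.sin(theta), 0, np.cos(theta)]])
        t = np.array([[0.5], [0.0], [2.0]])

        P = K @ np.hstack([R, t])  # full 3x4 projection matrix
        print(P)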

  12. Neighborhood Clustering
      Figure: (a) A snapshot of the instantaneous state of a camera network, indicating the fields of view of ten cameras. (b) The associated communication graph. (c) The associated vision graph.
      • Vision graph: detect feature points in each image, then match the detected features against those of neighboring nodes; two nodes share an edge when their views overlap (a construction sketch follows below).
      * Figure courtesy of [3]
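      A minimal sketch of constructing a vision graph: two camera nodes become neighbors when enough SIFT descriptors match between their images. The match threshold and ratio-test constant are assumed values:

        import itertools
        import cv2

        def vision_graph(images, min_matches=20):
            """Add edge (i, j) iff images i and j share enough SIFT matches."""
            sift = cv2.SIFT_create()
            feats = [sift.detectAndCompute(img, None) for img in images]
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            edges = []
            for i, j in itertools.combinations(range(len(images)), 2):
                matches = matcher.knnMatch(feats[i][1], feats[j][1], k=2)
                # Lowe's ratio test keeps only distinctive matches.
                good = [p[0] for p in matches
                        if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
                if len(good) >= min_matches:
                    edges.append((i, j))
            return edges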

  13. Feature Point Detection
      Figure: (a) Original image, with detected feature points overlaid. (b) Top view of the reconstructed 3-D scene and camera configuration for the experiment.
      • Scale-invariant feature transform (SIFT): detects "blobs" by processing the image at multiple scales with a difference-of-Gaussians (DoG) filter and returning the scale-space extrema of the result (see the sketch below).
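      A minimal sketch of the DoG step only (full SIFT additionally localizes, filters, and describes the extrema); the sigma schedule and image path are assumed:

        import cv2
        import numpy as np

        def dog_pyramid(img, sigmas=(1.0, 1.6, 2.56, 4.1)):
            """Difference-of-Gaussians stack: adjacent blur levels subtracted."""
            blurred = [cv2.GaussianBlur(img.astype(np.float32), (0, 0), s)
                       for s in sigmas]
            return [blurred[i + 1] - blurred[i] for i in range(len(blurred) - 1)]

        # Blobs appear as extrema of the DoG response across space and scale.
        img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
        dogs = dog_pyramid(img)
        print([float(np.abs(d).max()) for d in dogs])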

  14. Belief Propagation
      • Each camera node estimates the calibration parameters of its neighboring cameras and updates them in a distributed algorithm until an accurate calibration is reached.
      • Summary of BP-based calibration. Each node:
        • Independently forms a neighborhood cluster,
        • Performs local calibration based on common scene points.
      • Calibrated nodes and scene points are incrementally merged into a common coordinate frame (an illustrative consensus sketch follows below).
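      A minimal sketch of the message-passing flavor of such a scheme, not the actual algorithm of [3]: each node repeatedly nudges its parameter estimate toward the average of its neighbors' estimates until the network agrees. The graph, parameter vectors, and update rule are all assumed for illustration:

        import numpy as np

        def distributed_consensus(estimates, neighbors, rounds=50, alpha=0.5):
            """Each node moves its estimate toward the mean of its neighbors'.

            estimates: {node_id: parameter vector}; neighbors: {node_id: [ids]}.
            Illustrative consensus update, not the BP algorithm of [3].
            """
            est = {k: v.astype(float).copy() for k, v in estimates.items()}
            for _ in range(rounds):
                new = {}
                for node, nbrs in neighbors.items():
                    msg = np.mean([est[n] for n in nbrs], axis=0)  # neighbor "beliefs"
                    new[node] = (1 - alpha) * est[node] + alpha * msg
                est = new
            return est

        # Three cameras with noisy initial guesses of a shared 2-D parameter.
        estimates = {0: np.array([1.1, 0.9]), 1: np.array([0.8, 1.2]),
                     2: np.array([1.0, 1.0])}
        neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
        print(distributed_consensus(estimates, neighbors))  # all converge to ~[0.97, 1.03]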

  15. Questions
      Thank you.

  16. References
      [1] B. Rinner and W. Wolf, "An Introduction to Distributed Smart Cameras," Proceedings of the IEEE, 96(10), pp. 1565-1575, October 2008.
      [2] A. C. Sankaranarayanan, A. Veeraraghavan, and R. Chellappa, "Object Detection, Tracking and Recognition for Multiple Smart Cameras," Proceedings of the IEEE, 96(10), pp. 1606-1624, October 2008.
      [3] D. Devarajan, Z. Cheng, and R. J. Radke, "Calibrating Distributed Camera Networks," Proceedings of the IEEE, 96(10), pp. 1625-1639, October 2008.
