
Egocentric View Transition for Video Monitoring in a Distributed Camera Network


Presentation Transcript


  1. Egocentric View Transition for Video Monitoring in a Distributed Camera Network Chairman: Hung-Chi Yang Presenter: Fong-Ren Sie Advisor: Yen-Ting Chen Date: 2013.3.20 Kuan-Wen Chen, Pei-Jyun Lee, and Yi-Ping Hung, Department of Computer Science and Information Engineering, Graduate Institute of Networking and Multimedia, National Taiwan University, Taipei, Taiwan, 2011

  2. Outline • Introduction • Methodology • Results • Conclusions • References

  3. Introduction • Multi-camera systems are used in video surveillance applications such as: • Airports • Railway security • Traffic monitoring

  4. Introduction • Multi-camera system • Advantages • Can monitor the activities of targets over a large area • Can show multiple video streams on a display simultaneously

  5. Introduction • Multi-camera system • Disadvantages • For security guards and other users of the system, monitoring becomes more difficult as the number of video streams grows. • The user must understand where the target is in the environment and the geometric relationships between the cameras.

  6. Introduction • Egocentric view transition • Avoids the uncomfortable flash effect caused by a sudden view change • Helps users easily understand the spatial relationships among the target, the cameras, and the environment

  7. Methodology • The basic concept of view transition comes from view morphing, which has been applied in: • Virtual teleconference systems • Sports broadcasting systems • Photo browsing and exploring systems
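
For illustration, a minimal sketch of the interpolation step at the core of view morphing (Seitz and Dyer, ref. 18), assuming the two input views have already been prewarped so their image planes are parallel; the full algorithm also postwarps the result, and all names below are illustrative:

```python
import numpy as np

def morph_rectified_views(img0, img1, s):
    """Blend two prewarped (rectified) views for a virtual camera at
    interpolation parameter s in [0, 1].

    View morphing prewarps the images so the cameras are parallel,
    linearly interpolates, then postwarps back; this sketch covers only
    the middle step and assumes img0/img1 are pixel-aligned float
    arrays in [0, 1].
    """
    assert 0.0 <= s <= 1.0
    return (1.0 - s) * img0 + s * img1
```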

  8. Methodology • To monitor multiple cameras, some prior works embed video surveillance images in a 3D model, using projective texture mapping to integrate the live video streams with the model (see the sketch below)
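
A minimal sketch of the projective texture mapping step, assuming a calibrated camera: each 3D model vertex is projected through the camera's 3x4 projection matrix, and the resulting image coordinates serve as that vertex's texture coordinates, so the live frame drapes over the model. Function and parameter names are illustrative:

```python
import numpy as np

def projective_texture_coords(vertices, P, img_w, img_h):
    """Map 3D model vertices to texture coordinates in a camera image.

    vertices: (n, 3) world-space points on the model surface.
    P: 3x4 camera projection matrix (intrinsics times extrinsics).
    Returns (n, 2) texture coordinates normalized to [0, 1].
    """
    n = vertices.shape[0]
    homo = np.hstack([vertices, np.ones((n, 1))])  # (n, 4) homogeneous
    proj = (P @ homo.T).T                          # (n, 3) image plane
    uv = proj[:, :2] / proj[:, 2:3]                # perspective divide
    return uv / np.array([img_w, img_h])           # normalize to [0, 1]
```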

  9. Methodology • Multi-camera Tracking • In the overlapping case • Multi-camera tracking is performed by comparing the 3D positions estimated from each camera (sketched below). • In the non-overlapping case • Targets are tracked across non-overlapping cameras based on both spatio-temporal and appearance cues.
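
For the overlapping case, a minimal sketch of associating targets by comparing the 3D positions estimated independently from each camera; the greedy matching and the distance threshold are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def associate_by_3d_position(pos_a, pos_b, max_dist=0.5):
    """Greedily match targets seen by two overlapping cameras.

    pos_a: (n, 3) and pos_b: (m, 3) are 3D target positions estimated
    from each camera. Two detections are taken to be the same target
    when their positions agree within max_dist (an assumed threshold,
    in the same units as the positions).
    """
    matches, used = [], set()
    if len(pos_b) == 0:
        return matches
    for i, p in enumerate(pos_a):
        dists = np.linalg.norm(pos_b - p, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist and j not in used:
            matches.append((i, j))
            used.add(j)
    return matches
```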

  10. Methodology • Background Texture Adaptation • We compute R_r, the ratio of the texture pixel density of the real camera to that of the virtual camera (the defining equation appeared as a figure on the slide): • R_r > 1 → paste the texture captured by the cameras • R_r < 1 → use the grid texture
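
Since the equation appears only as a figure on the slide, the rule below reconstructs it from the slide's wording; the exact definition of R_r is an assumption:

```python
def choose_background_texture(density_real, density_virtual):
    """Background texture adaptation rule.

    R_r is read here as the texture pixel density delivered by the real
    camera divided by the density the virtual view requires
    (reconstructed from the slide's wording, not the paper's formula).
    """
    r_r = density_real / density_virtual
    # R_r > 1: the captured texture is detailed enough -> paste it.
    # R_r < 1: pasting would stretch and blur it -> use the grid texture.
    return "captured" if r_r > 1.0 else "grid"
```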

  11. Conclusions • Egocentric view transition synthesizes the virtual views shown when switching cameras • For cameras with overlapping FOVs • We presented a framework that builds a foreground billboard and places it in the 3D model

  12. Conclusions • For cameras with non-overlapping FOVs • To achieve a better view transition effect, we use a particle system to visualize the probability distribution of where the target is in the blind region (see the sketch below) • We also give a rule for setting virtual camera positions and a background texture adaptation method
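
A minimal sketch of the particle-system idea for the blind region, under an assumed drift-and-diffuse motion model (parameter names are illustrative): particles drift along the target's last estimated velocity and spread over time, so the rendered cloud widens as uncertainty grows.

```python
import numpy as np

def propagate_particles(particles, velocity, dt, noise_std=0.1):
    """One update step for particles visualizing where a target may be
    while it crosses the blind region between non-overlapping cameras.

    particles: (n, 2) ground-plane positions.
    velocity:  (2,) mean motion estimated when the target left the
               last camera's field of view.
    Each step drifts the cloud along the motion model and adds
    Gaussian spread, widening the visualized probability distribution.
    """
    drift = velocity * dt
    spread = np.random.normal(0.0, noise_std, particles.shape)
    return particles + drift + spread
```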

  13. References • 1. Chen, K.W., Lai, C.C., Hung, Y.P., Chen, C.S.: An Adaptive Learning Method for Target Tracking across Multiple Cameras. In: CVPR (2008) • 2. Debevec, P.E., Taylor, C.J., Malik, J.: Modeling and Rendering Architecture from Photographs: A Hybrid Geometry- and Image-Based Approach. In: SIGGRAPH (1996) • 3. Finke, R.A.: Principles of Mental Imagery. MIT Press, Cambridge (1989) • 4. Girgensohn, A., Kimber, D., Vaughan, J., Yang, T., Shipman, F., Turner, T., Rieffel, E., Wilcox, L., Chen, F., Dunnigan, T.: DOTS: Support for Effective Video Surveillance. In: MULTIMEDIA (2007) • 5. Hsiao, C.H., Huang, W.C., Chen, K.W., Chang, L.W., Hung, Y.P.: Generating Pictorial-Based Representation of Mental Image for Video Monitoring. In: IUI (2009) • 6. Horprasert, T., Harwood, D., Davis, L.: A Statistical Approach for Real-Time Robust Background Subtraction and Shadow Detection. In: FRAME-RATE Workshop (1999)

  14. References • 7. Haan, G., Scheuer, J., Vries, R., Post, F.H.: Egocentric Navigation for Video Surveillance in 3D Virtual Environments. In: IEEE Symposium on 3D User Interfaces (2009) • 8. Hartley, R.I., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, Cambridge (2004) • 9. Katkere, A., Moezzi, S., Kuramura, D.Y., Kelly, P., Jain, R.: Towards Video-Based Immersive Environments. Multimedia Systems 5(2), 69–85 (1997) • 10. Kanade, T., Narayanan, P., Rander, P.: Virtualized Reality: Concept and Early Results. Tech. Rep. CMU-CS-95-153 (1995) • 11. Levenberg, K.: A Method for the Solution of Certain Non-Linear Problems in Least Squares. Quarterly of Applied Mathematics 2, 164–168 (1944) • 12. Lei, B., Hendriks, E.: Real-Time Multi-Step View Reconstruction for a Virtual Teleconference System. EURASIP J. Appl. Signal Process 2002(10), 1067–1088 (2002)

  15. References • 13. Neumann, U., You, S., Hu, J., Jiang, B., Lee, J.: Augmented Virtual Environment (AVE): Dynamic Fusion of Imagery and 3D Models. In: IEEE Virtual Reality (2003) • 14. Ohta, Y., Kitahara, I., Kameda, Y., Ishikawa, H., Koyama, T.: Live 3D Video in Soccer Stadium. IJCV 75(1), 173–187 (2007) • 15. Palmer, S.: Vision Science: Photons to Phenomenology. MIT Press, Cambridge (1999) • 16. Reeves, W.T.: Particle Systems - A Technique for Modeling a Class of Fuzzy Objects. ACM Transactions on Graphics 2, 91–108 (1983) • 17. Sawhney, H.S., Arpa, A., Kumar, R., Samarasekera, S., Aggarwal, M., Hsu, S., Nister, D., Hanna, K.: Video Flashlights: Real Time Rendering of Multiple Videos for Immersive Model Visualization. In: EGRW (2002) • 18. Seitz, S., Dyer, C.: View Morphing. In: SIGGRAPH (1996) • 19. Stauffer, C., Grimson, W.E.L.: Learning Patterns of Activity Using Real-Time Tracking. IEEE Transactions on PAMI 22(8), 747–757 (2000)

  16. References • 20. Segal, M., Korobkin, C., Widenfelt, R., Foran, J., Haeberli, P.: Fast Shadows and Lighting Effects Using Texture Mapping. In: SIGGRAPH (1992) • 21. Snavely, N., Seitz, S.M., Szeliski, R.: Photo Tourism: Exploring Photo Collections in 3D. In: SIGGRAPH (2006) • 22. Thorndyke, P., Hayes-Roth, B.: Differences in Spatial Knowledge Acquired from Maps and Navigation. Cognitive Psychology 14(4), 560–589 (1982) • 23. Welch, G., Bishop, G.: An Introduction to the Kalman Filter. Tech. Rep., Chapel Hill, NC, USA (1995) • 24. Wang, Y., Krum, D.M., Coelho, E.M., Bowman, D.A.: Contextualized Videos: Combining Videos with Environment Models to Support Situational Understanding. IEEE TVCG 13(6), 1568–1575 (2007) • 25. Zhang, Z.: A Flexible New Technique for Camera Calibration. IEEE Transactions on PAMI 22, 1330–1334 (2000)

  17. Thank you for your attention
