
Research Activities at Florida State Vision Group



  1. Research Activities at Florida State Vision Group Xiuwen Liu Florida State Vision Group Department of Computer Science Florida State University http://fsvision.cs.fsu.edu Group members: Lei Cheng, Donghu Sun, Yunxun Wang, Chris Waring, Qiang Zhang,

  2. Outline • Introduction • What is my research all about? • Some applications of computer vision • How useful are the computer vision techniques? • Samples of my research work • What have I done? • Some of the research projects in my group • What is going on within my group? • Contact information • How to contact me?

  3. Introduction • An image patch represented by hexadecimals

  4. Introduction - continued

  5. Introduction - continued • Fundamental problem in computer vision • Given a matrix of numbers representing an image, or a sequence of images, how to generate a perceptually meaningful description of the matrix? • An image can be a color image, a gray-level image, or another format such as a remote sensing image • A two-dimensional matrix represents a single image • A three-dimensional matrix represents a sequence of images • A video sequence is a 3-D matrix • A movie is also a 3-D matrix
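To make the "matrix of numbers" idea concrete, here is a minimal sketch, assuming NumPy; the pixel values below are made up for illustration, not taken from the slide's image patch. It shows a gray-level image as a 2-D matrix, the same values in hexadecimal, and an image sequence as a 3-D matrix.

```python
import numpy as np

# A tiny 4x4 gray-level "image": a 2-D matrix of intensities in 0-255.
# (Values are made up for illustration.)
patch = np.array([
    [ 52,  55,  61,  66],
    [ 70,  61,  64,  73],
    [ 63,  59,  55,  90],
    [109,  85,  69,  72],
], dtype=np.uint8)

# The same values written as hexadecimals, as on the earlier slide.
print([[format(int(v), "02x") for v in row] for row in patch])

# A sequence of images (a video) is a 3-D matrix: frames x rows x columns.
video = np.stack([patch, patch], axis=0)
print(video.shape)  # (2, 4, 4)
```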

  6. Introduction - continued • Why do we want to work on this problem? • It is very interesting theoretically • It involves many disciplines to develop a computational model for the problem • It has many practical applications • Internet applications • Movie-making applications • Military applications

  7. Computer Vision Applications • Eye Vision • Developed by Carnegie Mellon • It captures a dynamic event using multiple cameras and it can then synthesize new views • http://www.ri.cmu.edu/events/sb35/tksuperbowl.html

  8. Computer Vision Applications - continued • No Hands Across America • Sponsored by Delco Electronics, AssistWare Technology, and Carnegie Mellon University • Navlab 5 drove from Pittsburgh, PA to San Diego, CA, using the RALPH computer program • The trip covered 2,849 miles, of which 2,797 miles (98.2%) were driven automatically with no hands

  9. Computer Vision Applications – continued

  10. Computer Vision Applications – continued

  11. Computer Vision Applications – continued • Military applications • Automated target recognition

  12. Computer Vision Applications – continued

  13. Computer Vision Applications – continued • Extracted hydrographic regions

  14. Computer Vision Applications – continued • Medical image analysis • Characterize different types of tissues in medical images for automated medical image analysis

  15. Computer Vision Applications – continued

  16. Computer Vision Applications – continued • Biometrics • Identification from faces, fingerprints, iris patterns, and so on • Many applications, such as ATM withdrawal and credit card management

  17. Computer Vision Applications – cont. • Iris pattern recognition http://www.cl.cam.ac.uk/users/jgd1000/iris_recognition.html • Companies in several countries are now using these algorithms in a variety of products. Information about them can be found on the following websites: • Iridian Technologies, USA • IrisAccess LG Corp, South Korea • IrisPass OKI Electric Industries, Japan • EyeTicket Eyeticket Corporation, USA (ticketless air travel) • NCR CashPoint Machines NCR Corp, UK • Diebold ATMs Diebold Inc., USA • British Telecommunications, UK • The Nationwide Building Society, UK

  18. Computer Vision Applications – cont.

  19. Computer Vision Applications – continued • Content-based image retrieval has been an active research area, aiming to search images on the web in a meaningful way • Color histograms have been widely used as image features
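As an illustration of the color-histogram approach mentioned above, here is a minimal sketch, assuming NumPy; the query and database images are random stand-ins, and histogram intersection is just one of several common similarity measures.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Joint RGB histogram (bins^3 cells), normalized to sum to 1.
    img: H x W x 3 uint8 array."""
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 256),) * 3)
    return h.ravel() / h.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical histograms."""
    return np.minimum(h1, h2).sum()

# Hypothetical query and database images (random stand-ins here).
rng = np.random.default_rng(0)
query = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
database = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(5)]

hq = color_histogram(query)
scores = [histogram_intersection(hq, color_histogram(img)) for img in database]
ranking = np.argsort(scores)[::-1]   # best matches first
print("retrieval order:", ranking)
```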

  20. Content-Based Image Retrieval – cont.

  21. Content-Based Image Retrieval – cont. • Query image and the top five retrieved results (1st–5th)

  22. Vision-Based Image Morphing

  23. Vision-Based Image Morphing - continued

  24. My Research Work in the Last Few Years • Image modeling and synthesis • Low dimensional representations of images for recognition • Analytical probabilistic models of images

  25. Image Modeling • Is there a common feature that characterizes all these images perceptually?

  26. Spectral Representation – continued • Given a set of filters, a spectral representation of an image consists of the marginal distributions of the filtered images • Figure: an input image and its spectral representation
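A minimal sketch of this definition, assuming SciPy/NumPy and a toy three-filter bank; the group's actual filter bank is not specified here.

```python
import numpy as np
from scipy.signal import convolve2d

def spectral_representation(img, filters, bins=16):
    """Concatenate the marginal histograms of the filtered images.
    img: 2-D gray-level array; filters: list of 2-D kernels."""
    histograms = []
    for f in filters:
        response = convolve2d(img, f, mode="same", boundary="symm")
        h, _ = np.histogram(response, bins=bins,
                            range=(response.min(), response.max()))
        histograms.append(h / h.sum())   # normalized marginal distribution
    return np.concatenate(histograms)

# A toy filter bank: intensity, horizontal and vertical gradients (assumed here).
filters = [np.array([[1.0]]),
           np.array([[-1.0, 1.0]]),
           np.array([[-1.0], [1.0]])]

img = np.random.default_rng(1).random((64, 64))
print(spectral_representation(img, filters).shape)   # (3 * 16,)
```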

  27. Deriving Spectral Representation • Partitioning filters in the frequency and spatial domains • Partition of the frequency domain • Figure: a filter viewed as a surface

  28. Deriving Spectral Representation - continued • Learning filters from training images as independent filters (panels a–c)

  29. Image Modeling - continued • Image synthesis • Given some feature statistics, how to generate samples from the Julesz ensemble • The main technical difficulty is the dimension of the image space • If the image size is 256x256 and each pixel can take 8 values, there are 8^65536 different images • Markov chain Monte-Carlo algorithms

  30. Image Synthesis Through Sampling • Given observed feature statistics {H^(a)_obs}, we associate an energy with any image I, e.g. E(I) = Σ_a |H^(a)(I) − H^(a)_obs| • The corresponding Gibbs distribution is q(I) ∝ exp(−E(I)/T), where T is a temperature parameter • q(I) can be sampled using a Gibbs sampler or other Markov chain Monte-Carlo algorithms

  31. Texture Synthesis Through Sampling - continued • Image synthesis algorithm • Compute {H_obs} from an observed texture image • Initialize I_syn as any image, and T as T0 • Repeat: randomly pick a pixel v in I_syn; calculate the conditional probability q(I_syn(v) | I_syn(-v)); choose a new I_syn(v) under q(I_syn(v) | I_syn(-v)); reduce T gradually • Until E(I_syn) < ε
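Below is a compressed sketch of this sampling loop. It assumes a `feature` function that returns a feature-statistics vector (for example the `spectral_representation` sketched earlier); it is an illustrative stand-in, not the group's implementation, and it is far too slow for real images because it re-evaluates the full energy for every candidate gray level.

```python
import numpy as np

def synthesize(observed, feature, levels=8, T0=1.0, cooling=0.999,
               eps=1e-3, max_steps=200_000, seed=0):
    """Gibbs-sampler texture synthesis by matching feature statistics.
    feature(img) -> 1-D statistics vector (e.g. a spectral histogram)."""
    rng = np.random.default_rng(seed)
    h_obs = feature(observed)
    energy = lambda img: np.abs(feature(img) - h_obs).sum()

    syn = rng.integers(0, levels, observed.shape)   # initialize with any image
    T = T0
    for _ in range(max_steps):
        r, c = rng.integers(0, syn.shape[0]), rng.integers(0, syn.shape[1])
        # Conditional distribution of pixel (r, c) given the rest of the image:
        # evaluate the energy for every candidate gray level at that pixel.
        energies = np.empty(levels)
        for v in range(levels):
            syn[r, c] = v
            energies[v] = energy(syn)
        p = np.exp(-(energies - energies.min()) / T)
        p /= p.sum()
        syn[r, c] = rng.choice(levels, p=p)   # sample the new pixel value
        T *= cooling                          # reduce T gradually
        if energies.min() < eps:              # stop once the statistics match
            break
    return syn
```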

  32. A Texture Synthesis Example • Observed image and initial synthesized image

  33. A Texture Synthesis Example • Energy and conditional probability of the marked pixel • Figure panels: temperature, image patch, energy, conditional probability

  34. A Texture Synthesis Example - continued • A white-noise image was transformed into a perceptually similar texture by matching the spectral histogram • Figure: average spectral histogram error

  35. A Texture Synthesis Example - continued • Synthesized images from different initial conditions

  36. Texture Synthesis Examples - continued • A random texture image • Observed image and synthesized image

  37. Texture Synthesis Examples - continued • An image with periodic structures • Observed image and synthesized image

  38. Texture Synthesis Examples - continued • A mud image with some animal footprints • Mud image and synthesized image

  39. Texture Synthesis Examples - continued • A random texture image with elements • Observed image and synthesized image

  40. Texture Synthesis Examples - continued • A cheetah skin image • Original cheetah skin patch and synthesized image

  41. Texture Synthesis Examples - continued • An image consisting of circles • Observed image and synthesized image

  42. Texture Synthesis Examples - continued • An image consisting of crosses • Observed image and synthesized image

  43. Texture Synthesis Examples - continued • A pattern with long-range structures • Observed image and synthesized image

  44. Comparison with Texture Synthesis Method • Example from Heeger and Bergen’s algorithm (1995)* • Observed image, Heeger and Bergen’s result, and our result • *Implemented by T. F. El-Maraghi, available at http://www.cs.toronto.edu/~tem/2522/texture.html

  45. Comparison with Texture Synthesis Method - continued • Another example from Heeger and Bergen’s algorithm • Cross image, Heeger and Bergen’s result, and our result

  46. Low Dimensional Representations of Images for Recognition • In recent years, principal component analysis, Fisher discriminant analysis, and independent component analysis have been widely used for dimension reduction in appearance-based recognition • Each object type is represented by a linear subspace learned from a representative set of training images • A classifier is learned based on the training set • A new image is classified based on its linear representation
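A minimal sketch of the PCA variant of this pipeline (a stand-in for illustration, not the group's code), assuming vectorized training images with known labels and a nearest-neighbour classifier in the subspace.

```python
import numpy as np

def fit_pca(X, k):
    """X: n_samples x n_pixels matrix of vectorized training images.
    Returns the mean image and the top-k principal directions."""
    mean = X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(x, mean, basis):
    return basis @ (x - mean)          # k-dimensional representation

def classify(x, mean, basis, train_codes, train_labels):
    """Nearest neighbour in the low-dimensional subspace."""
    code = project(x, mean, basis)
    d = np.linalg.norm(train_codes - code, axis=1)
    return train_labels[int(np.argmin(d))]

# Hypothetical data: 20 training images of 32x32 pixels, two classes.
rng = np.random.default_rng(2)
X = rng.random((20, 32 * 32))
y = np.array([0] * 10 + [1] * 10)
mean, basis = fit_pca(X, k=5)
codes = np.array([project(x, mean, basis) for x in X])
print(classify(X[3], mean, basis, codes, y))   # recovers label 0
```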

  47. Linear Representation • Under the linear representation, an observed image window I is assumed to be generated by a linear combination of K hidden factors (basis images) u_1, ..., u_K: I = Σ_{i=1}^{K} a_i u_i = U a • Under the linear assumption, recovering the representation a given an input I is through the pseudo-inverse: a = (U^T U)^{-1} U^T I = U^+ I
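A small worked example of the pseudo-inverse step, with a hypothetical non-orthogonal basis U and noise-free data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pixels, K = 256, 10

# Hypothetical basis: K hidden factors (columns of U), not necessarily orthogonal.
U = rng.random((n_pixels, K))
a_true = rng.random(K)
I = U @ a_true                     # image generated as a linear combination

# Recover the coefficients with the pseudo-inverse: a = (U^T U)^{-1} U^T I.
a_hat = np.linalg.pinv(U) @ I
print(np.allclose(a_hat, a_true))  # True (no noise in this toy example)
```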

  48. Linear Subspaces of Images – continued • In the linear representation framework, each pixel is associated with a random variable • A critical assumption is that each pixel needs to correspond to a meaningful event for the subsequent analysis to be meaningful • This assumption, however, is often not valid due to translation, scaling, and other deformations

  49. Spectral Representation for Recognition • To make the assumption valid under some deformations, we propose a spectral representation • We represent each image by its underlying probability distributions, not by the vector given by the projection onto a basis • This is done by breaking the images into roughly independent channels and representing each by its marginal distribution • We then use linear subspaces in the spectral representation space, resulting in IPCA, IICA, and IFDA
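A minimal sketch of the IPCA idea (an illustrative stand-in under assumed filters, not the published method): compute a spectral representation for each image, then apply PCA in that feature space rather than on raw pixels.

```python
import numpy as np
from scipy.signal import convolve2d

def spectral_feature(img, filters, bins=16):
    """Concatenated marginal histograms of the filtered images."""
    parts = []
    for f in filters:
        r = convolve2d(img, f, mode="same", boundary="symm")
        h, _ = np.histogram(r, bins=bins, range=(r.min(), r.max()))
        parts.append(h / h.sum())
    return np.concatenate(parts)

# Hypothetical filter bank and image set.
filters = [np.array([[1.0]]), np.array([[-1.0, 1.0]]), np.array([[-1.0], [1.0]])]
rng = np.random.default_rng(4)
images = [rng.random((32, 32)) for _ in range(12)]

# PCA on the spectral features ("IPCA"), instead of on raw pixel vectors.
F = np.array([spectral_feature(img, filters) for img in images])
mean = F.mean(axis=0)
_, _, Vt = np.linalg.svd(F - mean, full_matrices=False)
codes = (F - mean) @ Vt[:5].T   # 5-dimensional representation per image
print(codes.shape)              # (12, 5)
```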

  50. Comparison of Spaces Through Synthesis • Synthesis using eigenface representations • Typical samples with identical eigen representations • Original and reconstructed images
