
IIIT-B Computer Vision, Fall 2006
Lecture 1: Introduction to Computer Vision

Arvind Lakshmikumar

Technology Manager, Sarnoff Corporation

Adjunct Faculty, IIIT-B



Course Overview

  • Introduction to vision

  • Case Studies of Applied Vision

    • Automotive Safety

    • Autonomous Navigation

    • Industrial Inspection

    • Medical Imaging

    • Entertainment

  • Image Formation

  • About Cameras

  • Image Processing

  • Geometric Vision

  • Camera Motion

  • Paper readings



Computer Graphics

[Diagram: Model → Synthetic Camera → Output Image]

(slides courtesy of Michael Cohen)



Computer Vision

[Diagram: Real Scene → Real Cameras → Output Model]

(slides courtesy of Michael Cohen)



Combined

[Diagram: Real Scene → Real Cameras → Model → Synthetic Camera → Output Image]

(slides courtesy of Michael Cohen)


The Vision Problem

How to infer salient properties of the 3-D world from its time-varying 2-D image projection.

  • What is salient?

  • How to deal with the loss of information in going from 3-D to 2-D?



Why study Computer Vision?

  • Images and movies are everywhere

  • Fast-growing collection of useful applications

    • building representations of the 3D world from pictures

    • automated surveillance (who’s doing what)

    • movie post-processing

    • face finding

  • Various deep and attractive scientific mysteries

    • how does object recognition work?

  • Greater understanding of human vision



Properties of Vision

  • One can “see the future”

    • Cricketers avoid being hit in the head

      • There’s a reflex: when the right eye sees something going left and the left eye sees something going right, move your head fast.

    • Gannets pull their wings back at the last moment

      • Gannets are diving birds; they must steer with their wings, but wings break unless pulled back at the moment of contact.

      • Area of target over rate of change of area gives time to contact.
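The quantity behind this is the classic "time to contact." A brief sketch of why the area ratio needs no knowledge of the target's true size or distance (the symbols S, Z, θ, and A below are my own, not the slide's):

```latex
% An object of size S at distance Z(t), approaching at speed \dot{Z} < 0,
% subtends an angle \theta(t) \approx S / Z(t), so
\tau \;=\; -\frac{Z}{\dot{Z}} \;=\; \frac{\theta}{\dot{\theta}},
\qquad\text{and since image area } A \propto \theta^{2},\qquad
\frac{A}{\dot{A}} \;=\; \frac{\tau}{2}.
```

Both A and its rate of change are measurable in the image alone, so the time to contact can be read off without ever estimating depth.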



Properties of Vision

  • 3D representations are easily constructed

    • There are many different cues.

    • Useful

      • to humans (avoid bumping into things; planning a grasp; etc.)

      • in computer vision (build models for movies).

    • Cues include

      • multiple views (motion, stereopsis)

      • texture

      • shading



Properties of Vision

  • People draw distinctions between what is seen

    • “Object recognition”

    • This could mean “is this a fish or a bicycle?”

    • It could mean “is this George Washington?”

    • It could mean “is this someone I know?”

    • It could mean “is this poisonous or not?”

    • It could mean “is this slippery or not?”

    • It could mean “will this support my weight?”

    • Great mystery

      • How to build programs that can draw useful distinctions based on image properties.



Part I: The Physics of Imaging

  • How images are formed

    • Cameras

      • What a camera does

      • How to tell where the camera was

    • Light

      • How to measure light

      • What light does at surfaces

      • How the brightness values we see in cameras are determined

    • Color

      • The underlying mechanisms of color

      • How to describe it and measure it



Part II: Early Vision in One Image

  • Representing small patches of image

    • For three reasons

      • We wish to establish correspondence between (say) points in different images, so we need to describe the neighborhood of the points

      • Sharp changes are important in practice --- known as “edges”

      • Representing texture by giving statistics of the different kinds of small patches present in the texture.

        • Tigers have lots of bars, few spots

        • Leopards are the other way



Representing an image patch

  • Filter outputs

    • essentially form a dot-product between a pattern and an image, while shifting the pattern across the image

    • strong response -> image locally looks like the pattern

    • e.g. derivatives measured by filtering with a kernel that looks like a big derivative (bright bar next to dark bar)
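A minimal sketch of this "sliding dot product" view of filtering, using plain NumPy; the toy image and the derivative-like kernel below are invented for illustration:

```python
# Filtering as a sliding dot product (cross-correlation), NumPy only.
# The image and kernel below are toy examples, not from the lecture.
import numpy as np

def filter2d(image, kernel):
    """Slide `kernel` over `image` and take a dot product at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + kh, x:x + kw]
            out[y, x] = np.sum(patch * kernel)   # dot product with the local patch
    return out

# A derivative-like kernel: a dark bar next to a bright bar (Sobel-style).
deriv = np.array([[-1.0, 0.0, 1.0],
                  [-2.0, 0.0, 2.0],
                  [-1.0, 0.0, 1.0]])

# Toy image: dark on the left, bright on the right.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

response = filter2d(image, deriv)
print(response)   # strong response where the image locally looks like the kernel
```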


[Figure: convolve this image with this kernel to get this filtered result]



Texture

  • Many objects are distinguished by their texture

    • Tigers, cheetahs, grass, trees

  • We represent texture with statistics of filter outputs (see the sketch after this list)

    • For tigers, bar filters at a coarse scale respond strongly

    • For cheetahs, spots at the same scale

    • For grass, long narrow bars

    • For the leaves of trees, extended spots

  • Objects with different textures can be segmented

  • The variation in textures is a cue to shape
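As noted above, a small sketch of texture description as statistics of filter responses; the two tiny filters and the synthetic striped and dotted patches are illustrative stand-ins, not the lecture's filter bank:

```python
# Texture as statistics of filter outputs: respond with a bar filter and a spot
# filter, then summarize each response by its mean absolute value.
import numpy as np
from scipy.signal import convolve2d

bar = np.array([[-1.0, 2.0, -1.0],
                [-1.0, 2.0, -1.0],
                [-1.0, 2.0, -1.0]])          # responds to thin vertical bars
spot = np.array([[-1.0, -1.0, -1.0],
                 [-1.0,  8.0, -1.0],
                 [-1.0, -1.0, -1.0]])        # responds to isolated spots

def texture_descriptor(patch):
    """A 2-D feature vector: mean |response| to the bar and spot filters."""
    return np.array([np.abs(convolve2d(patch, f, mode='valid')).mean()
                     for f in (bar, spot)])

striped = np.tile([0.0, 1.0], (16, 8))                 # stripe-like toy texture
dotted = np.zeros((16, 16)); dotted[::4, ::4] = 1.0    # dot-like toy texture

print("striped:", texture_descriptor(striped))
print("dotted: ", texture_descriptor(dotted))   # the two statistics differ clearly
```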



Part III: Early Vision in Multiple Images

  • The geometry of multiple views

    • Given where a point appears in camera 1 (or in cameras 1 and 2), where could it appear in camera 2 (or 3, etc.)?

  • Stereopsis

    • What we know about the world from having two eyes (the basic relation is sketched after this list)

  • Structure from motion

    • What we know about the world from having many eyes

      • or, more commonly, our eyes moving.
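For the stereo case, a sketch of the standard relation (the symbols f, B, d, and Z are assumptions, not from the slide): two rectified cameras with focal length f and baseline B image a point at depth Z at horizontal positions that differ by the disparity d, so

```latex
d \;=\; x_{\text{left}} - x_{\text{right}} \;=\; \frac{f\,B}{Z},
\qquad\text{hence}\qquad
Z \;=\; \frac{f\,B}{d}.
```

Nearby points have large disparity and distant points have small disparity, which is the quantitative sense in which a second view reveals depth.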



Part IV: Mid-Level Vision

  • Finding coherent structure so as to break the image or movie into big units

    • Segmentation:

      • Breaking images and videos into useful pieces

      • E.g. finding video sequences that correspond to one shot

      • E.g. finding image components that are coherent in internal appearance

    • Tracking:

      • Keeping track of a moving object through a long sequence of views



Part V: High Level Vision (Geometry)

  • The relations between object geometry and image geometry

    • Model based vision

      • find the position and orientation of known objects

    • Smooth surfaces and outlines

      • how the outline of a curved object is formed, and what it looks like

    • Aspect graphs

      • how the outline of a curved object moves around as you view it from different directions

    • Range data



Part VI: High Level Vision (Probabilistic)

  • Using classifiers and probability to recognize objects

    • Templates and classifiers

      • how to find objects that look the same from view to view with a classifier

    • Relations

      • break up objects into big, simple parts, find the parts with a classifier, and then reason about the relationships between the parts to find the object.

    • Geometric templates from spatial relations

      • extend this trick so that templates are formed from relations between much smaller parts



Applications: Factory Inspection

Cognex’s “CapInspect” system:

Low-level image analysis: Identify edges, regions

Mid-level: Distinguish “cap” from “no cap”

Estimation: What are the orientation of the cap and the height of the liquid?



Applications: Face Detection

courtesy of H. Rowley

How is this like the bottle problem on the previous slide?



Applications: Text Detection & Recognition

from J. Zhang et al.

Similar to face finding: Where is the text and what does it say?

Viewing at an angle complicates things...



Applications: MRI Interpretation

from W. Wells et al.

Coronal slice of brain

Segmented white matter



Detection and Recognition: How?

  • Build models of the appearance characteristics (color, texture, etc.) of all objects of interest

  • Detection: Look for areas of image with sufficiently similar appearance to a particular object

  • Recognition: Decide which of several objects is most similar to what we see

  • Segmentation: “Recognize” every pixel
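A minimal sketch of this appearance-model approach, assuming a plain intensity-histogram descriptor and nearest-model matching; the "cap" / "no cap" models echo the bottle example earlier, and every number here is invented:

```python
# Recognition as appearance similarity: describe a patch, compare against stored
# object models, report the closest (or nothing if none is similar enough).
# The histogram descriptor, model patches, and threshold are illustrative only.
import numpy as np

def descriptor(patch, bins=8):
    """Appearance descriptor: a normalized intensity histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

rng = np.random.default_rng(0)
models = {                                  # e.g. the bottle-cap example: "cap" is bright
    "cap":    descriptor(rng.uniform(0.7, 1.0, (16, 16))),
    "no cap": descriptor(rng.uniform(0.0, 0.3, (16, 16))),
}

def recognize(patch, models, threshold=0.5):
    """Detection = similarity above a threshold; recognition = the most similar model."""
    d = descriptor(patch)
    dists = {name: np.linalg.norm(d - m) for name, m in models.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] < threshold else None   # None = nothing detected here

test_patch = rng.uniform(0.7, 1.0, (16, 16))    # a bright, cap-like image region
print(recognize(test_patch, models))            # expected: "cap"
```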



Applications: Football First-Down Line

courtesy of Sportvision



Applications: Virtual Advertising

courtesy of Princeton Video Image



First-Down Line, Virtual Advertising: How?

  • Where should message go?

    • Sensors that measure pan, tilt, zoom and focus are attached to calibrated cameras at surveyed positions

    • Knowledge of the 3-D position of the line, advertising rectangle, etc. can be translated directly into where it should appear in the image for a given camera (sketched after this list)

  • What pixels get painted?

    • Objects such as the ball and players that occlude the region where the graphic is to be placed must be segmented out. They are recognized by being a sufficiently different color from the background at that point, which allows pixel-by-pixel compositing.
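As referenced above, a hedged sketch of translating a known 3-D position into an image location; the intrinsics, pose, and field point below are invented, and a real system would read them from the camera sensors and the survey:

```python
# Project a surveyed 3-D point (e.g., one end of the first-down line) into a
# calibrated camera to find where the graphic should be drawn.
import numpy as np

K = np.array([[1200.0,    0.0, 960.0],     # intrinsics: focal length, principal point
              [   0.0, 1200.0, 540.0],
              [   0.0,    0.0,   1.0]])

R = np.eye(3)                              # camera rotation (from pan/tilt encoders)
t = np.array([0.0, 2.0, 10.0])             # camera position (from the survey)

def project(X_world):
    """Pinhole projection: x = K (R X + t), then divide by the depth coordinate."""
    X_cam = R @ X_world + t                # world coordinates -> camera coordinates
    x = K @ X_cam
    return x[:2] / x[2]                    # pixel coordinates (u, v)

line_endpoint = np.array([3.0, 0.0, 25.0]) # a 3-D point on the field, in world units
print(project(line_endpoint))              # where this point lands in the image
```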



Applications: Inserting Computer Graphics with a Moving Camera

Opening titles from the movie “Panic Room”

How does motion complicate things?



Applications: Inserting Computer Graphics with a Moving Camera

courtesy of 2d3



CG Insertion with a Moving Camera: How?

  • This technique is often called matchmove

  • Once again, we need camera calibration, but also information on how the camera is moving—its egomotion. This allows the CG object to correctly move with the real scene, even if we don’t know the 3-D parameters of that scene.

  • Estimating camera motion:

    • Much simpler if we know camera is moving sideways (e.g., some of the “Panic Room” shots), because then the problem is only 2-D

    • For general motions: By identifying and following scene features over the entire length of the shot, we can solve retrospectively for what 3-D camera motion would be consistent with their 2-D image tracks. Must also make sure to ignore independently moving objects like cars and people.
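A hedged sketch of the general-motion case using OpenCV's two-view routines (cv2.findEssentialMat, cv2.recoverPose); the synthetic scene and camera motion below stand in for real feature tracks and are purely illustrative:

```python
# Estimate the camera motion between two frames from tracked feature positions,
# via the essential matrix. Synthetic, noise-free correspondences for illustration.
import numpy as np
import cv2

K = np.array([[800.0,   0.0, 320.0],       # assumed camera intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

rng = np.random.default_rng(0)
X = rng.uniform([-2.0, -2.0, 4.0], [2.0, 2.0, 10.0], size=(100, 3))   # static scene points
R_true, _ = cv2.Rodrigues(np.array([0.0, 0.05, 0.0]))                 # small pan
t_true = np.array([0.2, 0.0, 0.0])                                    # small sideways move

def project(points, R, t):
    """Pinhole projection of 3-D points into a camera with pose (R, t)."""
    cam = points @ R.T + t
    pix = cam @ K.T
    return pix[:, :2] / pix[:, 2:]

pts1 = project(X, np.eye(3), np.zeros(3))   # feature positions in frame 1
pts2 = project(X, R_true, t_true)           # the same features tracked into frame 2

E, _ = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
_, R_est, t_est, _ = cv2.recoverPose(E, pts1, pts2, K)
print(R_est)            # should be close to R_true
print(t_est.ravel())    # translation direction only: absolute scale is unrecoverable
```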



Applications: Rotoscoping

2d3’s Pixeldust



Applications: Motion Capture

Vicon software:

12 cameras, 41 markers for body capture;

6 zoom cameras, 30 markers for face



Applications: Motion Capture without Markers

courtesy of C. Bregler

What’s the difference between these two problems?



Motion Capture: How?

  • Similar to matchmove in that we follow features and estimate underlying motion that explains their tracks

  • Difference is that the motion is not of the camera but rather of the subject (though camera could be moving, too)

    • Face/arm/person has more degrees of freedom than camera flying through space, but still constrained

  • Special markers make feature identification and tracking considerably easier

  • Multiple cameras gather more information



Applications: Image-Based Modeling

courtesy of P. Debevec

Façade project: UC Berkeley Campanile



Image-Based Modeling: How?

  • 3-D model constructed from manually-selected line correspondences in images from multiple calibrated cameras

  • Novel views generated by texture-mapping selected images onto model



Applications: Robotics

Autonomous driving: Lane & vehicle tracking (with radar)



Why is Vision Interesting?

  • Psychology

    • ~ 50% of cerebral cortex is for vision.

    • Vision is how we experience the world.

  • Engineering

    • Want machines to interact with world.

    • Digital images are everywhere.



Vision is inferential: Light

(http://www-bcs.mit.edu/people/adelson/checkershadow_illusion.html)





Vision is Inferential: Geometry



Computer Vision

  • Inference → Computation

  • Building machines that see

  • Modeling biological perception



Boundary Detection: Local cues




Boundary Detection

http://www.robots.ox.ac.uk/~vdg/dynamics.html



Boundary Detection

Finding the Corpus Callosum

(G. Hamarneh, T. McInerney, D. Terzopoulos)



(Sharon, Galun, Brandt, Basri)



Texture

[Figure: a photo and the pattern repeated]



Texture

[Figure: a photo and a computer-generated texture]



Tracking

(Comaniciu and Meer)



Understanding Action



Tracking and Understanding

(www.brickstream.com)




Stereo

http://www.magiceye.com/





Motion

Courtesy Yiannis Aloimonos



Motion - Application

(www.realviz.com)



Pose Determination

Visually guided surgery



Recognition - Shading

Lighting affects appearance



Classification

(Funkhouser, Min, Kazhdan, Chen, Halderman, Dobkin, Jacobs)



Viola and Jones: Real time Face Detection



Modeling + Algorithms

  • Build a simple model of the world

    (e.g., flat, uniform intensity).

  • Find provably good algorithms.

  • Experiment on real world.

  • Update model.

    Problem: Too often models are simplistic or intractable.



Bayesian inference

  • Bayes’ law: P(A|B) = P(B|A)*P(A)/P(B).

  • P(world|image) =

    P(image|world)*P(world)/P(image)

  • P(image|world) is computer graphics

    • Geometry of projection.

    • Physics of light and reflection.

  • P(world) means modeling the objects in the world.

    Leads to statistical/learning approaches.

    Problem: Too often probabilities can’t be known and are invented.
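Restating the slide's relation in symbols, with the usual MAP reading (the P(image) term drops out of the maximization because it does not depend on the world):

```latex
P(\text{world} \mid \text{image})
  \;=\; \frac{P(\text{image} \mid \text{world})\; P(\text{world})}{P(\text{image})},
\qquad
\widehat{\text{world}}
  \;=\; \arg\max_{\text{world}}\; P(\text{image} \mid \text{world})\; P(\text{world}).
```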



Related Fields

  • Graphics. “Vision is inverse graphics”.

  • Visual perception.

  • Neuroscience.

  • AI

  • Learning

  • Math: e.g., geometry, stochastic processes.

  • Optimization.

