
OPTICS AND DEPTH BY NEEL KAMAT DEPARTMENT OF ELECTRICAL ENGINEERING UNIVERSITY OF TEXAS AT ARLINGTON



Presentation Transcript


  1. OPTICS AND DEPTH BY NEEL KAMAT, DEPARTMENT OF ELECTRICAL ENGINEERING, UNIVERSITY OF TEXAS AT ARLINGTON

  2. WHAT IS OPTICS? HOW IS IT RELATED TO COMPUTER VISION? Optics is the branch of physics that deals with light. Computer vision requires a thorough understanding of the process by which electromagnetic radiation is reflected by the surfaces of objects and finally measured by image sensors to produce image data.

  3. SIGNIFICANCE OF CAMERA The first step of any vision system is image acquisition. The role of the camera in the image acquisition system is analogous to that of the human eye.

  4. PIN HOLE CAMERA MODEL The simplest model of camera function. It consists of a light-proof box, some sort of film or translucent plate, and a pinhole. Principle: each point on the surface of an illuminated object reflects rays of light in all directions. The pinhole lets through only a narrow bundle of these rays, which hit the projection plane and produce an inverted image.

  5. PIN HOLE CAMERA MODEL (contd..)

  6. PIN HOLE CAMERA MODEL (contd…) f = distance between the image plane and the centre of projection; (x', y') = image coordinates; (x, y, z) = object coordinates. The projection equations are x' = f x / z and y' = f y / z.
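The perspective-projection equations can be sketched in a few lines of Python (a minimal illustration; the point coordinates and focal length below are made-up numbers, not from the slides):

```python
# Minimal sketch of the pinhole projection x' = f*x/z, y' = f*y/z
# (coordinates and focal length are illustrative).
def project(point, f):
    """Project a 3D point (x, y, z) onto the image plane at focal distance f."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must lie in front of the pinhole (z > 0)")
    return (f * x / z, f * y / z)

# Doubling the depth halves the image coordinates.
print(project((2.0, 1.0, 4.0), f=1.0))  # (0.5, 0.25)
print(project((2.0, 1.0, 8.0), f=1.0))  # (0.25, 0.125)
```

Note that the physical pinhole image is inverted; like many treatments, this sketch uses a virtual image plane in front of the pinhole so that the signs stay positive.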

  7. PIN HOLE CAMERA MODEL DRAWBACKS • The film must be exposed for a long time, since the pinhole admits only a little light. • Not suitable for moving objects. • Omits the effect of depth.

  8. DEPTH OF FIELD The distance between the nearest and farthest objects that appear acceptably sharp in a photograph. It depends on: 1. the size of the aperture, 2. the distance of the camera from the object, 3. the focal length of the lens.

  9. CIRCLE OF CONFUSION The yellow dot is in focus, so it appears sharp on the image plane. The smaller the circle of confusion, the sharper the image.
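The diameter of the circle of confusion follows from thin-lens geometry. A common approximation is c = A · f · |S2 − S1| / (S2 · (S1 − f)); a sketch with entirely illustrative numbers:

```python
def coc_diameter(A, f, s1, s2):
    """Thin-lens blur-circle diameter on the sensor (all lengths in mm here):
    A  = aperture diameter, f = focal length,
    s1 = distance the lens is focused at, s2 = actual distance of the point."""
    return A * f * abs(s2 - s1) / (s2 * (s1 - f))

# A point at the focus distance is perfectly sharp; blur grows away from it.
print(coc_diameter(A=25.0, f=50.0, s1=2000.0, s2=2000.0))      # 0.0
print(coc_diameter(A=25.0, f=50.0, s1=2000.0, s2=1000.0) > 0)  # True
```

The formula makes the three dependencies of depth of field visible: the blur shrinks with a smaller aperture A, a shorter focal length f, and a larger subject distance.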

  10. SOME BASIC DEFINITIONS 1. Disparity: the difference in location between the projections of the same object in the left and right images of a stereo pair. 2. Conjugate pair: a pair of points in two different images that are projections of the same scene point.

  11. STEREO IMAGING The most common method for extracting depth information from images. A pair of images is acquired using two cameras displaced from each other by a known distance.

  12. SIMPLE SETUP FOR STEREO IMAGING

  13. STEREO IMAGING (contd…) A scene point P is observed at points p_l and p_r in the left and right image planes, respectively. Comparing the similar triangles PMC_l and p_l L C_l, we get x'_l / f = (x + d/2) / z.

  14. STEREO IMAGING (contd…) Similarly, from the similar triangles PNC_r and p_r R C_r, we get x'_r / f = (x − d/2) / z. Combining the two equations gives the depth z = f d / (x'_l − x'_r).
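For parallel cameras the combined result is the standard triangulation formula z = f d / (x'_l − x'_r): depth is inversely proportional to disparity. A minimal sketch (focal length, baseline, and image coordinates are made-up numbers):

```python
def depth_from_disparity(f, d, xl, xr):
    """Depth of a scene point from a parallel stereo pair:
    z = f * d / (x'_l - x'_r), with camera separation d and focal length f."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("expected positive disparity for a point in front of the cameras")
    return f * d / disparity

# Halving the disparity doubles the estimated depth.
print(depth_from_disparity(f=8.0, d=100.0, xl=2.0, xr=1.0))  # 800.0
print(depth_from_disparity(f=8.0, d=100.0, xl=2.0, xr=1.5))  # 1600.0
```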

  15. PROBLEM WITH STEREO IMAGING Impractical for faraway objects: disparity is inversely proportional to depth, so distant points produce disparities too small to measure. Disparity is proportional to the camera separation d, but as the separation becomes large it becomes increasingly difficult to match corresponding points in the two images. This is known as the stereo correspondence problem.

  16. STEREO MATCHING Intensity-based approach – the matching process is applied directly to the intensity profiles of the two images. Feature-based approach – features are first extracted from the images, and the matching process is then applied to those features.

  17. INTENSITY-BASED STEREO MATCHING If the cameras are parallel, the epipolar lines coincide with the horizontal scanlines, so corresponding points in the two images must lie on the same scanline. The similarity between the one-dimensional intensity profiles of the two images suggests that matching can be posed as an optimization: Barnard, for example, defined an energy function combining an intensity-matching cost with a disparity-smoothness term.

  18. INTENSITY-BASED STEREO MATCHING (contd…) An alternative is the window-based approach: match only those regions in the images that are "interesting". After the interesting regions are detected, a simple correlation scheme is applied in the matching process; a match is assigned to regions that are highly correlated in the two images.
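A toy sketch of window-based matching on a single scanline, using a sum-of-squared-differences cost in place of correlation as the similarity score (the scanline values, window size, and disparity range are all invented for illustration):

```python
def ssd(a, b):
    """Sum of squared differences between two equal-length windows."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def match_window(left, right, x, half, max_disp):
    """Disparity of pixel x on the left scanline: slide a (2*half+1)-wide
    window along the right scanline and keep the lowest-cost shift."""
    win = left[x - half:x + half + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        xr = x - d
        if xr - half < 0:
            break
        cost = ssd(win, right[xr - half:xr + half + 1])
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# The bright [9, 5, 9] patch sits at x=4 on the left line, x=2 on the right.
left  = [0, 0, 0, 9, 5, 9, 0, 0, 0, 0]
right = [0, 9, 5, 9, 0, 0, 0, 0, 0, 0]
print(match_window(left, right, x=4, half=1, max_disp=4))  # 2
```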

  19. FEATURE-BASED STEREO MATCHING Edge elements – attributes usable for matching: coordinates (location in the image), local orientation (direction), and the local intensity profile on either side of the edge element. Corners – attributes: coordinates, and the type of junction the corner corresponds to. Line segments – attributes: coordinates of end-points and mid-points, and the orientation of the segment. Curve segments, circles, and ellipses can be matched through similar attributes.

  20. MATCHING CONSTRAINTS Similarity – for the intensity-based approach, the matching pixels must have similar intensity values; for the feature-based approach, the matching features must have similar attribute values. Uniqueness – almost always, a given pixel or feature from one image can match no more than one pixel or feature from the other image. Ordering – if m matches m' and n matches n', and m is to the left of n, then m' should also be to the left of n', and vice versa.

  21. COARSE-TO-FINE MULTIRESOLUTION MATCHING Generate a pair of image pyramids (hierarchies of progressively smaller images) from the original image pair, so that only a few prominent features are present at the coarse levels; the original images sit at the finest level. Start the matching process at the coarsest level, then use the matches obtained at each coarser level to guide the matching gradually up to the finest level.
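The coarse-to-fine strategy can be illustrated on one-dimensional scanlines. Everything below — the two-level pyramid, the signals, and the cost function — is invented for illustration; real implementations work on 2D images with more levels:

```python
def downsample(s):
    """Halve a scanline by averaging adjacent pairs (one pyramid level)."""
    return [(s[i] + s[i + 1]) / 2 for i in range(0, len(s) - 1, 2)]

def best_shift(a, b, candidates):
    """Among the candidate disparities, pick the shift d that minimises the
    mean squared difference between a and b shifted by d."""
    def cost(d):
        pairs = [(a[i], b[i - d]) for i in range(len(a)) if 0 <= i - d < len(b)]
        return sum((x - y) ** 2 for x, y in pairs) / len(pairs)
    return min(candidates, key=cost)

def coarse_to_fine(left, right, max_disp):
    # Coarse level: exhaustive search over the (halved) disparity range.
    d_coarse = best_shift(downsample(left), downsample(right),
                          range(max_disp // 2 + 1))
    # Fine level: refine only around twice the coarse estimate.
    guess = 2 * d_coarse
    cands = [d for d in (guess - 1, guess, guess + 1) if 0 <= d <= max_disp]
    return best_shift(left, right, cands)

# The right scanline is the left one shifted by a disparity of 3.
left  = [0, 0, 0, 0, 8, 8, 0, 0, 0, 0, 0, 0]
right = [0, 8, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print(coarse_to_fine(left, right, max_disp=3))  # 3
```

The payoff is that the fine level evaluates only three candidate disparities instead of the full range, which is what makes the multiresolution approach cheap on real images.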

  22. SHAPE FROM X The shape from X techniques estimate local surface orientation rather than absolute depth at each point. If the actual depth at one point is known, then the depth at other points on the same object can be computed by integrating the local surface orientation. Hence these methods are called indirect methods. Some of the methods: a. Photometric stereo b. Shape from shading c. Shape from texture d. Shape from focus

  23. PHOTOMETRIC STEREO Three images of the same scene are obtained using light sources from three different directions. Both the camera and the object must be stationary during image acquisition. Knowing the surface reflectance properties of the objects in the scene, the local surface orientation can be computed at points illuminated by all three light sources.
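With a Lambertian reflectance model, I_k = albedo · (L_k · n), the three measurements at a pixel form a 3×3 linear system for g = albedo · n. A hypothetical sketch (the axis-aligned light directions and intensity values are chosen only to keep the numbers transparent):

```python
import math

def solve3(M, b):
    """Solve the 3x3 linear system M g = b by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det(M)
    return [det([[b[i] if j == k else M[i][j] for j in range(3)]
                 for i in range(3)]) / D
            for k in range(3)]

def photometric_stereo(lights, intensities):
    """Recover albedo and unit normal at one pixel from three images,
    assuming a Lambertian surface: I_k = albedo * dot(L_k, n)."""
    g = solve3(lights, intensities)          # g = albedo * n
    albedo = math.sqrt(sum(v * v for v in g))
    return albedo, [v / albedo for v in g]

# With lights along the axes, only the third light (along the viewing
# direction) illuminates a normal that points straight at the camera.
albedo, normal = photometric_stereo([[1, 0, 0], [0, 1, 0], [0, 0, 1]],
                                    [0.0, 0.0, 0.5])
print(albedo, normal)  # 0.5 [0.0, 0.0, 1.0]
```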

  24. SHAPE FROM SHADING This method exploits changes in image intensity (shading) to recover surface-shape information, by computing the orientation of the scene surface corresponding to each point in the image.

  25. SHAPE FROM SHADING (contd…)

  26. SHAPE FROM TEXTURE Image-plane variations in texture properties such as density, size, and orientation are the cues exploited by shape-from-texture algorithms.

  27. SHAPE FROM FOCUS Because optical systems have a finite depth of field, only objects at the proper distance appear focused in the image; objects at other depths are blurred in proportion to their distance from the plane of focus. Shape-from-focus algorithms exploit this blurring effect.
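Shape-from-focus algorithms typically rank the frames of a focus stack by a local focus measure; one simple choice is the energy of the discrete Laplacian. A sketch on tiny invented images (real systems apply this per pixel across many frames and keep, at each pixel, the frame of best focus):

```python
def focus_measure(img):
    """Energy of the discrete Laplacian over the image interior:
    large for sharp detail, small for smooth (defocused) regions."""
    h, w = len(img), len(img[0])
    total = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            total += lap * lap
    return total

sharp   = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]   # a crisp bright dot
blurred = [[1, 2, 1], [2, 3, 2], [1, 2, 1]]   # the same dot, defocused
print(focus_measure(sharp) > focus_measure(blurred))  # True
```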

  28. THANK YOU
