
Image Formation and Parameters (Sections 2.2.4, 2.3.1, 2.3.2, 2.4)



Presentation Transcript


  1. Image Formation and Parameters (Sections 2.2.4, 2.3.1, 2.3.2, 2.4) CS485/685 Computer Vision Prof. Bebis

  2. A simple model of image formation • The scene is illuminated by a single source. • The scene reflects radiation towards the camera. • The camera senses it via chemicals on film or photo-sensors (e.g., CCD cameras).

  3. Image formation process • There are two major parts in image formation: • The geometry of image formation, which determines where in the image plane the projection of a scene point will be located. • The physics of light, which determines the brightness of a point in the image plane as a function of illumination and surface properties.

  4. Camera geometry • The simplest device to form an image of a 3D scene on a 2D surface is the "pinhole" camera. • Rays of light pass through the pinhole (the center of projection) and form an inverted image of the object on the image plane; this is a perspective projection.

  5. Diffraction and pinhole optics • If we narrow the pinhole (very small aperture), only a small amount of light is let in. • When light passes through a small aperture, it does not travel in a straight line; it is scattered in many directions (diffraction, a quantum effect). • If we use a wide pinhole (large aperture), light from the source spreads across the image (i.e., it is not properly focused), making it blurry.

  6. Camera optics • In practice, a lens is used to duplicate the pinhole geometry without resorting to undesirably small apertures. • A lens placed in the aperture focuses the bundle of rays from each scene point onto the corresponding point in the image plane, producing sharp images. • Larger apertures allow many rays reflected by the same point to enter the camera, so focusing is needed.

  7. CCD cameras • An array of tiny solid-state cells converts light energy into electrical charge. • Manufactured on chips typically measuring about 1 cm x 1 cm.

  8. Human Eye • The eye functions much like a camera; its structure includes: • an aperture (i.e., the pupil), a lens, a mechanism for focusing, • and a surface for registering images.

  9. Human Eye (cont’d) • Light enters through the cornea at the front of the eyeball, travels through the lens, and falls on the retina.

  10. Human Eye (cont’d) • The retina contains light sensitive cells that convert light energy into electrical impulses that travel through nerves to the brain. • The brain interprets the electrical signals to form images.

  11. Human Eye (cont’d) • There are two kinds of light-sensitive cells in the eyes: rods and cones. • Cones are responsible for all color vision and are present throughout the retina, but are concentrated toward the center of the field of vision at the back of the retina. • There is a small pit called the fovea where almost all the light-sensing cells are cones. • This is the area where most “looking” occurs (the center of the visual field, where detail, color sensitivity, and resolution are highest).

  12. Human Eye (cont’d) • The rods are better able to detect movement and provide vision in dim light. • The rods are unable to discern color but are very sensitive at low light levels. • A large amount of light overwhelms them, and they take a long time to “reset” and adapt to the dark again. • It is estimated that once fully adapted to darkness, the rods are 10,000 times more sensitive to light than the cones, making them the primary receptors for night vision.

  13. CCD cameras (cont’d) • The output of a CCD array is a continuous electric signal (the video signal), generated by scanning the photo-sensors in a given order (e.g., line by line) and reading out their voltages. • The video signal is sent to an electronic device called a frame grabber. • The frame grabber digitizes the signal into a 2D rectangular N x M array of integer values, stored in the frame buffer.

  14. CCD array and frame buffer • In a CCD camera, the physical image plane is the CCD array, an n x m rectangular grid of photo-sensors. • The pixel image plane (frame buffer) is an array of N x M integer values (pixels).

  15. CCD array and frame buffer (cont’d) • The position of the same point on the image plane will be different if measured in CCD elements (x, y) or image pixels (xim, yim), assuming that the origin in both cases is the upper-left corner. • (x, y) is measured, for example, in millimeters; (xim, yim) is measured in pixels.
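As a rough numerical sketch of the relation above (the pixel sizes sx, sy are hypothetical values, with both origins at the upper-left corner):

```python
# Converting a point measured on the CCD in millimeters (x, y) to pixel
# coordinates (xim, yim) and back. sx, sy are assumed effective pixel
# sizes in millimeters (10 micrometers here, purely for illustration).

def mm_to_pixels(x_mm, y_mm, sx=0.01, sy=0.01):
    """Scale CCD-plane millimeters into pixel units."""
    return x_mm / sx, y_mm / sy

def pixels_to_mm(xim, yim, sx=0.01, sy=0.01):
    """Inverse conversion: pixel units back to millimeters."""
    return xim * sx, yim * sy

xim, yim = mm_to_pixels(1.5, 2.0)   # 1.5 mm right, 2.0 mm down
```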

  16. Reference Frames • Five reference frames are needed in general for 3D scene analysis. • Object • World • Camera • Image • Pixel

  17. (1) Object Coordinate Frame • 3D coordinate system • Useful for modeling objects (e.g., checking whether a particular hole is in the proper position relative to other holes). • Object coordinates do not change regardless of how the object is placed in the scene. Notation: (Xo, Yo, Zo)T

  18. (2) World Coordinate Frame • 3D coordinate system: (xw, yw, zw) • Useful for relating objects in 3D Notation: (Xw, Yw, Zw)T

  19. (3) Camera Coordinate Frame • 3D coordinate system: (xc, yc, zc) • Useful for representing objects with respect to the location of the camera. Notation: (Xc, Yc, Zc)T

  20. (4) Image Plane Coordinate Frame (i.e., CCD plane) • 2D coordinate system • Describes the coordinates of 3D points projected on the image plane. Notation: (x, y)T

  21. (5) Pixel Coordinate Frame • 2D coordinate system • Each pixel in this frame has integer pixel coordinates. Notation: (xim, yim)T

  22. Transformations between frames

  23. World and Camera coordinate systems • In general, the world and camera coordinate systems are not aligned.

  24. World and Camera coordinate systems (cont’d) • To simplify the derivation of the perspective projection equations, we will make the following assumptions: (1) the center of projection coincides with the origin of the world coordinate system. (2) the camera axis (i.e., optical axis) is aligned with the world’s z-axis.

  25. World and Camera coordinate systems (cont’d) (3) avoid image inversion by assuming that the image plane is in front of the center of projection. (4) the origin of the image plane is the principal point.

  26. Terminology • The model consists of a plane (the image plane) and a 3D point O (the center of projection). • The distance f between the image plane and the center of projection O is the focal length (e.g., the distance between the lens and the CCD array).

  27. Some terminology • The line through O and perpendicular to the image plane is the optical axis. • The intersection of the optical axis with the image plane is called the principal point or image center. Note: the principal point is not necessarily the image center!

  28. The equations of perspective projection • Under the assumptions above, a scene point P = (X, Y, Z)T projects to the image point p = (x, y)T given by: x = f X / Z, y = f Y / Z

  29. The equations of perspective projection (cont’d) • Using matrix notation (homogeneous coordinates):

      [x']   [1  0   0   0] [X]
      [y'] = [0  1   0   0] [Y]
      [z']   [0  0   1   0] [Z]
      [w ]   [0  0  1/f  0] [1]

  • Verify the correctness of the above matrix: homogenize using w = Z/f, which gives x = x'/w = f X/Z and y = y'/w = f Y/Z.
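The homogeneous form can be checked numerically; a minimal sketch, assuming f = 2 and an arbitrary scene point (both values chosen only for illustration):

```python
# Numerical check of the homogeneous perspective-projection matrix, with
# an assumed focal length f = 2 (same units as the scene coordinates).
f = 2.0
M = [[1.0, 0.0, 0.0,     0.0],
     [0.0, 1.0, 0.0,     0.0],
     [0.0, 0.0, 1.0,     0.0],
     [0.0, 0.0, 1.0 / f, 0.0]]

def project(P, M):
    """Apply the 4x4 matrix to homogeneous P = (X, Y, Z, 1), then homogenize."""
    xp, yp, zp, w = (sum(m * p for m, p in zip(row, P)) for row in M)
    return xp / w, yp / w          # x = f X / Z, y = f Y / Z

x, y = project([4.0, 6.0, 8.0, 1.0], M)
```

The result agrees with applying x = f X/Z, y = f Y/Z directly.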

  30. Properties of perspective projection • Many-to-one mapping • The projection of a point is not unique • Any point on the line OP has the same projection

  31. Properties of perspective projection (cont’d) • Scaling/Foreshortening • The distance to an object is inversely proportional to its image size.

  32. Properties of perspective projection (cont’d) • When a line (or surface) is parallel to the image plane, the effect of perspective projection is scaling. • When a line (or surface) is not parallel to the image plane, we use the term foreshortening to describe the effect of projective distortion.

  33. Properties of perspective projection (cont’d) • Effect of focal length • As f gets smaller, more points project onto the image plane (wide-angle camera). • As f gets larger, the field of view becomes smaller (more telescopic).
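The effect of focal length on the field of view can be sketched numerically; the sensor width and focal lengths below are assumed values, not from the slides:

```python
import math

# Horizontal field of view as a function of focal length, for an assumed
# sensor width of 10 mm. Smaller f -> wider view; larger f -> telescopic.
def horizontal_fov_deg(f_mm, sensor_width_mm=10.0):
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * f_mm)))

wide = horizontal_fov_deg(5.0)    # short focal length: wide angle
tele = horizontal_fov_deg(50.0)   # long focal length: narrow field of view
```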

  34. Properties of perspective projection (cont’d) • Lines, distances, angles • Lines in 3D project to lines in 2D. • Distances and angles are not preserved. • Parallel lines do not in general project to parallel lines due to foreshortening (unless they are parallel to the image plane).

  35. Properties of perspective projection (cont’d) • Vanishing point • Parallel lines in space project perspectively onto lines that, when extended, intersect at a single point in the image plane called the vanishing point or point at infinity. • The vanishing point of a line depends on the orientation of the line and not on its position. Warning: vanishing points might lie outside of the image plane!

  36. Properties of perspective projection (cont’d) • Alternative definition for vanishing point • The vanishing point of any given line in space is located at the point in the image where a parallel line through the center of projection intersects the image plane.
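The alternative definition lends itself to a direct computation: a parallel line through the center of projection with direction d = (dx, dy, dz) projects, in the limit, to (f dx/dz, f dy/dz). A minimal sketch, assuming f = 1:

```python
# Vanishing point of a family of parallel 3D lines with direction d,
# computed from the parallel line through the center of projection:
# points t*d project to (f*dx/dz, f*dy/dz) as t grows. Assumed f = 1.
def vanishing_point(d, f=1.0):
    dx, dy, dz = d
    if dz == 0:
        return None      # lines parallel to the image plane: no finite VP
    return (f * dx / dz, f * dy / dz)

vp = vanishing_point((1.0, 0.0, 1.0))
```

Note that scaling the direction vector leaves the result unchanged, matching the slide's claim that the vanishing point depends only on orientation, not position.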

  37. Properties of perspective projection (cont’d) • Vanishing line • The vanishing points of all the lines that lie on the same plane form the vanishing line. • It is also defined by the intersection of a parallel plane through the center of projection with the image plane.

  38. Orthographic Projection • The projection of a 3D object onto a plane by a set of parallel rays orthogonal to the image plane. • It is the limit of perspective projection as the focal length f goes to infinity.

  39. Orthographic Projection (cont’d) • Using matrix notation:

      [x']   [1 0 0 0] [X]
      [y'] = [0 1 0 0] [Y]
      [w ]   [0 0 0 1] [Z]
                       [1]

  • Verify the correctness of the above matrix (homogenize using w = 1): x = X, y = Y.

  40. Properties of orthographic projection • Parallel lines project to parallel lines. • Size does not change with distance from the camera.

  41. Weak perspective projection • Perspective projection is a non-linear transformation. • We can approximate perspective by scaled orthographic projection (i.e., a linear transformation) if: (1) the object lies close to the optical axis. (2) the object’s dimensions are small compared to its average distance from the camera.

  42. Weak perspective projection (cont’d) • The term f/Z̄, where Z̄ is the average distance of the object from the camera, is now a scale factor (e.g., every point is scaled by the same factor!). • Using matrix notation:

      [x']   [1 0 0  0  ] [X]
      [y'] = [0 1 0  0  ] [Y]
      [w ]   [0 0 0 Z̄/f ] [Z]
                          [1]

  • Verify the correctness of the above matrix: homogenize using w = Z̄/f, which gives x = f X/Z̄ and y = f Y/Z̄.
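A quick numerical check of how good the approximation is for a small, distant object (f = 1 and the points and average depth Z̄ = 100 are assumed values for illustration):

```python
# Comparing full perspective projection with the weak-perspective
# approximation, which scales every point by the same factor f / Zbar.
def perspective(P, f=1.0):
    X, Y, Z = P
    return (f * X / Z, f * Y / Z)

def weak_perspective(P, zbar, f=1.0):
    X, Y, Z = P
    s = f / zbar                    # common scale factor f / Zbar
    return (s * X, s * Y)

zbar = 100.0                        # assumed average depth of the object
points = [(1.0, 2.0, 99.0), (-1.0, 0.5, 101.0)]
errors = [max(abs(a - b) for a, b in zip(perspective(P), weak_perspective(P, zbar)))
          for P in points]          # small, since depths are close to zbar
```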

  43. What assumptions have we made so far? • The equations that we have derived so far are written in the camera reference frame. • These equations are valid only when: (1) all distances are measured in the camera’s reference frame. (2) the image coordinates have their origin at the principal point. • In general, the world and pixel coordinate systems are related by a set of physical parameters such as: the position and orientation of the camera, the focal length of the lens, the position of the principal point, and the size of the pixels.

  44. Types of parameters • Extrinsic camera parameters: the parameters that define the location and orientation of the camera reference frame with respect to a known world reference frame. • Intrinsic camera parameters: the parameters necessary to link the pixel coordinates of an image point with the corresponding coordinates in the camera reference frame

  45. Types of parameters

  46. Extrinsic camera parameters • Describe the transformation between the unknown camera reference frame and the known world reference frame. • Typically, determining these parameters means: (1) find the translation vector T between the origins of the two reference frames. (2) find the rotation matrix R that brings the corresponding axes of the two frames into alignment (i.e., onto each other).

  47. Extrinsic camera parameters (cont’d) • Using the extrinsic camera parameters, we can find the relation between the coordinates of a point P in world (Pw) and camera (Pc) coordinates: Pc = R (Pw - T)

  48. Extrinsic camera parameters (cont’d) or, in component form: Xc = R1T (Pw - T), Yc = R2T (Pw - T), Zc = R3T (Pw - T), where RiT corresponds to the i-th row of the rotation matrix R.
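A minimal sketch of Pc = R (Pw - T), using an assumed rotation of 90 degrees about the z-axis and an assumed translation (both hypothetical, for illustration only):

```python
import math

# Extrinsic transformation: Pc = R (Pw - T). Each row of R is Ri^T in
# the component form above. R rotates 90 degrees about the z-axis; T is
# the position of the camera-frame origin in world coordinates.
theta = math.pi / 2
R = [[ math.cos(theta), math.sin(theta), 0.0],
     [-math.sin(theta), math.cos(theta), 0.0],
     [ 0.0,             0.0,             1.0]]
T = [1.0, 0.0, 0.0]

def world_to_camera(Pw):
    d = [pw - t for pw, t in zip(Pw, T)]                       # Pw - T
    return [sum(r * x for r, x in zip(row, d)) for row in R]   # R (Pw - T)

Pc = world_to_camera([2.0, 0.0, 5.0])
```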

  49. Intrinsic camera parameters • Characterize the optical, geometric, and digital characteristics of the camera: (1) the perspective projection (focal length f). (2) the transformation between image plane coordinates and pixel coordinates. (3) the geometric distortion introduced by the optics. (1) From Camera Coordinates to Image Plane Coordinates (perspective projection): x = f Xc/Zc, y = f Yc/Zc

  50. Intrinsic camera parameters (cont’d) (2) From Image Plane Coordinates to Pixel Coordinates: x = -(xim - ox) sx, y = -(yim - oy) sy • (ox, oy) are the coordinates of the principal point (e.g., ox = N/2, oy = M/2 if the principal point is the center of the image). • sx, sy correspond to the effective size of the pixels in the horizontal and vertical directions (in millimeters).
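Chaining steps (1) and (2) gives the full map from camera coordinates to pixel coordinates. In this sketch, all numeric parameters (f, sx, sy, N, M) are assumed values for illustration, with the principal point taken at the image center:

```python
# From camera coordinates to pixel coordinates, combining the intrinsic
# parameters above. Hypothetical values: f = 8 mm, 10 um pixels, 640x480.
f = 0.008                 # focal length in meters (8 mm)
sx = sy = 1e-5            # effective pixel size in meters (10 um)
N, M = 640, 480
ox, oy = N / 2, M / 2     # principal point assumed at the image center

def camera_to_pixel(Pc):
    Xc, Yc, Zc = Pc
    x = f * Xc / Zc                  # (1) perspective projection
    y = f * Yc / Zc
    xim = -x / sx + ox               # (2) image plane -> pixel coordinates
    yim = -y / sy + oy               # (sign flip: pixel axes point the other way)
    return xim, yim

# A point on the optical axis lands on the principal point:
center = camera_to_pixel((0.0, 0.0, 1.0))
```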
