Review of Selected Surface Graphics Topics (1)

Presentation Transcript


  1. Review of Selected Surface Graphics Topics (1) Jian Huang, CS 594, Spring 2002

  2. Visible Light

  3. 3-Component Color • The de facto representation of color on screen displays is RGB • Some printers use CMY(K) • Why do 3 components suffice? • The full color spectrum cannot be represented by only 3 basis functions... • ...but the cone receptors in the human eye come in 3 types, roughly most sensitive to 430nm, 560nm, and 610nm, so 3 components are enough for what the eye can distinguish

  4. RGB • The de facto standard

  5. Raster Displays • Display synchronized with CRT sweep • Special memory for screen update • Pixels are the discrete elements displayed • Generally, updates are visible

  6. Polygon Mesh • A set of surface polygons that encloses an object's interior is called a polygon mesh • De facto choice: triangles, giving a triangle mesh

  7. Representing Polygon Mesh • Vertex coordinate list, polygon table, and (optionally) an edge table • Auxiliary data: • Per-vertex normals • Neighborhood information, organized with respect to vertices and edges
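
  As a concrete illustration, a minimal sketch of an indexed triangle mesh in C (this layout is an assumption for illustration, not a structure prescribed by the slides):

      /* Minimal indexed triangle mesh: vertex list + triangle index list. */
      typedef struct { float x, y, z; } Vec3;

      typedef struct {
          Vec3 *vertices;       /* vertex coordinate list             */
          Vec3 *normals;        /* optional per-vertex normals        */
          int  (*triangles)[3]; /* each triangle stores 3 vertex ids  */
          int   numVertices;
          int   numTriangles;
      } TriMesh;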

  8. Viewing Pipeline: Object Space → World Space → Eye Space → Clipping Space → Canonical View Volume → Screen Space • Object space: the coordinate space in which each component is defined • World space: all components are placed into the same 3D scene via affine transformations (the camera and lighting are defined in this space) • Eye space: the camera sits at the origin and the view direction coincides with the z axis; the hither and yon planes are perpendicular to the z axis • Clipping space: where clipping is done; every point is in homogeneous coordinates, i.e., represented as (x, y, z, w) • 3D image space (canonical view volume): a parallelepiped defined by (-1:1, -1:1, 0:1); objects in this space are distorted • Screen space: x and y are screen pixel coordinates

  9. Spaces: Example • Object space and world space • Eye space (figures)

  10. Spaces: Example (continued) • Clip space • Image space (figures)

  11. Homogeneous Coordinates • Matrix/Vector format for translation:
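
  In the standard 4×4 homogeneous form (the slide shows this matrix as an image):

      \mathbf{p}' = T(t_x, t_y, t_z)\,\mathbf{p},
      \qquad
      T(t_x, t_y, t_z) =
      \begin{bmatrix}
        1 & 0 & 0 & t_x \\
        0 & 1 & 0 & t_y \\
        0 & 0 & 1 & t_z \\
        0 & 0 & 0 & 1
      \end{bmatrix},
      \qquad
      \mathbf{p} = \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}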

  12. Translation in Homogeneous Coordinates • There exists an inverse mapping for each translation • There exists an identity mapping

  13. Why These Properties Are Important • When these conditions hold for a class of functions, it can be proven that the class is closed under composition • i.e., any series of translations can be composed into a single translation
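
  Written out for translations (standard identities; the slide shows these as images):

      T(\mathbf{a})\,T(\mathbf{b}) = T(\mathbf{a} + \mathbf{b}),
      \qquad
      T(\mathbf{t})^{-1} = T(-\mathbf{t}),
      \qquad
      T(\mathbf{0}) = I

  so any sequence of translations collapses into a single translation.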

  14. Rotation in Homogeneous Space The two properties still apply.
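
  For example, rotation about the z-axis in homogeneous coordinates (a standard matrix; the slide shows it as an image):

      R_z(\theta) =
      \begin{bmatrix}
        \cos\theta & -\sin\theta & 0 & 0 \\
        \sin\theta &  \cos\theta & 0 & 0 \\
        0 & 0 & 1 & 0 \\
        0 & 0 & 0 & 1
      \end{bmatrix},
      \qquad
      R_z(\theta)^{-1} = R_z(-\theta), \quad R_z(0) = I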

  15. Affine Transformation • Property: parallel lines are preserved • The coordinates of three corresponding points uniquely determine an affine transform (in 2D, for three non-collinear points)

  16. Affine Transformations • Translation • Rotation • Scaling • Shearing

  17. Affine Transformation in 3D • Translation • Rotation • Scaling • Shearing
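
  As examples, the standard homogeneous scale matrix and one of the possible shear matrices (a shear of x and y with respect to z); these are standard forms, not necessarily the exact ones on the original slide:

      S(s_x, s_y, s_z) =
      \begin{bmatrix}
        s_x & 0 & 0 & 0 \\
        0 & s_y & 0 & 0 \\
        0 & 0 & s_z & 0 \\
        0 & 0 & 0 & 1
      \end{bmatrix},
      \qquad
      H(a, b) =
      \begin{bmatrix}
        1 & 0 & a & 0 \\
        0 & 1 & b & 0 \\
        0 & 0 & 1 & 0 \\
        0 & 0 & 0 & 1
      \end{bmatrix}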

  18. Viewing • Object space to World space: affine transformation • World space to Eye space: how? • Eye space to Clipping space involves projection and viewing frustum

  19. Pinhole Model • Visibility cone with apex at the observer • Reduce the hole to a point: the cone becomes a ray • Pinhole: the focal point, eye point, or center of projection

  20. Perspective Projection and Pinhole Camera • The projection point sees anything on a ray through the pinhole F • A point W projects along the ray through F to appear at I (the intersection of WF with the image plane)

  21. Simple Perspective Camera • The camera looks along the z-axis • The focal point is the origin • The image plane is parallel to the xy-plane at distance d • d is called the focal length for historical reasons

  22. Similar Triangles • In the y-z plane, the point [y, z] projects to [(d/z)y, d] on the image plane • The situation is similar for the x-coordinate • By similar triangles, the point [x, y, z] projects to [(d/z)x, (d/z)y, d]
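
  Spelling out the similar-triangles step (with the image plane at z = d):

      \frac{y'}{d} = \frac{y}{z} \;\Rightarrow\; y' = \frac{d}{z}\,y,
      \qquad
      \frac{x'}{d} = \frac{x}{z} \;\Rightarrow\; x' = \frac{d}{z}\,x

  so the projected point is [(d/z)x, (d/z)y, d].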

  23. View Volume • Defines the visible region of space; the pyramid edges are clipping planes • Frustum: truncated pyramid with near and far clipping planes • Near (hither) plane: we don't care about anything behind the camera • Far (yon) plane: defines the field of interest and allows z to be scaled to a limited fixed-point range for z-buffering

  24. Why Do Clipping • Clipping is a visibility preprocess; in real-world scenes it can remove a substantial percentage of the environment from consideration • Clipping offers an important optimization

  25. Difficulty • It is difficult to do clipping directly in the viewing frustum

  26. Canonical View Volume • Normalize the viewing frustum to a cube, the canonical view volume • The perspective transformation converts the perspective frustum into an orthographic (parallel) view volume

  27. Perspective Transform • The equations: • α = yon / (yon − hither) • β = yon · hither / (hither − yon) • s: size of the window on the image plane • (figure: z′ plotted against z, with α and 1 marked on the z′ axis and hither and yon on the z axis)
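
  Filling in the step behind these constants (assuming the standard depth mapping z′ = α + β/z with z′ = 0 at the hither plane and z′ = 1 at the yon plane, which is consistent with the formulas above but not spelled out on the slide):

      z' = \alpha + \frac{\beta}{z},
      \qquad
      z'(\text{hither}) = 0, \quad z'(\text{yon}) = 1
      \;\Longrightarrow\;
      \alpha = \frac{\text{yon}}{\text{yon} - \text{hither}},
      \qquad
      \beta = \frac{\text{yon}\cdot\text{hither}}{\text{hither} - \text{yon}}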

  28. About Perspective Transform • Some properties

  29. Perspective + Projection Matrix • AR: aspect ratio correction, ResX/ResY • s = ResX • θ: half view angle, tan(θ) = s/d
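
  One common way to assemble these pieces into a single matrix, using the conventions above (tan θ = s/d, AR = ResX/ResY, and z′ = α + β/z); this is a sketch of one common arrangement, not necessarily the exact matrix on the original slide:

      M_{persp} =
      \begin{bmatrix}
        \frac{1}{\tan\theta} & 0 & 0 & 0 \\
        0 & \frac{AR}{\tan\theta} & 0 & 0 \\
        0 & 0 & \alpha & \beta \\
        0 & 0 & 1 & 0
      \end{bmatrix},
      \qquad
      M_{persp}
      \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
      =
      \begin{bmatrix} x/\tan\theta \\ AR\,y/\tan\theta \\ \alpha z + \beta \\ z \end{bmatrix}

  After dividing by w = z, points inside the frustum map to x, y in [−1, 1] and z′ in [0, 1].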

  30. Camera Control and Viewing • Focal length (d), image size/shape, and clipping planes are included in the perspective transformation • FOV: angle or field of view • AR: aspect ratio of the view-port • Hither, Yon: nearest and farthest vision limits (in world space) • Lookat: the center of interest (coi) • Lookfrom: the eye position • View angle: the FOV

  31. Complete Perspective • Specify near and far clipping planes: transform z between znear and zfar onto a fixed range • Specify the field-of-view (fov) angle • OpenGL's glFrustum and gluPerspective do these
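
  For reference, a minimal fixed-function OpenGL setup using these calls (the numeric values are placeholders for illustration, not taken from the slides):

      /* Sketch: set up a perspective projection with gluPerspective
         (glFrustum could be used equivalently). */
      #include <GL/gl.h>
      #include <GL/glu.h>

      void setupProjection(int width, int height)
      {
          glMatrixMode(GL_PROJECTION);
          glLoadIdentity();
          /* fov angle (degrees), aspect ratio, near (hither), far (yon) */
          gluPerspective(60.0, (double)width / (double)height, 0.1, 100.0);
          glMatrixMode(GL_MODELVIEW);
      }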

  32. More Viewing Parameters • Camera, eye, or observer: • lookfrom: location of the focal point or camera • lookat: point to be centered in the image • Camera orientation about the lookat-lookfrom axis: • vup: a vector pointing straight up in the image; this fixes the camera's roll (its orientation)
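
  These parameters map directly onto OpenGL's gluLookAt (lookfrom = eye, lookat = center, vup = up); a minimal sketch with placeholder values:

      #include <GL/gl.h>
      #include <GL/glu.h>

      void setupCamera(void)
      {
          glMatrixMode(GL_MODELVIEW);
          glLoadIdentity();
          gluLookAt(0.0, 0.0, 5.0,   /* lookfrom (eye position)    */
                    0.0, 0.0, 0.0,   /* lookat (center of interest) */
                    0.0, 1.0, 0.0);  /* vup (up direction)          */
      }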

  33. Point Clipping • (x, y) is inside the clip window iff xmin ≤ x ≤ xmax AND ymin ≤ y ≤ ymax

  34. One-Plane-at-a-Time Clipping (a.k.a. Sutherland-Hodgman Clipping) • The Sutherland-Hodgman triangle clipping algorithm uses a divide-and-conquer strategy • Clip a triangle against a single plane; each of the clipping planes is applied in succession to every triangle • The algorithm has minimal storage requirements and is well suited to pipelining • It is often used in hardware implementations

  35. Sutherland-Hodgman Polygon Clipping Algorithm • Subproblem: • clip a polygon (input: vertex list) against a single clip edge • output the vertex list(s) for the resulting clipped polygon(s) • Clip against all four window edges in succession • Generalizes to 3D (6 planes) • Generalizes to any convex clip polygon/polyhedron • Used in viewing transforms
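
  A sketch of the single-plane subproblem in C; inside() and intersect() stand in for the plane test and the edge/plane intersection and are assumptions for illustration, not code from the course:

      typedef struct { float x, y, z, w; } Vertex;

      /* Clip polygon in[0..nIn-1] against one plane; returns the new count.
         Apply once per clip plane, feeding each output back in as input. */
      int clipAgainstPlane(const Vertex *in, int nIn, Vertex *out,
                           int (*inside)(const Vertex *),
                           Vertex (*intersect)(const Vertex *, const Vertex *))
      {
          int i, nOut = 0;
          for (i = 0; i < nIn; i++) {
              const Vertex *cur  = &in[i];
              const Vertex *next = &in[(i + 1) % nIn];
              if (inside(cur)) {
                  out[nOut++] = *cur;                      /* keep current vertex       */
                  if (!inside(next))
                      out[nOut++] = intersect(cur, next);  /* leaving: add exit point   */
              } else if (inside(next)) {
                  out[nOut++] = intersect(cur, next);      /* entering: add entry point */
              }
          }
          return nOut;
      }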

  36. Polygon Clipping At Work

  37. With Pictures

  38. 4D Polygon Clip • Use the Sutherland-Hodgman algorithm • Use arrays for the input and output lists • There are six planes, of course!

  39. 4D Clipping • Point A is inside, point B is outside; clip edge AB: • x = Ax + t(Bx − Ax) • y = Ay + t(By − Ay) • z = Az + t(Bz − Az) • w = Aw + t(Bw − Aw) • Clip boundary: x/w = 1, i.e., x − w = 0: • w − x = (Aw − Ax) + t((Bw − Aw) − (Bx − Ax)) = 0 • Solve for t
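
  Carrying the algebra one step further (this just completes the "solve for t" the slide asks for):

      (A_w - A_x) + t\big[(B_w - B_x) - (A_w - A_x)\big] = 0
      \;\Longrightarrow\;
      t = \frac{A_w - A_x}{(A_w - A_x) - (B_w - B_x)}

  Substituting this t back into the parametric equations gives the clipped endpoint on the x/w = 1 boundary; the other five clip planes are handled the same way.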

  40. Why Homogeneous Clipping • Efficiency/uniformity: a single clip procedure is typically provided in hardware, optimized for the canonical view volume • The perspective-projection canonical view volume can be transformed into a parallel-projection view volume, so the same clipping procedure can be used • But for this, clipping must be done in homogeneous coordinates (not in 3D); some transformations can result in negative w, where 3D clipping would not work

  41. Hidden Lines and Surfaces

  42. Hidden Surfaces Removed

  43. Where Are We? • Canonical view volume (3D image space) • Clipping done • Division by w done • z > 0 • (figures: clipped lines shown in the x-y and x-z views of the canonical volume, with the image plane and near/far planes marked)

  44. Painter's Algorithm • Sort objects in depth order • Draw all from back to front (far to near) • Is it so simple?
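
  A sketch of the basic idea in C, sorting by a per-object depth key and drawing far to near; drawObject() and the choice of depth key are assumptions for illustration, and the sort breaks down exactly in the cycle/overlap cases discussed on the next slide:

      #include <stdlib.h>

      typedef struct { float depth; int id; } Object;  /* depth: e.g. farthest z */

      static int byDepthFarFirst(const void *a, const void *b)
      {
          float da = ((const Object *)a)->depth;
          float db = ((const Object *)b)->depth;
          return (da < db) - (da > db);   /* descending: far objects first */
      }

      void paintersAlgorithm(Object *objs, int n, void (*drawObject)(int id))
      {
          int i;
          qsort(objs, n, sizeof(Object), byDepthFarFirst);
          for (i = 0; i < n; i++)
              drawObject(objs[i].id);     /* back-to-front overwrite */
      }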

  45. 3D Cycles • How do we deal with cycles? • Deal with intersections • How do we sort objects that overlap in Z?

  46. Form of the Input • Object types: what kinds of objects does it handle? • convex vs. non-convex • polygons vs. everything else: smooth curves, non-continuous surfaces, volumetric data

  47. Form of the Output • Precision: image space or object space? • Object space: • geometry in, geometry out • independent of image resolution • followed by scan conversion • Image space: • geometry in, image out • visibility determined only at pixels

  48. Object Space Algorithms • Volume testing – Weiler-Atherton, etc. • input: convex polygons + infinite eye pt • output: visible portions of wireframe edges

  49. Image-space algorithms • Traditional Scan Conversion and Z-buffering • Hierarchical Scan Conversion and Z-buffering • input: any plane-sweepable/plane-boundable objects • preprocessing: none • output: a discrete image of the exact visible set
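
  A sketch of the per-pixel depth test at the heart of scan conversion with z-buffering; the buffer layout, resolution, and color type are assumptions for illustration, and smaller z is taken to mean nearer:

      #define WIDTH  640
      #define HEIGHT 480

      float        zbuffer[WIDTH * HEIGHT];      /* initialize to the far value (e.g. 1.0f) */
      unsigned int framebuffer[WIDTH * HEIGHT];  /* initialize to the background color       */

      /* Write the fragment only if it is nearer than the stored depth. */
      void writePixel(int x, int y, float z, unsigned int color)
      {
          int idx = y * WIDTH + x;
          if (z < zbuffer[idx]) {
              zbuffer[idx]     = z;
              framebuffer[idx] = color;
          }
      }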
