
GR2 Advanced Computer Graphics AGR


Presentation Transcript


  1. GR2 Advanced Computer Graphics AGR. Lecture 8: Polygon Rendering

  2. The Story So Far • We now understand: • how to model objects as a set of polygonal facets and create a 3D world (Lectures 1 & 2) • how to view these worlds with a camera model, projecting the facets to 2D (Lectures 3 & 4) • how to calculate reflection off a surface (Lectures 5 & 6) • how to shade a single projected facet using the reflection calculation (Lecture 7) • Next step: rendering a set of facets

  3. First a Word on Normals • A polygon has two normals. • If the polygon is part of a solid object, one normal will face out and one will face in. • We need a way of distinguishing them.

  4. Surface Normals • Each polygon facet is considered to have an inside and an outside, and a single normal • This is determined by the order in which the vertices of the facet are specified: • look at the object from outside • if the polygon vertices (P1, P2, P3, P4 in the figure) are specified in anti-clockwise order, then the normal points from inside to outside
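
A minimal C++ sketch of this rule: with the vertices listed anti-clockwise as seen from outside, the cross product of two edge vectors gives the outward normal. The Vec3 type and function names here are illustrative, not part of the lecture notes.

    #include <cmath>

    struct Vec3 { double x, y, z; };

    Vec3 subtract(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

    Vec3 cross(const Vec3& a, const Vec3& b) {
        return { a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
    }

    // Outward normal of a facet whose first three vertices P1, P2, P3 are
    // given in anti-clockwise order when viewed from outside; by the
    // right-hand rule the cross product points from inside to outside.
    Vec3 facetNormal(const Vec3& p1, const Vec3& p2, const Vec3& p3) {
        Vec3 n = cross(subtract(p2, p1), subtract(p3, p1));
        double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        return { n.x / len, n.y / len, n.z / len };   // normalise to unit length
    }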

  5. Rendering Polygons • We are now ready to consider rendering a set of polygon facets • For efficiency, we only want to render those that are visible to the camera

  6. Back Face Culling • If the facets belong to a solid object (a polyhedron), we do not need to render back-facing polygons • In the example shown, only three facets need to be drawn: those that face towards the camera

  7. Back Face Culling • A polygon faces away from the viewer if the angle between the surface normal (N) and the viewing direction (V) is less than 90 degrees, ie if V.N > 0

  8. Back Face Culling • It is efficient to carry this out in the viewing coordinate system • camera on the z-axis pointing in the negative z-direction • so V = (0, 0, -1) • Thus the V.N > 0 test becomes a test only on the z-component of the normal vector: Nz < 0 • ie test whether the z-component of the normal points in the negative z-direction
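
As a sketch, the culling test in viewing coordinates then reduces to a single comparison (reusing the Vec3 type from the earlier sketch; the facet loop is only indicative):

    // Back-face test in viewing coordinates: the camera looks along the
    // negative z-axis, so V = (0, 0, -1) and V.N > 0 reduces to Nz < 0.
    bool isBackFacing(const Vec3& normal) {
        return normal.z < 0.0;      // equivalent to dot(V, N) > 0 with V = (0, 0, -1)
    }

    // Only front-facing facets are passed on to the rest of the renderer:
    // for (const Facet& f : facets)
    //     if (!isBackFacing(f.normal)) render(f);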

  9. Back Face Culling • Back face culling is an extremely important efficiency gain in rendering and is typically the first step in visibility processing • We are left with a set of front facing polygons...

  10. The Next Problem • Some facets will be obscured by others - we only want to draw (ie shade) the visible polygons

  11. Solution - Z Buffer Algorithm • Suppose polygons have been passed through the projection transformation, with the z-coordinate retained (ie the depth information), and suppose z is normalized to the range 0 to 1 • For each pixel (x,y) of the view plane window, we want to draw the polygon nearest the camera, ie the one with the largest z

  12. Z Buffer Algorithm • We require two buffers: • frame buffer to hold the colour of each pixel (in terms of RGB) ... typically 24 bits • z-buffer to hold depth information for each pixel ... typically 32 bits • Initialize: • frame buffer to the background colour of the scene: colour(x,y) = (I_red, I_green, I_blue)_background • z-buffer to zero (back clipping plane): depth(x,y) = 0

  13. Z Buffer Algorithm • As each polygon is scan converted and shaded using Gouraud or Phong shading: • calculate the depth z for each pixel (x,y) in the polygon • if z > depth(x,y), then set: depth(x,y) = z; colour(x,y) = (I_red, I_green, I_blue)_gouraud/phong • After all polygons are processed, the depth buffer contains the depth of the visible surfaces, and the frame buffer the colour of these surfaces
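
A minimal C++ sketch of these two slides, assuming depth is normalized to [0, 1] with 0 at the back clipping plane so that the largest z wins; the RGB, Fragment and ZBufferRenderer names, and the idea that scan conversion delivers one shaded fragment per pixel, are illustrative assumptions:

    #include <vector>

    struct RGB { unsigned char r, g, b; };                // 24-bit colour

    struct Fragment { int x, y; double z; RGB colour; };  // one shaded pixel of a polygon

    struct ZBufferRenderer {
        int width, height;
        std::vector<RGB>    frame;   // frame buffer: colour of each pixel
        std::vector<double> depth;   // z-buffer: depth of each pixel

        ZBufferRenderer(int w, int h, RGB background)
            : width(w), height(h),
              frame(w * h, background),   // initialize to the background colour
              depth(w * h, 0.0) {}        // initialize to the back clipping plane (z = 0)

        // Called for every pixel produced while scan converting a polygon,
        // with its Gouraud- or Phong-shaded colour already computed.
        void plot(const Fragment& f) {
            int i = f.y * width + f.x;
            if (f.z > depth[i]) {         // nearer the camera than what is stored
                depth[i] = f.z;
                frame[i] = f.colour;
            }
        }
    };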

  14. Z Buffer - Strengths and Weaknesses • A major advantage of the z-buffer algorithm is its simplicity • A weakness (of now decreasing importance) is the amount of memory required • Limited precision for depth calculations in complex scenes (perspective effect again a problem)

  15. Transparency • Polygons in practice may be opaque or semi-transparent • in OpenGL a = 1 represents opaque • Simple rendering: • render opaque polygons first, generating colour(x,y) • for each semi-transparent polygon (with opacity a), render into another buffer as polygon(x,y) • and combine using: (1 - a) * colour(x,y) + a * polygon(x,y)
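
The combine step is just a per-channel weighted sum; a small sketch (the Colour type and blend name are illustrative, with channel values in the range 0 to 1):

    struct Colour { double r, g, b; };

    // (1 - a) * colour + a * polygon, where a = 1 means fully opaque
    Colour blend(const Colour& colour, const Colour& polygon, double a) {
        return { (1.0 - a) * colour.r + a * polygon.r,
                 (1.0 - a) * colour.g + a * polygon.g,
                 (1.0 - a) * colour.b + a * polygon.b };
    }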

  16. Better Transparency • Better results by storing for each pixel the depth and transparency of each surface • Surfaces can then be composited back to front in order to give more accurate images
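
One way this back-to-front compositing might look for a single pixel, reusing the Colour type and blend function from the previous sketch; the Sample type, the per-pixel list of surfaces and the sort are assumptions for illustration:

    #include <algorithm>
    #include <vector>

    struct Sample { double z, a; Colour c; };   // depth, opacity and shaded colour of one surface

    Colour compositePixel(std::vector<Sample> samples, Colour background) {
        // Farthest surface first: smallest z is farthest under the
        // "largest z is nearest the camera" convention used above.
        std::sort(samples.begin(), samples.end(),
                  [](const Sample& s1, const Sample& s2) { return s1.z < s2.z; });
        Colour out = background;
        for (const Sample& s : samples)
            out = blend(out, s.c, s.a);         // composite each surface over the result so far
        return out;
    }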

  17. Shadows • Z buffers also give us a nice way of doing shadows • The z buffer is a way of determining what is visible to the camera • For shadows, we need a way of determining what is visible to the light source

  18. Shadow Z Buffer • We require a second z-buffer, called a shadow z-buffer • Two step algorithm: • the scene is ‘rendered’ with the light source as the viewpoint, with depth information stored in the shadow z-buffer (no need to calculate intensities) • the scene is rendered from the camera position, using Gouraud or Phong shading with a z-buffer algorithm ... but we need to adjust the colour if a point is in shadow

  19. Shadow Z Buffer • To determine if a point is in shadow: • take its position (x_O, y_O, z_O) in the camera view, and transform it into the corresponding position (x_O', y_O', z_O') in the light source view • look up the z value, say z_L, in the shadow z-buffer at the position (x_O', y_O') • if z_L is closer to the light than z_O', this means some object is nearer the light and therefore the point is in shadow ... in this case only the ambient reflection would be shown at that point
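
A sketch of this lookup, reusing the Vec3 type from the earlier sketch and the "largest z is nearest" convention from the z-buffer slides; toLightView, shadowDepth and the small bias are illustrative assumptions, not part of the lecture:

    // Assumed helpers: transform a camera-space point into the light's viewing
    // coordinates, and read the shadow z-buffer at a pixel position.
    Vec3   toLightView(const Vec3& pCamera);
    double shadowDepth(double x, double y);

    bool inShadow(const Vec3& pCamera) {
        Vec3 pLight = toLightView(pCamera);            // (x_O', y_O', z_O') in the light's view
        double zL   = shadowDepth(pLight.x, pLight.y); // depth stored during the light-source pass
        const double bias = 1e-3;                      // small tolerance against self-shadowing
        return zL > pLight.z + bias;                   // something else is nearer the light
    }
    // If inShadow() is true, only the ambient reflection is used at that point.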
