
Computer Graphics: Programming, Problem Solving, and Visual Communication



Presentation Transcript


  1. Computer Graphics: Programming, Problem Solving, and Visual Communication Steve Cunningham California State University Stanislaus and Grinnell College PowerPoint Instructor’s Resource

  2. The Rendering Pipeline How the OpenGL system creates the image from your modeling

  3. A Different Perspective on Creating Images • Up to this point we have focused on the content of images, without much thought about how they are created • Rendering is the process of taking the descriptions you have provided and setting the appropriate pixels to the appropriate colors to create the actual image

  4. The Rendering Pipeline • Rendering is accomplished by starting with your modeling and applying a sequence of operations that become more and more detailed: a pipeline • This chapter is really about the steps in that pipeline: what they are and how they are done

  5. The Rendering Pipeline (2) • The basic steps in the pipeline are shown in the slide’s figure (not reproduced in this transcript) • You will recognize many of the pieces from earlier chapters

  6. The Rendering Pipeline (3) • The steps can happen in different ways • Software-only • On a graphics card • OpenGL only specifies the process, not the implementation

  7. Two Parts to the Pipeline • The first part of the pipeline is the Geometry Pipeline • This works on the geometry you define and takes it to screen space • The second part of the pipeline is the Rendering Pipeline • This takes the basic geometry in screen space and actually sets all pixels

  8. Model Space • The actual definition of your graphics objects happens in model space when you define • The vertices of your object - glVertex(…) • The way these vertices are grouped - glBegin(…) - glEnd()

  9. Model Space to World Space • The vertices you defined are transformed through the modeling transformation that is currently active, and the results are vertices in world space • Grouping information is passed along • Light position can be affected if this is defined within your modeling

  10. World Space to 3D Eye Space • The viewing transformation is applied to all points in world space in order to transform them into 3D eye space • In OpenGL, the modeling and viewing transformations are combined into the modelview transformation, and this is what is really applied • Grouping is passed along

  11. 3D Eye Space to Screen Space • This is performed by the projection transformation • Much more than geometry is done, however! • The glColor statement or lighting model give the point a color • The z-value in eye space is used to compute a depth • Clipping on the view volume is performed so only visible geometry is preserved • Grouping is passed along

  12. A Point in Screen Space • A point in screen space corresponds to a pixel, but it also has a number of properties that are needed for rendering • Position • Depth • Color - RGB[A] • Texture coordinates • Normal vector

  13. Rendering • To begin the rendering process, we have “pixels with properties” for each vertex of the geometry • The first step is to proceed from vertices to edges by computing the pixels that bound the graphics objects • The edges are determined by the grouping you defined as part of your modeling

  14. Computing Edges • Edges are computed by interpolation • Geometric interpolation, such as the Bresenham algorithm, is used to compute the coordinates of each pixel in the edge • There are rules about edge computation that keep any pixel from being included in two different edges and that exclude horizontal edges

  15. Computing Edges (2) • The interpolation is deeper than pixels • The geometric interpolation is extended to calculate the color, depth, texture coordinates, and normal for each edge pixel • If the projection used perspective, perspective correction also needs to be applied when interpolating depth and texture coordinates

  16. Result of Edge Computation • The result of the edge computation is a set of edges for each graphical object • Because OpenGL only works with convex objects, and because of the rules about including pixels in edges, for any horizontal line of pixels there are either zero or two edges that meet this line

  17. Fragments • If there are exactly two edges that meet a horizontal line of pixels, we need to determine the color of all pixels between the two edge pixels on the line • These pixels are called a fragment

  18. Fragments (2) • To determine the color of each pixel, • Interpolate from left to right on the line • For each pixel, • Calculate the depth • Calculate the color (interpolate the color or use the texture) • If depth testing is enabled, check the depth against the depth buffer • If masking is enabled, check against the mask • If the pixel passes the depth and mask tests • Perform any blending needed • Write the new color and depth to the color and depth buffers

  19. Some OpenGL Details • The overall OpenGL system model

  20. Some OpenGL Details (2) • Processing for texture maps

  21. Some OpenGL Details (3) • Detail of fragment processing

  22. Programmable Shaders • The OpenGL system model shown here uses a fixed-function pipeline where all the operations are already defined • This is being expanded to a pipeline that lets you replace some of the fixed functions with programs, called shaders, that you write yourself • These shaders can be applied at a few specific places in the pipeline

  23. Three Programmable Stages

  24. Geometry Shaders • Geometry shaders work as the primitives (vertices plus groupings) are defined • They will allow you to extend the original geometry with additional vertices or groups

  25. Vertex Shaders • Vertex shaders let you manipulate the individual vertices before they are passed to the rasterization stages

  26. Fragment Shaders • Fragment shaders let you manipulate individual pixels to apply new algorithms to color or select pixels

  27. Shaders Are New… • … and are beyond the scope of a beginning graphics course at this time • If you have a shader-capable graphics card and a shader-capable OpenGL system, it will be interesting for you to experiment with them once your OpenGL skills are solid • We suggest that you use Mike Bailey’s glman system as a learning tool
