
Graphics



  1. Graphics: Graphics Fundamentals

Frame buffer and back buffer
Frame buffer: the area of memory where the screen image is stored. The computer can draw directly to it, but this means the user can see each frame being constructed.
Back buffer: the game would rather construct the frame in a non-visible area and show only the finished frame. Rendering is performed to this hidden buffer, the back buffer, and when the frame is finished, everything is displayed in one operation.
Two approaches to switch between the two buffers:
- Swap the references to the two buffers.
- Copy the back buffer's memory contents into the frame buffer's memory. Image transformations (e.g. format changes, anti-aliasing/filtering) can be done during the copy.

Visibility and the depth buffer
Ensure that only unobscured objects, or unobscured parts of objects, are visible in the finished scene.
Solution 1: do not render objects that are hidden, which also increases speed.
Solution 2: render objects in a specific sequence, ordered from far away ("back") to nearby ("front").
Solution 3: keep a second buffer (the Z-buffer) that stores depth values. For each pixel in the back buffer there is a corresponding pixel in the depth buffer, whose value indicates how far away the corresponding back-buffer pixel is. A sketch of the depth test follows.
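A minimal sketch of the depth-buffered write described above, assuming a software rasterizer; the names (BackBuffer, plotPixel) are hypothetical.

```cpp
#include <vector>
#include <cstdint>
#include <limits>

struct BackBuffer {
    int width, height;
    std::vector<uint32_t> color;  // one RGBA value per pixel
    std::vector<float>    depth;  // corresponding depth (Z) buffer

    BackBuffer(int w, int h)
        : width(w), height(h), color(w * h, 0),
          depth(w * h, std::numeric_limits<float>::max()) {}

    // Write the pixel only if it is nearer than what is already stored.
    void plotPixel(int x, int y, float z, uint32_t rgba) {
        int i = y * width + x;
        if (z < depth[i]) {   // depth test: smaller Z = closer to the camera
            depth[i] = z;     // remember the new nearest depth
            color[i] = rgba;  // overwrite the obscured color
        }
    }
};
```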

  2. Graphics: Graphics Fundamentals

Stencil buffer
A third buffer, paired with the back and depth buffers. The stencil buffer does not have a single clearly defined role; it holds an arbitrary value used to reject pixels from rendering. Example: 0/1 values marking pixels as non-visible/visible through an irregularly shaped open window.

Triangles
The most common primitive to render. Useful properties:
- the simplest primitive that describes a surface in space;
- it is simple to linearly interpolate values across them;
- they can be used to construct a number of higher-order primitives.

Vertices
A triangle is defined by connecting three points in space, called vertices. Each vertex also has a variety of properties that can be fed into the material to determine how that particular vertex, and the triangles that use it, are rendered. Example: the vertex at the corner of a cube belongs to three adjacent triangles, so its properties can include the normals of those three triangles. A sketch of a vertex structure follows.
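A minimal sketch of per-vertex properties; the exact field choice is an assumption, since real engines pack whatever the material's shaders need.

```cpp
struct Vector3 { float x, y, z; };
struct Vector2 { float u, v; };

struct Vertex {
    Vector3 position;  // location in object space
    Vector3 normal;    // surface normal used by lighting
    Vector2 texCoord;  // texture coordinate fed to the material
};

// A triangle is then simply three references to vertices.
struct Triangle { int v0, v1, v2; };
```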

  3. Graphics: Graphics Fundamentals

Coordinate spaces
A coordinate space is usually defined by specifying the position of its origin in the parent space as a three-dimensional vector, plus the directions of its three axes in the parent space. These four vectors are usually stored as the columns of a four-column matrix.
The most fundamental space is world space: where everything happens, where the game is set, and the space in which most coordinates are defined.
The next space is object space, in which the vertices of a model are usually defined. These vertices do not change as an object moves and rotates around the world; instead, the relationship between world space and object space changes.
When rendering an object, the first step is to transform each vertex from the object space in which it is defined into world space, so that the renderer knows where it is at that instant relative to other objects.
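A sketch of such a coordinate-space transform, assuming column-major 4x4 matrices laid out as described above: columns 0-2 hold the space's axes, column 3 its origin.

```cpp
#include <array>

struct Vec4 { float x, y, z, w; };
using Mat4 = std::array<Vec4, 4>;  // four column vectors

// Transform a point from object space into the parent (world) space.
Vec4 transform(const Mat4& m, const Vec4& p) {
    return {
        m[0].x * p.x + m[1].x * p.y + m[2].x * p.z + m[3].x * p.w,
        m[0].y * p.x + m[1].y * p.y + m[2].y * p.z + m[3].y * p.w,
        m[0].z * p.x + m[1].z * p.y + m[2].z * p.z + m[3].z * p.w,
        m[0].w * p.x + m[1].w * p.y + m[2].w * p.z + m[3].w * p.w,
    };
}
// An object-space vertex uses w = 1 so that the origin column (translation)
// is applied; a direction vector would use w = 0.
```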

  4. Graphics: Graphics Fundamentals

Coordinate spaces (continued)
Camera
A scene needs a camera, which also has a position and orientation in world space. To actually render a scene, objects must be transformed out of world space and into camera space, so that when the camera turns right, the objects on the screen move left.
Frustum
The region in front of the camera that is visible on the screen, defined by six planes: four planes form a pyramid with its tip at the camera, extending out into space, while the near plane and the far plane are parallel to the plane of the screen.
Clipping
Before being rendered, triangles are chopped into two parts: the part inside the frustum and the parts outside it. The parts outside are discarded, and only the part inside is rendered. Clipping is performed in a (warped) space called clip space.

  5. Graphics: Graphics Fundamentals

Coordinate spaces (continued)
Tangent (surface-local) space: a subspace of object space that follows the surface of a mesh; each triangle has its own tangent space.

Textures
A texture is a surface that holds fragments of data called texels. Each texel conventionally holds red, green, and blue values plus an alpha transparency channel (RGBA). Textures are typically 2D arrays of texels representing a picture that is mapped onto the object and used for shading.

Shaders
A shader is a small program used to determine either the shape or the color of a mesh.

Materials
A material is a description of how to render a triangle. It usually consists of shaders, associated textures, and properties taken from the vertices of the triangles. Materials can also include higher-level information such as multiple rendering passes.
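A minimal sketch of a material structure gathering those parts; all names are hypothetical.

```cpp
#include <vector>

struct Texture;        // 2D array of RGBA texels
struct Shader;         // small program run per vertex or per pixel

struct Material {
    Shader* vertexShader = nullptr;   // determines shape (transforms vertices)
    Shader* pixelShader  = nullptr;   // determines color
    std::vector<Texture*> textures;   // pictures mapped onto the surface
    std::vector<float>    properties; // numeric values fed to the shaders
    int renderPasses = 1;             // higher-level info, e.g. multi-pass
};
```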

  6. Graphics: Higher-Level Organization

Interaction between game and renderer
It is recommended that the game logic and the rendering engine be structured so that they can operate at different rates, to accommodate the variety of graphics hardware and CPUs. In real games, running the complex game logic at a single fixed rate while allowing only the graphics engine to run at different speeds vastly simplifies development.

Render objects
A render object, or sometimes just "object" in the context of rendering, is the renderable description of one game entity. It is usually composed of a single animation skeleton and one or more meshes that share the same skeleton or position in space. There is only one of each type of render object.

Render object instances
A render object instance represents a game object for rendering. Each instance points to a single render object that defines the drawing and shape common to all instances, and stores the information that is unique to the instance, e.g. position, orientation, animation state, and lighting.
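A sketch of the render object / instance split described above; the field names are hypothetical.

```cpp
#include <vector>

struct Skeleton; struct Mesh; struct Mat4x4;

struct RenderObject {            // shared by every instance: shape + drawing
    Skeleton* skeleton;          // single skeleton for all of its meshes
    std::vector<Mesh*> meshes;   // one or more meshes sharing that skeleton
};

struct RenderObjectInstance {    // one per game object to render
    RenderObject* object;        // points at the shared description
    Mat4x4* worldTransform;      // position and orientation, per instance
    struct AnimationState* anim; // current pose of the bones, per instance
    struct LightingContext* lighting; // per-instance lighting data
};
```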

  7. Graphics: Higher-Level Organization

Meshes
A mesh is a collection of triangles, the vertices those triangles use, and a single material used to render all the triangles. A render object may have multiple meshes, which allows it to represent objects with multiple materials. A mesh may share its skeleton and lighting context with other meshes in the same render object, but each will usually have a different material. The number of meshes in a render object heavily influences how fast the object can be rendered.
Example: the meshes of a render object for a person might be the face, hands, hair, clothes, and hat.

  8. Graphics: Higher-Level Organization

Meshes and skeletons
Each render object will typically have a single skeleton, which describes how the bones of that object are connected together. Each render object instance holds the animation state, which describes the current position of those bones. Both are orthogonal to meshes: a single skeleton and animation state can be used for rendering multiple meshes.

  9. Graphics: Higher-Level Organization

Render volume partitioning
Culling: the process of not drawing instances. In a large world, the rendering engine simply needs to avoid rendering the vast majority of instances to get reasonable performance.
Frustum culling: ignore any instance that is behind the camera, outside its frustum or field of view, or otherwise not visible in the frustum (a sphere test is sketched below).
Graph-based structures and algorithms identify visible, rather than non-visible, render object instances. Instances live in the nodes of a graph, and the graph is traversed starting from the node the camera is in and going outward until some limit of visibility is reached.
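A sketch of frustum culling with bounding spheres, assuming the six frustum planes are stored as (normal, d) with normals pointing inward.

```cpp
struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };   // n.p + d >= 0 means "inside"

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Returns false if the sphere is entirely outside any one plane, in which
// case the instance it bounds can be culled without further work.
bool sphereInFrustum(const Plane planes[6], const Vec3& center, float radius) {
    for (int i = 0; i < 6; ++i)
        if (dot(planes[i].n, center) + planes[i].d < -radius)
            return false;  // completely outside this plane: cull it
    return true;           // potentially visible: submit for rendering
}
```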

  10. Graphics: Higher-Level Organization

Portals
The scene is split into nodes, each occupying a given space, usually defined simply by the geometry it contains. Each node is joined to one or more other nodes by a "portal," usually represented by a planar convex polygon.
To find which nodes are visible, the renderer starts traversing the graph at the node the camera is in, and everything in that node that is inside the camera frustum is drawn. For each portal leading out of this node, the shape of the portal on the screen is found. If the portal is not inside the viewing frustum at all, it is ignored. Any portal that is inside the frustum, and therefore visible, marks the node on the other side as also visible, and the screen shape of the portal is remembered as the only part of that node that will be visible. If multiple portals open onto a given node (a common occurrence), the shapes are combined.
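A sketch of that recursive portal traversal. For simplicity the on-screen shape of a portal is approximated here by an axis-aligned rectangle; real engines use convex polygons. All names are hypothetical.

```cpp
#include <vector>
#include <optional>
#include <algorithm>

struct Rect { float x0, y0, x1, y1; };   // screen-space bounds

std::optional<Rect> intersect(const Rect& a, const Rect& b) {
    Rect r{ std::max(a.x0, b.x0), std::max(a.y0, b.y0),
            std::min(a.x1, b.x1), std::min(a.y1, b.y1) };
    if (r.x0 >= r.x1 || r.y0 >= r.y1) return std::nullopt;  // no overlap
    return r;
}

struct Node;
struct Portal { Node* destination; Rect screenShape; };
struct Node   { std::vector<Portal> portals; /* instances to draw ... */ };

void drawNodeContents(Node*, const Rect&) { /* submit instances here */ }

void renderFrom(Node* node, const Rect& visibleArea) {
    drawNodeContents(node, visibleArea);
    for (const Portal& p : node->portals) {
        // The next node is only visible through the part of the portal
        // that overlaps the area we can currently see.
        if (auto shape = intersect(visibleArea, p.screenShape))
            renderFrom(p.destination, *shape);
    }
}
```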

  11. Graphics: Higher-Level Organization

Binary Space Partitioning (BSP)
A BSP is a tree structure; the entire tree represents all of the game space. Each node of the tree represents a section of space that does not intersect with any of its sibling nodes and can be further subdivided into child nodes.
Leaf nodes have no children, represent a single area of space, and are marked as hollow or solid. Hollow leaf nodes have render object instances in them; if a hollow leaf node is visible, all the instances in it will be rendered.
A binary node has two children and a plane separating the two halves of the node.
Traverse the tree to find the leaf node that contains the camera, and from there determine whether adjacent nodes are visible.
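A sketch of locating the BSP leaf that contains the camera, following the structure described above; the names are hypothetical.

```cpp
struct Vec3f { float x, y, z; };

struct BspNode {
    // Splitting plane: dot(normal, p) + d >= 0 picks the "front" child.
    Vec3f normal; float d;
    BspNode* front = nullptr;   // both children null => this is a leaf
    BspNode* back  = nullptr;
    bool solid = false;         // leaves are marked hollow or solid
};

BspNode* findLeaf(BspNode* node, const Vec3f& p) {
    while (node->front != nullptr) {   // binary nodes always have 2 children
        float side = node->normal.x * p.x + node->normal.y * p.y
                   + node->normal.z * p.z + node->d;
        node = (side >= 0.0f) ? node->front : node->back;
    }
    return node;  // hollow leaves hold the render object instances
}
```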

  12. Graphics: Higher-Level Organization

Quadtrees (2D) and Octrees (3D)
In a quadtree, each node represents a square in space aligned with the x- and y-axes, and has either no children (a leaf node) or four equally sized children cutting the node into quarters.
Traversal algorithm, worked example: position (3,6) in binary is (0011, 0110). At each step, the most significant remaining bit of x and of y together select a child.
Step 1: child (00) = 0; coordinates now (0110, 1100).
Step 2: child (01) = 1; coordinates now (1100, 1000).
Step 3: child (11) = 3; coordinates now (1000, 0000).
Step 4: child (10) = 2, which is a leaf node, so this is the node the point is in.
Quadtrees are typically used for collision checking and fast frustum culling in outdoor environments. A given node can be checked to see whether it is in the visible frustum; if a parent node is not visible, none of its child nodes are visible.
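A sketch of the bit-pairing traversal worked through above: at each level the next most significant bit of x and of y select one of the four children (the x bit as the high bit of the child index, the y bit as the low bit).

```cpp
#include <cstdio>

int quadtreeChildPath(unsigned x, unsigned y, int levels) {
    for (int step = levels - 1; step >= 0; --step) {
        unsigned xBit = (x >> step) & 1;
        unsigned yBit = (y >> step) & 1;
        unsigned child = (xBit << 1) | yBit;   // 0..3, as in the example
        std::printf("child %u\n", child);
        // A real traversal would descend: node = node->children[child];
        // and stop early when a leaf node is reached.
    }
    return 0;
}

int main() {
    quadtreeChildPath(3, 6, 4);  // prints 0, 1, 3, 2 as in the worked example
}
```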

  13. Graphics: Higher-Level Organization

Potentially visible set (PVS)
Each node has a PVS, which is a list of links to other nodes that are potentially visible from that node. If the camera is in a node, the PVS lists all the nodes that need to be considered for drawing; nodes not in the PVS can be skipped entirely. What a PVS provides is an extremely quick way to reject nodes.

  14. Graphics: Types of Rendering Primitives

Triangle: 3 vertices
Line: 2 vertices
Point: 1 vertex
Quad: 4 vertices, or 2 triangles
Sequences of primitives can be strung together by supplying an explicit topology (list, strip, or fan).

  15. Graphics: Types of Rendering Primitives

A number of triangles of a mesh can be drawn using strips before the strip needs to be restarted.

  16. Graphics: Types of Rendering Primitives

Separate the vertices from the topology of the mesh and supply each separately, with the topology referring to the vertices rather than being specified by their ordering.
Vertex buffer: a list of vertices, numbered from zero upward.
Index buffer: the topology of the mesh (the triangles), a list of numbers each referring to the vertex with that array index. This list specifies which vertices each triangle uses, and may itself be specified as a strip, fan, or list topology.
Advantages: saves the memory that duplicated vertices would occupy, and makes use of vertex caching. A minimal example follows.
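A sketch of a vertex buffer plus index buffer for a quad. Four vertices are shared by two triangles, so no vertex is duplicated and the hardware vertex cache can reuse shading results.

```cpp
#include <vector>
#include <cstdint>

struct Vtx { float x, y, z; float u, v; };  // position + texture coordinate

std::vector<Vtx> vertexBuffer = {
    { -1.f, -1.f, 0.f, 0.f, 0.f },  // vertex 0: bottom-left
    {  1.f, -1.f, 0.f, 1.f, 0.f },  // vertex 1: bottom-right
    {  1.f,  1.f, 0.f, 1.f, 1.f },  // vertex 2: top-right
    { -1.f,  1.f, 0.f, 0.f, 1.f },  // vertex 3: top-left
};

// Triangle-list topology: every three indices make one triangle.
std::vector<uint16_t> indexBuffer = {
    0, 1, 2,   // first triangle
    0, 2, 3,   // second triangle reuses vertices 0 and 2
};
```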

  17. Graphics: Textures

The rendering primitive defines which pixels in the back buffer will be rendered to, but not what colors to render with. The most common way to specify the colors of a triangle's pixels is by mapping a texture onto it. The colors are read out of the texture at each pixel and used in the lighting algorithms to modify or specify the properties of the surface at that pixel.

Texture formats
A texture is simply an array of colors, each known as a texel, and the most common shape of the array is a two-dimensional rectangular grid. Textures are stored on disk in common image formats such as TGA, BMP, PNG, and JPEG. In most cases, a texture is simply a picture that is mapped onto the rendered mesh.
A texel has four values: red, green, blue, and alpha (RGBA). The RGB components/channels are the red, green, and blue components that make up an actual color, and the alpha channel typically represents an opacity value. The ordering of the four letters often represents their ordering in memory, and so may differ between platforms and formats. The meaning of the RGBA channel values is subject to the rendering shaders' interpretation.
A mipmap chain is a sequence of textures (each called a mipmap level), each roughly half the size in each dimension of the previous one, until the final mipmap is just one texel. Each mipmap holds the same "picture" as the previous one, but shrunk and filtered down to the smaller size.
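A sketch of enumerating a mipmap chain as described above: each dimension is halved (never dropping below one texel) until the final 1x1 level.

```cpp
#include <cstdio>

void printMipChain(int width, int height) {
    for (int level = 0; width > 1 || height > 1; ++level) {
        std::printf("level %d: %d x %d texels\n", level, width, height);
        width  = (width  > 1) ? width  / 2 : 1;  // halve, but keep >= 1
        height = (height > 1) ? height / 2 : 1;
    }
    std::printf("final level: 1 x 1\n");
}

int main() {
    printMipChain(256, 256);  // 256x256, 128x128, ..., 2x2, then 1x1
}
```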

  18. Graphics: Textures

Texture mapping
At rendering time, the hardware must know which texels to read to find the color for a given pixel on the screen.
Explicit mapping: each vertex of the mesh supplies a texture coordinate composed of one to three numbers u, v, and w. By convention, most rendering APIs map the range 0.0 to 1.0 to the entire texture, no matter how large the texture is in texels; e.g. (0.5, 0.5) is the middle of the texture.
Alternatively, the mapping can be computed by the shader pixel by pixel.
Wrap/clamp modes determine what happens when mapping beyond the 0.0-1.0 range: wrap, clamp, mirror, mirror-once (-1.0 to +1.0), or border color.

Texture filtering
To smooth the sharp edges of the texels, hardware generally picks the nearest few texels to the sampled point and blends them together smoothly, with the amount of blending depending on exactly how close to each texel center the sample is taken.
Bilinear filtering for magnification: each sample uses the nearest four texels to construct its color (a sketch follows).
For minification, mipmap chains are used.
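A sketch of bilinear filtering: blend the four nearest texels, weighted by how close the sample point is to each texel center. The tiny stand-in texture and the clamp addressing are assumptions for the sketch.

```cpp
#include <cmath>

struct Color { float r, g, b, a; };

const int W = 4, H = 4;
Color texels[H][W];  // a tiny stand-in texture

Color fetchTexel(int x, int y) {           // clamp addressing mode
    x = x < 0 ? 0 : (x >= W ? W - 1 : x);
    y = y < 0 ? 0 : (y >= H ? H - 1 : y);
    return texels[y][x];
}

Color lerp(Color a, Color b, float t) {
    return { a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t, a.a + (b.a - a.a) * t };
}

// u, v are in texel units, already offset so texel centers fall on integers.
Color sampleBilinear(float u, float v) {
    int   x0 = (int)std::floor(u), y0 = (int)std::floor(v);
    float fx = u - x0, fy = v - y0;        // fractional position in 2x2 block
    Color top = lerp(fetchTexel(x0, y0),     fetchTexel(x0 + 1, y0),     fx);
    Color bot = lerp(fetchTexel(x0, y0 + 1), fetchTexel(x0 + 1, y0 + 1), fx);
    return lerp(top, bot, fy);             // blend the two rows vertically
}
```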

  19. Graphics: Textures

Rendering to texture
The graphics hardware can generate a texture itself by rendering triangles to it. One indirect way to do this is to render to the back buffer and then copy the pixels to the texture's texels with the CPU. The most direct way is to have the graphics hardware render directly into the texture's memory. This redirects rendered primitives away from the back buffer so that they hit the texture instead, and is commonly referred to as changing the render target. After the texture has been rendered to, the game changes back to the back buffer and renders the scene as usual.
Rendering to textures is a powerful feature, allowing the graphics hardware to composite many partial renders together in interesting ways not normally possible when always rendering to the back buffer.
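A sketch of changing the render target, using OpenGL framebuffer objects as one concrete example; other APIs differ in names but not in shape. It assumes an OpenGL 3.0+ context with a function loader already set up.

```cpp
#include <glad/glad.h>  // or any loader exposing OpenGL 3.0+ entry points

void renderToTexture(GLuint texture, int width, int height) {
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);            // redirect rendering
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, texture, 0); // texture is the target
    glViewport(0, 0, width, height);

    // ... draw primitives here: they hit the texture, not the back buffer ...

    glBindFramebuffer(GL_FRAMEBUFFER, 0);  // switch back to the back buffer
    glDeleteFramebuffers(1, &fbo);
}
```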

  20. Graphics: Lighting

Lighting is an umbrella term for the processes of determining the amount and direction of light incident on a surface, how that light is absorbed, reemitted, and reflected from the surface, and which of those outgoing light rays eventually reach the eye. A renderer is only concerned with the rays that finally reach the eye.
Three major approaches to lighting:
Forward tracing: take every photon emitted by a light source, trace it through the environment, see what it hits, and decide whether it is reemitted and in which directions. Then ignore all photons that do not eventually hit the eye. Expensive and slow.
Backward tracing: trace a hypothetical photon that hits the eye backward from the eye in a particular direction, see which object it came from, then see the range of places it could have come from and what their color values would be (raytracing/radiosity). Expensive and slow.
Middle-out evaluation: "Given this bit of surface, how much light came from these sources of light and ended up hitting the eye?"

  21. Graphics: Lighting

Components
Data parts:
What lights are shining on the surface? This data is held by the scene in the form of various representations of lights and lighting environments.
How does the surface interact with the incoming (or incident) light? This data is held in the material structure of the mesh and the shader code it is composed of, together with data such as textures and various numeric material values.
What part of the result of these interactions is visible to the eye? The data required to resolve this is usually just the vector pointing from the point on the surface toward the camera.
Taken together, these three parts (the position of the eye or camera, the positions of the lights in the scene, and the material description) combine to determine the total lighting algorithm in the shaders used to render the instance.
Points in the pipeline where lighting occurs: the vertex shader and the pixel shader.

  22. Graphics: Lighting

Representation of the lighting environment
The game needs a representation of the lighting environment to be able to answer the question "What lights are shining on the surface?" for each mesh it renders.
The standard, physically based solution is to regard all lights in a scene as infinitely small points (point lights) giving off a certain number of photons per second at various wavelengths. Each light has an intensity or brightness, a position, and a color, which determine the amount of incident light on a surface from that single light (sketched below).
Representing multiple lights, i.e. an entire environment of lights at once:
Approach 1: store multiple lights in a list and process them individually, adding up their contributions.
Approach 2: the major lights that contribute most of the lighting are processed individually at high quality, and the rest are stored in some less precise way and processed as a whole. Indoor scenes: ambient light (a constant added to all calculations of incident lighting). Outdoor scenes: hemisphere light (blue sky).
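The transcript omits the slide's formula for incident light from a single point light, so this sketch assumes the common physically based inverse-square falloff.

```cpp
struct V3 { float x, y, z; };

V3 sub(V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Light intensity arriving at a surface point; per color channel the result
// would be multiplied by the light's color.
float incidentIntensity(V3 lightPos, float lightBrightness, V3 surfacePos) {
    V3 d = sub(lightPos, surfacePos);
    float distSq = d.x * d.x + d.y * d.y + d.z * d.z;  // assume > 0
    return lightBrightness / distSq;  // photons spread over a sphere: 1/r^2
}
```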

  23. Graphics: Lighting

Diffuse lighting (lighting interaction)
Models how light is absorbed by the material and then reemitted as new, changed photons equally in all directions from the surface.
Lambert lighting: see the sketch after this slide.
Normal map: a texture map that holds surface normal vectors, where the R, G, B channels of the texture are reinterpreted as arbitrary data rather than a color, namely as the x, y, z values of the surface normal vector.

Specular lighting (lighting interaction)
Photons in specular lighting bounce off the surface, and their exit direction is closely related to the incident direction. For light from a particular direction, more photons will bounce off the surface in some directions than in others. The rendering engine is interested in only one direction: the one the eye is in.
Blinn specular lighting: treat the surface as a collection of miniature conceptual microfacets, each facing in a random direction relative to the visible surface.
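The slide's own formulas are not in the transcript; this sketch assumes the standard Lambert diffuse and Blinn specular terms, with normalized vectors: N = surface normal, L = direction to the light, V = direction to the eye.

```cpp
#include <cmath>
#include <algorithm>

struct Vec { float x, y, z; };

float dot3(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec normalize(Vec v) {
    float len = std::sqrt(dot3(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Lambert: light reemitted equally in all directions, so brightness depends
// only on the angle between the surface normal and the light direction.
float lambertDiffuse(Vec N, Vec L) {
    return std::max(0.0f, dot3(N, L));
}

// Blinn: microfacets aligned with the half-vector H bounce light toward the
// eye; the exponent controls how tightly the facets cluster (shininess).
float blinnSpecular(Vec N, Vec L, Vec V, float shininess) {
    Vec H = normalize({ L.x + V.x, L.y + V.y, L.z + V.z });
    return std::pow(std::max(0.0f, dot3(N, H)), shininess);
}
```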

  24. Graphics: The Hardware-Rendering Pipeline

Input assembly
The renderer feeds many streams of data to the hardware: buffers of vertices, indices, values to be fed to the shaders in other ways, textures, and various other control data. The data is read, de-indexed as necessary, and assembled into primitives.

Vertex shading
The vertex shader's input data is read from the various buffers it lives in and fed to an instance of the vertex shader program. The vertex shader transforms the vertex's local-space position to clip space using a matrix transformation, then calculates any data required by the pixel shader for its shading.

Primitive assembly, culling, and clipping
Triangles have two sides: a clockwise side (the front) and a counterclockwise side (the back).
Backface culling: if the game has specified that only one side is to be visible, and that side is not facing the camera, the triangle is discarded and not rendered (a sketch of the test follows).
Frustum culling: the three vertices of the triangle are tested against the six planes of clip space, and if all three are outside any one of the planes, the triangle is quickly rejected. After this, the triangle is clipped to those six planes to remove the invisible parts.
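A sketch of the backface test once vertices reach screen space: the sign of the triangle's screen-space area tells which side faces the camera. The sign convention (which winding counts as front) is an assumption and varies by API.

```cpp
struct P2 { float x, y; };

// Positive for one winding (e.g. counterclockwise), negative for the other.
float signedArea(P2 a, P2 b, P2 c) {
    return 0.5f * ((b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y));
}

bool isBackFacing(P2 a, P2 b, P2 c) {
    return signedArea(a, b, c) <= 0.0f;  // cull if the back side faces us
}
```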

  25. Graphics: The Hardware-Rendering Pipeline

Projection, rasterization, and antialiasing
Triangles are projected from clip space to screen space (where pixels live) by dividing the clip-space x, y, and z values by the w value, applying some simple scale-and-biasing to fit the result to the screen rendering window (the viewport), and then rasterized (the projection step is sketched below). Rasterization is the process of finding which pixels and samples in the back buffer the triangle hits.

Samples and antialiasing
Each displayed pixel in the frame buffer may have many "samples" in the back buffer, each with a separate color and Z value, so the back buffer behaves much like a frame buffer with a much higher resolution. When the back buffer has finished rendering and is displayed, the multiple samples are combined (often using a complex filtering process) to create a single pixel. This allows triangle edges to be smoothed and appear less jagged.

Pixel shading
The various attributes calculated by the vertex shaders at each vertex are interpolated across the triangle, and a single value is given to each invocation of the pixel shader. Each pixel shader outputs one or more color values as its result, and may optionally also output a new depth value for the pixel.
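A sketch of the projection from clip space to screen space: the perspective divide by w, followed by a scale-and-bias into the viewport's pixel rectangle. The downward-flipped y convention is an assumption.

```cpp
struct Clip   { float x, y, z, w; };
struct Screen { float x, y, z; };

Screen toScreen(Clip c, float vpX, float vpY, float vpWidth, float vpHeight) {
    // Perspective divide: clip space -> normalized device coordinates (NDC),
    // where visible x and y lie in [-1, +1].
    float nx = c.x / c.w, ny = c.y / c.w, nz = c.z / c.w;

    Screen s;
    s.x = vpX + (nx * 0.5f + 0.5f) * vpWidth;            // [-1,1] -> pixels
    s.y = vpY + (1.0f - (ny * 0.5f + 0.5f)) * vpHeight;  // y grows downward
    s.z = nz;                                            // kept for depth test
    return s;
}
```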

  26. Graphics: The Hardware-Rendering Pipeline

Z, stencil, and alpha-blend operations
A new sample generated by the pixel shader is accepted or rejected by comparison with the corresponding values in the depth buffer and stencil buffer.
Actions taken with an accepted sample:
- possibly replace the existing depth value with the new one;
- possibly change the existing stencil value;
- blend the new sample's color with the existing one in the back buffer on a per-channel basis, typically driven by the alpha channel (sketched below).
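A sketch of the classic per-channel alpha blend ("source over destination") applied when a new sample is accepted; other blend modes exist, this is just the most common one.

```cpp
struct RGBA { float r, g, b, a; };

RGBA alphaBlend(RGBA src, RGBA dst) {
    float a = src.a;  // source alpha = opacity of the new sample
    return {
        src.r * a + dst.r * (1.0f - a),  // blend each channel toward the
        src.g * a + dst.g * (1.0f - a),  // new color by the alpha amount
        src.b * a + dst.b * (1.0f - a),
        a + dst.a * (1.0f - a),          // standard "over" alpha combine
    };
}
```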
