
CIS 736 Computer Graphics Lecture 24 of 42 Exam Review and Hardware Shading Monday, 13 March 2006



  1. CIS 736 Computer Graphics Lecture 24 of 42 Exam Review and Hardware Shading Monday, 13 March 2006 Reading: Hardware Rendering (shader_lecture) Adapted with permission from slides by Andy van Dam and Kevin Egan W. H. Hsu http://www.kddresearch.org

  2. Texturing • Nothing is more important than texture performance and quality. Textures are used for absolutely everything. • Fake shading • Fake detail • Fake effects • Fake geometry • Geometry is expensive – you gotta store it, transform it, light it, clip it… bah! • Use them in ways they aren't supposed to be used • An image is just an array after all • If it weren't for textures, we'd be stuck with big Gouraud-shaded polys! • Quick hardware texture review • Texture-coordinate interpolation is linear in 1/z, not in z (perspective-correct interpolation; see the sketch below)
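A minimal sketch (not from the original slides) of what "linear in 1/z" means in practice: across a scanline the rasterizer interpolates u/z, v/z, and 1/z linearly in screen space, then divides per pixel to recover perspective-correct texture coordinates. The Vert struct and interpolateSpan() name are illustrative assumptions.

    // Perspective-correct texture-coordinate interpolation across a span of pixels.
    struct Vert { float u, v, z; };   // texture coords and camera-space depth at a span endpoint

    void interpolateSpan(const Vert& a, const Vert& b, int steps) {   // assumes steps > 0
        for (int i = 0; i <= steps; ++i) {
            float t = float(i) / float(steps);
            // u/z, v/z, and 1/z are the quantities that really vary linearly in screen space
            float invZ   = (1 - t) * (1.0f / a.z) + t * (1.0f / b.z);
            float uOverZ = (1 - t) * (a.u / a.z)  + t * (b.u / b.z);
            float vOverZ = (1 - t) * (a.v / a.z)  + t * (b.v / b.z);
            float u = uOverZ / invZ;  // perspective-correct u at this pixel
            float v = vOverZ / invZ;  // perspective-correct v at this pixel
            // ... sample the texture at (u, v) ...
        }
    }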

  3. Multipass Rendering • In CS123, everything we've done has been in one pass, but in reality you won't get anywhere with that. • Multipass rendering gives you flexibility and better realism • An early version of Quake 3 did this: (1-4: Accumulate bump map) 5: Diffuse lighting 6: Base texture (7: Specular lighting) (8: Emissive lighting) (9: Volumetric effects) (10: Screen flashes) • Multitexturing is the most important part of multipass rendering (remember all of those texture regs?); a two-pass blending sketch follows below
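As a rough illustration of the multipass idea (not Quake 3's actual renderer), here is a two-pass sketch in fixed-function OpenGL: draw the scene with its base texture, then draw it again with a lightmap texture and multiplicative blending. drawScene(), baseTex, and lightMapTex are hypothetical.

    #include <GL/gl.h>

    void drawScene();   // hypothetical helper that issues the scene's textured geometry

    void renderTwoPass(GLuint baseTex, GLuint lightMapTex) {
        glEnable(GL_TEXTURE_2D);

        // Pass 1: base texture, no blending
        glDisable(GL_BLEND);
        glBindTexture(GL_TEXTURE_2D, baseTex);
        drawScene();

        // Pass 2: modulate the framebuffer by the lightmap (dst = dst * src)
        glEnable(GL_BLEND);
        glBlendFunc(GL_DST_COLOR, GL_ZERO);
        glDepthFunc(GL_EQUAL);              // only re-touch pixels laid down in pass 1
        glBindTexture(GL_TEXTURE_2D, lightMapTex);
        drawScene();

        glDepthFunc(GL_LESS);
        glDisable(GL_BLEND);
    }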

  4. Billboards • A billboard is a flat object that faces something • There are lots of different billboarding methods, but we'll stick with the easiest, most used one • Take a quad and slap a texture on it. Now we want it to face the camera. How do we do that? (Hint: you just did it in modeler; a sketch follows below) • Bread and butter of older 3D games and still used extensively today • Monsters (think Doom) • Items • Impostors (LOD) • Text • HUDs (sometimes) • Faked smoke, fire, explosions, particle effects, halos, etc. • #*$&ing lens flares • Bad news: Little to no shading
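One common, cheap way to make the quad face the camera (a screen-aligned billboard) is to cancel the rotation part of the modelview matrix before drawing it. This is only a sketch of that idea in fixed-function OpenGL; it assumes a column-major modelview with no scale worth preserving.

    #include <GL/gl.h>

    void beginBillboard() {
        float m[16];
        glGetFloatv(GL_MODELVIEW_MATRIX, m);
        // Overwrite the upper-left 3x3 (rotation) with the identity;
        // translation stays in m[12..14], so the quad keeps its position
        m[0] = 1; m[1] = 0; m[2]  = 0;
        m[4] = 0; m[5] = 1; m[6]  = 0;
        m[8] = 0; m[9] = 0; m[10] = 1;
        glPushMatrix();
        glLoadMatrixf(m);
        // ... draw the textured quad around the origin of this space ...
    }

    void endBillboard() {
        glPopMatrix();
    }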

  5. Aliasing when scaling up • Bilinear Filtering (a.k.a. Bilinear Interpolation) • Interpolate horizontally by the decimal part of u and vertically interpolate the horizontal components by the decimal part of v • x = floor(u) a = u - x • y = floor(v) b = v – y • T(u,v) = (1 – a)[(1 – b)T(x, y) + bT(x, y + 1)] • + a[(1 – b)T(x + 1, y) + bT(x + 1, y + 1)] • = (1 – a)(1 – b)T(x, y) + a(1 – b)T(x + 1, y) • + (1 – a)bT(x, y + 1) + abT(x + 1, y + 1) • This is essentially what you did in filter when scaling up • Hardware can do this almost for free and I can’t think of a card that doesn’t do it by default • Not so free in a software engine
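Here is the same bilinear formula as a small CPU-side function, roughly what the filter scaling code did. The Texture struct and texel() accessor are assumptions, and edge clamping is omitted for brevity.

    #include <cmath>

    struct Texture {
        int width, height;
        const float* data;                       // grayscale texels, row-major
        float texel(int x, int y) const { return data[y * width + x]; }
    };

    float sampleBilinear(const Texture& tex, float u, float v) {
        int   x = (int)std::floor(u);
        int   y = (int)std::floor(v);
        float a = u - x;                         // horizontal fraction
        float b = v - y;                         // vertical fraction
        return (1 - a) * (1 - b) * tex.texel(x,     y)
             +      a  * (1 - b) * tex.texel(x + 1, y)
             + (1 - a) *      b  * tex.texel(x,     y + 1)
             +      a  *      b  * tex.texel(x + 1, y + 1);
    }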

  6. Mipmapping • Mip = multum in parvo (many things in a small place) • Solves the aliasing problem when scaling down • It's possible for more than one texel to cover the area of a pixel (edges of objects, objects in the distance…). We could find all texels that fall under that pixel and blend them, but that's too much work • This problem causes temporal aliasing • Will bilinear filtering help? Will it solve the problem? • Solution: more samples per pixel or lower the frequency of the texture • Mipmapping solves the problem by taking the latter approach • Doing this in real time is too much work so we'll precompute • Take the original texture and repeatedly reduce its area to one quarter (halve each dimension) until we reach a 1 x 1 texture • Use a good filter and gamma correction when scaling • If we use a Gaussian filter, this is called a Gaussian pyramid • "Predict" how bad the aliasing is to determine which mipmap level to use (see the sketch below) • How much more memory are we using? • Can potentially increase texture performance (Lars story) • Cards do mipmapping and bilinear filtering by default. A good deal of console games don't do mipmapping. Why?
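Two small sketches of the mipmapping bookkeeping (illustrative only, not the exact hardware formula): picking a mip level from how many texels fall under a pixel, and counting how much extra memory the full chain costs.

    #include <algorithm>
    #include <cmath>

    // Choose a mip level: level 0 when a texel covers at least a pixel,
    // coarser levels as more texels crowd under a single pixel.
    int chooseMipLevel(float texelsPerPixel, int numLevels) {
        float level = std::log2(std::max(texelsPerPixel, 1.0f));
        return std::min((int)level, numLevels - 1);
    }

    // Total texels in a full mip chain: roughly 4/3 of the base texture,
    // i.e. mipmapping costs about one third more texture memory.
    size_t mipChainTexels(int w, int h) {
        size_t total = 0;
        for (;;) {
            total += (size_t)w * h;
            if (w == 1 && h == 1) break;
            w = std::max(1, w / 2);
            h = std::max(1, h / 2);
        }
        return total;
    }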

  7. Problem Solved…

  8. We're good. A little too good. • We got rid of aliasing, but now everything is too blurry! • Let's take more samples: bilinearly filter the two mipmap levels that bracket the ideal level of detail and blend the results => Trilinear Filtering (see the sketch below) • Trilinear filtering makes it look a little better but we're still missing something… If we're going to take even more samples we better be taking them correctly. • Key observation: suppose we take a pixel and backwards map it onto a texture. Is the pixel always a nice little square* with sides parallel to the texture walls? NO! • Bilinear and trilinear filtering are isotropic because they sample the same distance in all directions. • Now we're going to sample more where it is actually needed • * Of course, a pixel is NOT a tiny little square. But let's suppose it is…
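A sketch of trilinear filtering that builds on the sampleBilinear()/Texture sketch from the bilinear slide (those names are assumptions, as is having (u, v) given in level-0 texel coordinates): bilinearly filter the two mip levels bracketing the desired level of detail, then blend by the fractional part.

    float sampleTrilinear(const Texture* mipChain, int numLevels,
                          float u, float v, float lod) {
        lod = std::max(lod, 0.0f);
        int   level0 = std::min((int)std::floor(lod), numLevels - 1);
        int   level1 = std::min(level0 + 1, numLevels - 1);
        float frac   = lod - std::floor(lod);
        // texel coordinates shrink by 2 with each coarser level
        float c0 = sampleBilinear(mipChain[level0], u / (1 << level0), v / (1 << level0));
        float c1 = sampleBilinear(mipChain[level1], u / (1 << level1), v / (1 << level1));
        return (1 - frac) * c0 + frac * c1;      // blend between the two mip levels
    }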

  9. Anisotropic Filtering • Anisotropic = not isotropic (surprise). Also called aniso or AF for short. • There are a couple of aniso algorithms that don’t use mipmapping but our cards already do mipmapping really well so we’ll build off of that. • When the pixel is backwards mapped, the longest side of the quad determines the line of anisotropy and we’ll take a hojillion samples along that line across mipmaps. • Aniso and FSAA are the two big features of today’s modern cards • ATI and NVIDIA have different algorithms that they guard secretively and continue to improve/screw up • We could be taking up to 128 samples per pixel! This takes serious bandwidth. This is orders of magnitude more costly than bilinear (4) or trilinear (6) filtering. • Pictures!

  10. Aniso Rules (1/3) • www.richleader.com

  11. Aniso Rules (2/3) • Serious Sam extremetech.com

  12. Aniso Rules (3/3) • Serious Sam extremetech.com

  13. Texture Generation • Who needs artists? • Procedural Texturing • Use a random procedure to generate the colors • Perlin noise (not just for color) • Good for wood, marble, water, fire… • Unreal Tournament did it quite a bit • Texture Synthesis • No games use this to my knowledge • Efros & Leung, Ashikhmin use neighborhood statistics • Cohen (Siggraph 2003) has a much faster tile based method
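To make "use a random procedure to generate the colors" concrete, here is a small value-noise sketch (simpler than true Perlin gradient noise) that sums a few octaves and feeds the result into a sine to get a marble-like stripe pattern. All names and constants are illustrative.

    #include <cmath>

    // Deterministic pseudo-random value in [0, 1) for a lattice point.
    float latticeHash(int x, int y) {
        unsigned n = (unsigned)x * 374761393u + (unsigned)y * 668265263u;
        n = (n ^ (n >> 13)) * 1274126177u;
        return (n & 0xffffff) / float(0x1000000);
    }

    // Smoothly interpolated lattice noise at (x, y).
    float smoothNoise(float x, float y) {
        int   xi = (int)std::floor(x), yi = (int)std::floor(y);
        float fx = x - xi, fy = y - yi;
        fx = fx * fx * (3 - 2 * fx);             // smoothstep the fractions
        fy = fy * fy * (3 - 2 * fy);
        float a = latticeHash(xi, yi),     b = latticeHash(xi + 1, yi);
        float c = latticeHash(xi, yi + 1), d = latticeHash(xi + 1, yi + 1);
        return (a * (1 - fx) + b * fx) * (1 - fy) + (c * (1 - fx) + d * fx) * fy;
    }

    // Marble-ish intensity in [0, 1]: octaves of noise perturb a sine stripe.
    float marble(float x, float y) {
        float turbulence = 0, amp = 1, freq = 1;
        for (int octave = 0; octave < 4; ++octave) {
            turbulence += amp * smoothNoise(x * freq, y * freq);
            amp *= 0.5f; freq *= 2.0f;
        }
        return 0.5f * (1 + std::sin(x * 8 + turbulence * 4));
    }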

  14. Summary Points • Solid Modeling: Overview • Data structures • Boundary representations (B-reps): last time • Spatial partitioning representations: today • Algorithms • Construction (composition) • Intersection, point classification • Know: difference between B-reps and spatial partitioning; pros and cons • Spatial Partitioning (Review Guide) • Cell decomposition – know how to obtain for composite object (simple primitives) • Planar and spatial occupancy • Simple: uniform subdivision (grid / pixel, volumetric / voxel) • Hierarchical: quadtrees and octrees – know how to obtain for 2D, 3D scenes • Binary Space Partitioning (BSP) trees – know how to obtain for simple 2D object • Constructive Solid Geometry (CSG) – know typical primitives, how to combine • Next Class: Color Models; Visible Surface Data Structures

  15. Programmable Hardware • [Images: Doom III, Research, Halo 2, Jet Set Radio Future]

  16. History • 1992 - id's Wolfenstein 3D video game rocks gaming world, all objects are billboards (flat planes) and rendered in software • 1996 - id's Quake introduces a full 3D polygonal game; lighting vertices and shading pixels is still done in software • 1996 - 3Dfx Voodoo graphics card released, does shading operations (such as texturing) in hardware. QuakeWorld brings hardware acceleration to Quake • 1999 - GeForce 256 graphics card released; now transform and lighting (T&L) of vertices is done in hardware as well (uses the fixed function pipeline) • 2001 - GeForce 3 graphics card lets programmers download assembly programs to control vertex lighting and pixel shading, keeping the speed of the fixed function pipeline with none of the restrictions • Future - Expanded features and high-level APIs for vertex and pixel shaders, increased use of lighting effects such as bump mapping and shadowing, higher resolution color values. Doom III and Half-Life 2 usher in a new era of realism

  17. Fixed Function Pipeline • Starting in 1999 some graphics cards began to do the standard lighting model and transformations in hardware (T&L). CPUs everywhere sighed in relief. • Hardware T&L existed in the 60s and 70s, it was just really slow and really expensive. • Implementing the pipeline in hardware made processing polygons much faster, but the developer could not modify the pipeline (hence “fixed function pipeline”). The fixed function pipeline dates back to the first SGI workstations. • New programmable hardware allows programmers to write vertex and pixel programs to change the pipeline • Vertex and pixel programs aren’t necessarily slower than the fixed function alternative • Note that the common term “vertex shader” to describe a vertex program is misleading: vertices are lit and pixels are shaded

  18. A Quick Review • By default, GL will do the following: • Take as input various per-vertex quantities (color, light source, eye point, texture coordinates, etc.) • Calculate a final color for each vertex using a basic lighting model (OpenGL uses Phong lighting) • For each pixel, linearly interpolate the three surrounding vertex colors to shade the pixel (OpenGL uses Gouraud shading) • Write the pixel color value to the frame buffer
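For reference, a minimal fixed-function OpenGL setup matching the default path described above: enable per-vertex lighting and smooth (Gouraud) shading, then let the pipeline light vertices and interpolate their colors. The specific light and material values are just placeholders.

    #include <GL/gl.h>

    void setupFixedFunctionLighting() {
        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);
        glShadeModel(GL_SMOOTH);                 // Gouraud-interpolate the vertex colors

        const GLfloat lightPos[]  = { 1.0f, 1.0f, 1.0f, 0.0f };   // directional light
        const GLfloat lightDiff[] = { 1.0f, 1.0f, 1.0f, 1.0f };
        glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
        glLightfv(GL_LIGHT0, GL_DIFFUSE,  lightDiff);

        const GLfloat matDiff[] = { 0.8f, 0.2f, 0.2f, 1.0f };
        glMaterialfv(GL_FRONT, GL_DIFFUSE, matDiff);
        // vertices submitted after this are lit per vertex by the fixed pipeline
    }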

  19. Programmable Hardware Pipeline • [Pipeline diagram: 1 unlit model space vertex → Standard T&L or Vertex Program → 1 lit clip space vertex → Backface Culling → Frustum Clipping → 1 un-colored pixel → Standard Shading or Pixel Shader → 1 colored pixel → Depth Test → Store Pixel] • clip space refers to the space of the canonical view volume • New graphics cards can use either the fixed function pipeline or vertex/pixel programs

  20. Example: Cartoon Shading • Cartoon shading is a cheap and neat-looking effect used in video games such as Jet Set Radio Future • Instead of using traditional methods to light a vertex, use the dot product of the light vector and the normal of the vertex to index into a 1 dimensional "texture" (A texture is simply a lookup function for colors – nothing more and nothing less) • Instead of a smooth transition from low intensity light (small dot product) to high intensity light (large dot product), make the 1 dimensional texture have sharp transitions (a small lookup-table sketch follows below) • Textures aren't just for "wrapping" 2D images on 3D geometry! • Voila! Cartoon Teapot • [Diagram: normal vector and light vector; the 1 dimensional texture is indexed by the dot product, from 0.0 to 1.0]
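A tiny CPU-side sketch of the toon lookup described above: clamp the dot product of the normal and light vectors to [0, 1] and use it to index a 1D table with hard steps instead of a smooth ramp. The band values and 4-entry table are illustrative assumptions.

    #include <algorithm>

    struct Vec3 { float x, y, z; };

    float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    float toonIntensity(const Vec3& normal, const Vec3& toLight) {
        static const float kBands[4] = { 0.2f, 0.2f, 0.6f, 1.0f };      // sharp transitions
        float d = std::min(1.0f, std::max(0.0f, dot(normal, toLight))); // clamp N.L to [0, 1]
        int   index = std::min(3, (int)(d * 4));                        // "texture coordinate" into the 1D table
        return kBands[index];                                           // flat bands give the cartoon look
    }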

  21. What is Cg? • Cg is a C-like language that the graphics card compiles into a program • The program is run once per-vertex and/or per-pixel on the graphics card • Cg does not have all the functionality of C • Different type systems • Can't include standard system headers • No malloc • http://www.cgshaders.org/articles/ has the technical documentation for Cg • Cg is actually an abstraction of the more primitive assembly language that the programmable hardware originally supported

  22. Cg Tips • Understand the different spaces your vertices may exist in • model space: the space in which your input vertex positions exist; in this space the center of the model is at the origin • world space: the space in which you will do most of your calculations • clip space: the space in which your output vertex positions must exist; this space represents the canonical view volume • If you want a vector to have length 1 make sure to normalize it; this often happens when you want to use a vector to represent a direction • When writing a Cg program try to go one step at a time; one sequence of steps might be: • Make sure the model vertex positions are being calculated correctly • Set the color or texture coordinates to an arbitrary value, verify that you are changing the surface color • Calculate the color or texture coordinates correctly • Check out http://cgshaders.org/articles/ for some helpful documents

  23. The Big Picture • Write a .cg file. This will invariably take some sort of information as a parameter to its “main()” function • Note that this main() is not compiled by gcc (or any C/C++ compiler). That would generate a symbol conflict, among other things. It is only processed by NVidia’s Cg compiler • Write a class that extends CGEffect. This is cs123’s object-oriented wrapper around the basic C interface provided by NVidia • The CGEffect subclass allows you to bind data from your .C files to variables in your .cg vertex program • Make that CGEffect the IScene’s current CGEffect by calling IScene::setCGEffect(). IScene will take ownership of the CGEffect* at this point, so you will not be deleting the memory you allocated yourself. Rendering will now be done using your vertex shader • Call IScene::removeCGEffect() if you want to turn vertex shaders off again

  24. Cg Example Code (1/2) • #pragma bind appin.Position = ATTR0 • #pragma bind appin.Normal = ATTR2 • #pragma bind appin.Col0 = ATTR3 • // define inputs from application • struct appin : application2vertex • { • float4 Position; • float4 Normal; • float4 Col0; • }; • #pragma bind vertout.HPosition = HPOS • #pragma bind vertout.Col0 = COL0 • // define outputs from vertex shader • struct vertout : vertex2fragment • { • float4 HPosition; • float4 Col0; • }; • // (continued on next slide)

  25. Cg Example Code (2/2) • vertout main(appin IN, • uniform float4 lightpos, • uniform float4x4 ModelViewInvTrans, • uniform float4x4 ModelView, • uniform float4x4 ModelViewProj, • uniform float4x4 Projection) • { • vertout OUT; • OUT.HPosition = mul(ModelViewProj, IN.Position); • float4 wsnorm = mul(ModelViewInvTrans, IN.Normal); • wsnorm.w = 0; • wsnorm = normalize(wsnorm); • float4 worldpoint = mul(ModelView, IN.Position); • float4 lightvec = lightpos - worldpoint; • lightvec.w = 0; • lightvec = normalize(lightvec); • float dp = dot(wsnorm, lightvec); • dp = clamp(dp, 0.0, 1.0); • OUT.Col0 = IN.Col0 * dp; • return OUT; • }

  26. Cg Explanation (1/6) Declare input struct and bindings • #pragma bind appin.Position = ATTR0 • #pragma bind appin.Normal = ATTR2 • #pragma bind appin.Col0 = ATTR3 • // define inputs from application • struct appin : application2vertex • { • float4 Position; • float4 Normal; • float4 Col0; • }; • The appin struct “extends” application2vertex indicating to Cg that appin will be used to hold per-vertex input. The name “appin” is arbitrary, but the name “application2vertex” is part of Cg • The “#pragma” statements establish the mapping between OpenGL’s representation for vertex input and the members of appin • “#pragma bind” statements are kind of confusing. Vertex inputs are supplied by the OpenGL program and are then stored in registers on the graphics card. These statements tell Cg how to initialize each member of the input struct: i.e. “use the value stored in the register specified by the #pragma binding”

  27. Cg Explanation (2/6) Declare output struct and bindings • #pragma bind vertout.HPosition = HPOS • #pragma bind vertout.Col0 = COL0 • // define outputs from vertex shader • struct vertout : vertex2fragment • { • float4 HPosition; • float4 Col0; • }; • The vertout struct “extends” vertex2fragment indicating to Cg that vertout will be used to return per-vertex output. The name “vertout” is arbitrary, but the name “vertex2fragment” is part of Cg • The “#pragma” statements establish the mapping between the members of vertout and OpenGL’s representation for vertex output • Similarly to inputs, the graphics card expects the vertex outputs to be stored in registers. These #pragma bind statements tell Cg what to do with the values stored in members of the output struct returned from main: put them in the register specified by the #pragma bind • The card then uses the values in these registers in the rest of the pipeline

  28. Cg Explanation (3/6) Entry point to the Cg program • vertout main(appin IN, • uniform float4 lightpos, • uniform float4x4 ModelViewInvTrans, • uniform float4x4 ModelView, • uniform float4x4 ModelViewProj, • uniform float4x4 Projection) • { • Cg requires a main() function in every vertex program and uses this function as the entry point • The return type "vertout" indicates we must return a structure of type vertout which will hold per-vertex output • The IN parameter is of type appin; Cg uses the "#pragma" bindings from the previous slide to initialize "IN" with per-vertex input before it is passed to main(). This is read-only • The "uniform" keyword indicates to Cg that the specified input parameter is constant across all vertices in the current glBegin()/glEnd() block and is supplied by the application • The ModelView matrix maps from object space to world space • The ModelViewProj matrix maps from object space to the film plane • The ModelViewInvTrans is the inverse transpose of the modelview matrix • Used to move normals from object space to world space • The Projection matrix maps from world space to film plane

  29. Cg Explanation (4/6) • vertout OUT; • OUT.HPosition = mul(ModelViewProj, IN.Position); Create output vertex; compute and set its clip space position • The first thing we do is declare a struct "OUT" of type "vertout" which we will use to return per-vertex output. This is a write-only variable • We calculate the vertex's clip space position by multiplying the model space position by the composite modelview and projection matrix Compute and normalize world space normal • float4 wsnorm = mul(ModelViewInvTrans, IN.Normal); • wsnorm.w = 0; • wsnorm = normalize(wsnorm); • We calculate the world space normal by multiplying the model space normal by the inverse transpose of the modelview matrix • We set w equal to 0 for the world space normal since all vectors should have 0 as a homogeneous coordinate. Do not assume that Cg will do this sort of thing for you – it's not IAlgebra • We normalize the world space normal to ensure that it is of length 1

  30. Cg Explanation (5/6) • float4 worldpoint = mul(ModelView, IN.Position); • float4 lightvec = lightpos - worldpoint; • lightvec.w = 0; • lightvec = normalize(lightvec); Compute vertex world space position Compute and normalize vector from vertex to light (in world space) • We calculate the vertex's world space position by multiplying its model space position by the modelview matrix (we previously calculated the vertex's clip space position) • Since the lightpos constant used in this example is already in world space coordinates, we calculate the vector from the vertex to the light by subtracting the vertex's position from the light's position • Again, to normalize the light vector we set the homogeneous coordinate to 0 and call normalize()

  31. Cg Explanation (6/6) • float dp = dot(wsnorm, lightvec); • dp = clamp(dp, 0.0, 1.0); Compute and clamp dot product (used in lighting calculation) • To calculate the intensity associated with the incoming light we dot the world space normal with the world space light vector • So that we do not have to worry about negative dot product values we clamp the dot product to be between 0.0 and 1.0. Note that we don’t use a conditional here. You should almost never have a branch instruction in one of your vertex shaders. Set output color; return output vertex • OUT.Col0 = IN.Col0 * dp; • return OUT; • To calculate the diffuse contribution of the light source we scale the diffuse color of the object by the dot product • We have set both the clip space position and color in the OUT structure so we now return the OUT structure from main()

  32. How Can I Set The Parameters? • We have two different "address spaces" • You have parameters to your main() function in a .cg file • You have floats and pointers to floats in a C/C++ file • We provide support code to help bind the two together. Our wrappers also make this all a bit more object-oriented • Look at the documentation for CGEffect.H/C • There are bindings for the actual vertex programs and for the individual parameters sent to the vertex program • The support code handles the ModelView/Projection/etc. matrices automatically • Let's take a look at a .C file:

  33. The .C File (1/2) • #include "CGDiffuse.H" • CGDiffuse::CGDiffuse(CGcontext context, • const char* strCgFileName, • const char* strModelViewName, • const char* strModelViewProjName, • const char* strProjectionName, • const char* strMVInvTransName, • const double_t lightPosX, • const double_t lightPosY, • const double_t lightPosZ) : • CGEffect(context, strCgFileName, strModelViewName, • strModelViewProjName, strProjectionName, strMVInvTransName) • { • m_lightPos[0] = lightPosX; • m_lightPos[1] = lightPosY; • m_lightPos[2] = lightPosZ; • m_cgLightPosParam = NULL; • }

  34. The .C File (2/2) • void CGDiffuse::initializeStudentCgBindings() • { • m_cgLightPosParam = cgGetNamedParameter(m_cgProgramHandle, "lightPos"); • assert(cgIsParameter(m_cgLightPosParam)); • } • void CGDiffuse::bindStudentUniformCgParameters() • { • if (cgIsParameter(m_cgLightPosParam)) • { • cgGLSetParameter4f(m_cgLightPosParam, • m_lightPos[0], • m_lightPos[1], • m_lightPos[2], • 1); • } • }

  35. The .C File Explained (1/3) Initialize the effect • #include "CGDiffuse.H" • CGDiffuse::CGDiffuse(CGcontext context, • const char* strCGFileName, • const char* strModelViewName, • const char* strModelViewProjName, • const char* strProjectionName, • const char* strMVInvTransName, • const double_t lightPosX, • const double_t lightPosY, • const double_t lightPosZ) : • CGEffect(context, strCGFileName, strModelViewName, strModelViewProjName, strProjectionName, strMVInvTransName) • { • // this stuff shouldn't need explanation, so it is elided • } • Initializing the effect simply involves calling the superclass constructor, passing it: • The CGcontext, which IScene stores as the protected variable m_cgContext • strCGFileName: the .cg file with the Cg code for this effect • The name of the modelview, composite modelview projection, projection, and modelview inverse transpose matrices. • These names should be the names of our parameters in the main function of the .cg file, i.e. "ModelViewInvTrans", "ModelView", "ModelViewProj", and "Projection"

  36. The .C File Explained (2/3) • void CGDiffuse::initializeStudentCgBindings() • { • m_cgLightPosParam = cgGetNamedParameter(m_cgProgramHandle, "lightPos"); • assert(cgIsParameter(m_cgLightPosParam)); • } Initializing bindings • This function is called when the effect is created to initialize your bindings • cgGetNamedParameter takes a CGprogram and a string • The first parameter is a "handle" to the text of the corresponding Cg program for this effect • The CGDiffuse class inherits m_cgProgramHandle from CGEffect: this protected variable is used in most of the Cg calls • The second variable "lightPos" is a string with the form: <uniform variable name> • The uniform variable is in the .cg file, not this .C file! • It returns a CGparameter • This binding will be used later on to set a value for the uniform variable "lightPos" • We'll see how to do this on the next slide • Initializing a binding does not give it a value!

  37. The .C File Explained (3/3) • void • CGDiffuse::bindStudentUniformCgParameters() { • if (cgIsParameter(m_cgLightPosParam)) • { • cgGLSetParameter4f(m_cgLightPosParam, • m_lightPos[0], • m_lightPos[1], • m_lightPos[2], • 1); • } • } Assigning values to a binding • This function is called to give actual values to a binding • It is called exactly once by the support code with each call you make to redraw() • Here, our binding represents the position of the light in our scene • cgGLSetParameter4f takes the variable in our .C file representing the binding, and four floats • The binding we're specifying must be to a variable of type float4. In this case we are binding to "lightPos", which is a float4 in our cg program. • The variable's fields are initialized to the four floats we specify • Essentially, this function determines actual parameters for uniform variables in the .cg file the next time the Cg program is run

  38. Let’s Code! • As a class let’s reconstruct the shader we just saw and add specular lighting. • Then let’s work out what needs to change in the .C file • Fun!

  39. Revised Cg Code • // the stuff at the top of the file is unchanged in this case. Not so if we • // were using textures, etc, etc. • float4 reflect(float4 incoming, float4 normal) • { • float4 temp = 2 * dot(normal, incoming) * normal; • return (temp - incoming); • } • vertout main(appin IN, • uniform float4 eye, • uniform float4 lightPos, • uniform float4x4 ModelViewInvTrans, • uniform float4x4 ModelView, • uniform float4x4 ModelViewProj, • uniform float4x4 Projection) • { • // same… • float4 reflectedlight = reflect(lightvec, wsnorm); • reflectedlight.w = 0; • reflectedlight = normalize(reflectedlight); • float4 toeyevec = eye - worldpoint; • toeyevec.w = 0; • toeyevec = normalize(toeyevec); • float specval = pow(dot(reflectedlight, toeyevec), 6.0); • // Assume the specular color is white • // Cg will clamp OUT.Col0 to be <= 1.0 for each channel • OUT.Col0 = (IN.Col0 * dp) + (float4(1, 1, 1, 1) * specval); • return OUT; • }

  40. Revised .C File (1/2) • void • CGDiffuse::initializeStudentCgBindings() • { • m_cgLightPosParam = cgGetNamedParameter(m_cgProgramHandle, "lightPos"); • assert(cgIsParameter(m_cgLightPosParam)); • m_cgEyePointParam = cgGetNamedParameter(m_cgProgramHandle, "eye"); • assert(cgIsParameter(m_cgEyePointParam)); • }

  41. Revised .C File (2/2) • void • CGDiffuse::bindStudentUniformCgParameters() • { • if (cgIsParameter(m_cgLightPosParam)) • { • cgGLSetParameter4f(m_cgLightPosParam, • m_lightPos[0], • m_lightPos[1], • m_lightPos[2], • 1); • } • if (cgIsParameter(m_cgEyePointParam)) • { • const IAPoint &eyept = m_camera->eyePoint(); • cgGLSetParameter4f(m_cgEyePointParam, eyept[0], eyept[1], eyept[2], 1); • } • }
