

  1. Lecture 30 of 42: Raytracing Part 3 of 3 and Hardware Shaders. Monday, 03 April 2006. Adapted from Brown University CS 123 (A. van Dam). CS123 TA Staff

  2. Programmable Hardware [images: Doom III, Halo 2, Jet Set Radio Future, research renderings] CS123 TA Staff

  3. History • 1992 - id’s Wolfenstein 3D video game rocks the gaming world; all objects are billboards (flat planes) and rendered in software • 1996 - id’s Quake introduces a full 3D polygonal game; lighting vertices and shading pixels is still done in software • 1996 - 3dfx’s Voodoo graphics card is released and does shading operations (such as texturing) in hardware. QuakeWorld brings hardware acceleration to Quake • 1999 - GeForce 256 graphics card is released; now transform and lighting (T&L) of vertices is done in hardware as well (using the fixed function pipeline) • 2001 - GeForce 3 graphics card lets programmers download assembly programs to control vertex lighting and pixel shading, keeping the speed of the fixed function pipeline with none of the restrictions • Future - Expanded features and high-level APIs for vertex and pixel shaders, increased use of lighting effects such as bump mapping and shadowing, and higher resolution color values. Doom III and Half-Life 2 usher in a new era of realism CS123 TA Staff

  4. Fixed Function Pipeline • Starting in 1999 some graphics cards began to do the standard lighting model and transformations in hardware (T&L). CPUs everywhere sighed in relief. • Hardware T&L existed in the 60s and 70s, it was just really slow and really expensive. • Implementing the pipeline in hardware made processing polygons much faster, but the developer could not modify the pipeline (hence “fixed function pipeline”). The fixed function pipeline dates back to the first SGI workstations. • New programmable hardware allows programmers to write vertex and pixel programs to change the pipeline • Vertex and pixel programs aren’t necessarily slower than the fixed function alternative • Note that the common term “vertex shader” to describe a vertex program is misleading: vertices are lit and pixels are shaded CS123 TA Staff

  5. A Quick Review By default, GL will do the following: • Take as input various per-vertex quantities (color, light source, eye point, texture coordinates, etc.) • Calculate a final color for each vertex using a basic lighting model (OpenGL uses Phong lighting) • For each pixel, linearly interpolate the three surrounding vertex colors to shade the pixel (OpenGL uses Gouraud shading) • Write the pixel color value to the frame buffer CS123 TA Staff
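To make those defaults concrete, here is a minimal sketch of how an application drives this fixed function path in classic OpenGL 1.x (the light and material values are hypothetical, not part of the lecture code):

    /* Fixed function lighting: the card lights each vertex with the Phong
       lighting model and Gouraud-interpolates the vertex colors across
       each triangle. */
    GLfloat lightPos[] = { 0.0f, 5.0f, 5.0f, 1.0f };  /* w = 1: positional light */
    GLfloat diffuse[]  = { 0.8f, 0.8f, 0.8f, 1.0f };

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
    glMaterialfv(GL_FRONT, GL_DIFFUSE, diffuse);
    glShadeModel(GL_SMOOTH);  /* Gouraud shading of the lit vertex colors */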

  6. Programmable Hardware Pipeline [diagram: an unlit model space vertex enters either the standard T&L stage or a vertex program and leaves as a lit clip space vertex; after backface culling and frustum clipping, each un-colored pixel is colored by either standard shading or a pixel shader, then depth tested and stored] • clip space refers to the space of the canonical view volume • New graphics cards can use either the fixed function pipeline or vertex/pixel programs CS123 TA Staff

  7. Example: Cartoon Shading • Cartoon shading is a cheap and neat looking effect used in video games such as Jet Set Radio Future • Instead of using traditional methods to light a vertex, use the dot product of the light vector and the normal of the vertex to index into a 1-dimensional “texture” (a texture is simply a lookup function for colors - nothing more and nothing less) • Instead of a smooth transition from low intensity light (small dot product) to high intensity light (large dot product), make the 1-dimensional texture have sharp transitions • Textures aren’t just for “wrapping” 2D images on 3D geometry! • Voila! Cartoon Teapot [diagram: normal vector and light vector; a 1-dimensional texture indexed by the dot product from 0.0 to 1.0] CS123 TA Staff
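As a rough Cg sketch of the idea (a hypothetical helper, not the game's actual code): quantizing the clamped dot product into a few hard bands stands in for indexing a 1-dimensional texture with sharp transitions, and it avoids branching, which early vertex profiles do not support:

    // Branch-free band quantization; floor() and clamp() are Cg standard
    // library functions.
    float toonIntensity(float4 wsnorm, float4 lightvec)
    {
        float dp = clamp(dot(wsnorm, lightvec), 0.0, 1.0);
        // hard intensity bands instead of a smooth ramp
        return floor(dp * 3.0) / 3.0;
    }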

  8. What is Cg? • Cg is a C-like language that the graphics card compiles into a program • The program is run once per-vertex and/or per-pixel on the graphics card • Cg does not have all the functionality of C • A different type system • Can’t include standard system headers • No malloc • http://www.cgshaders.org/articles/ has the technical documentation for Cg • Cg is actually an abstraction of the more primitive assembly language that the programmable hardware originally supported CS123 TA Staff

  9. Cg Tips • Understand the different spaces your vertices may exist in • model space: the space in which your input vertex positions exist; in this space the center of the model is at the origin • world space: the space in which you will do most of your calculations • clip space: the space in which your output vertex positions must exist; this space represents the canonical view volume • If you want a vector to have length 1, make sure to normalize it; this often comes up when you use a vector to represent a direction • When writing a Cg program, try to go one step at a time; one sequence of steps might be • Make sure the model vertex positions are being calculated correctly • Set the color or texture coordinates to an arbitrary value, and verify that you are changing the surface color • Calculate the color or texture coordinates correctly • Check out http://cgshaders.org/articles/ for some helpful documents CS123 TA Staff

  10. The Big Picture • Write a .cg file. This will invariably take some sort of information as a parameter to its “main()” function • Note that this main() is not compiled by gcc (or any C/C++ compiler). That would generate a symbol conflict, among other things. It is only processed by NVidia’s Cg compiler • Write a class that extends CGEffect. This is cs123’s object-oriented wrapper around the basic C interface provided by NVidia • The CGEffect subclass allows you to bind data from your .C files to variables in your .cg vertex program • Make that CGEffect the IScene’s current CGEffect by calling IScene::setCGEffect(). IScene will take ownership of the CGEffect* at this point, so you will not be deleting the memory you allocated yourself. Rendering will now be done using your vertex shader • Call IScene::removeCGEffect() if you want to turn vertex shaders off again CS123 TA Staff
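Putting those steps together, usage might look something like this (a hypothetical sketch; CGDiffuse and its constructor arguments appear later in these slides, and m_scene stands in for however you reach your IScene):

    // Install the effect; rendering now runs through the vertex shader.
    CGDiffuse *effect = new CGDiffuse(m_cgContext, "diffuse.cg",
                                      "ModelView", "ModelViewProj",
                                      "Projection", "ModelViewInvTrans",
                                      0.0, 5.0, 5.0);
    m_scene->setCGEffect(effect);   // IScene takes ownership of the pointer
    // ... render with the vertex shader active ...
    m_scene->removeCGEffect();      // back to the fixed function pipeline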

  11. Cg Example Code (1/2) #pragma bind appin.Position = ATTR0 #pragma bind appin.Normal = ATTR2 #pragma bind appin.Col0 = ATTR3 // define inputs from application struct appin : application2vertex { float4 Position; float4 Normal; float4 Col0; }; #pragma bind vertout.HPosition = HPOS #pragma bind vertout.Col0 = COL0 // define outputs from vertex shader struct vertout : vertex2fragment { float4 HPosition; float4 Col0; }; // (continued on next slide) CS123 TA Staff

  12. Cg Example Code (2/2) vertout main(appin IN, uniform float4 lightpos, uniform float4x4 ModelViewInvTrans, uniform float4x4 ModelView, uniform float4x4 ModelViewProj, uniform float4x4 Projection) { vertout OUT; OUT.HPosition = mul(ModelViewProj, IN.Position); float4 wsnorm = mul(ModelViewInvTrans, IN.Normal); wsnorm.w = 0; wsnorm = normalize(wsnorm); float4 worldpoint = mul(ModelView, IN.Position); float4 lightvec = lightpos - worldpoint; lightvec.w = 0; lightvec = normalize(lightvec); float dp = dot(wsnorm, lightvec); dp = clamp(dp, 0.0, 1.0); OUT.Col0 = IN.Col0 * dp; return OUT; } CS123 TA Staff

  13. Cg Explanation (1/6) Declare input struct and bindings #pragma bind appin.Position = ATTR0 #pragma bind appin.Normal = ATTR2 #pragma bind appin.Col0 = ATTR3 // define inputs from application struct appin : application2vertex { float4 Position; float4 Normal; float4 Col0; }; • The appin struct “extends” application2vertex indicating to Cg that appin will be used to hold per-vertex input. The name “appin” is arbitrary, but the name “application2vertex” is part of Cg • The “#pragma” statements establish the mapping between OpenGL’s representation for vertex input and the members of appin • “#pragma bind” statements are kind of confusing. Vertex inputs are supplied by the OpenGL program and are then stored in registers on the graphics card. These statements tell Cg how to initialize each member of the input struct: i.e. “use the value stored in the register specified by the #pragma binding” CS123 TA Staff

  14. Cg Explanation (2/6) Declare output struct and bindings #pragma bind vertout.HPosition = HPOS #pragma bind vertout.Col0 = COL0 // define outputs from vertex shader struct vertout : vertex2fragment { float4 HPosition; float4 Col0; }; • The vertout struct “extends” vertex2fragment indicating to Cg that vertout will be used to return per-vertex output. The name “vertout” is arbitrary, but the name “vertex2fragment” is part of Cg • The “#pragma” statements establish the mapping between the members of vertout and OpenGL’s representation for vertex output • Similarly to inputs, the graphics card expects the vertex outputs to be stored in registers. These #pragma bind statements tell Cg what to do with the values stored in members of the output struct returned from main: put them in the register specified by the #pragma bind • The card then uses the values in these registers in the rest of the pipeline CS123 TA Staff

  15. Cg Explanation (3/6) Entry point to the Cg program vertout main(appin IN, uniform float4 lightpos, uniform float4x4 ModelViewInvTrans, uniform float4x4 ModelView, uniform float4x4 ModelViewProj, uniform float4x4 Projection) { • Cg requires a main() function in every vertex program and uses this function as the entry point • The return type “vertout” indicates we must return a structure of type vertout which will hold per-vertex output • The IN parameter is of type appin; Cg uses the “#pragma” bindings from the previous slide to initialize “IN” with per-vertex input before it is passed to main(). This is read-only • The “uniform” keyword indicates to Cg that the specified input parameter is constant across all vertices in the current glBegin()/glEnd() block and is supplied by the application • The ModelView matrix maps from object space to world space • The ModelViewProj matrix maps from object space to the film plane • The ModelViewInvTrans is the inverse transpose of the modelview matrix • Used to move normals from object space to world space • The Projection matrix maps from world space to the film plane CS123 TA Staff
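A quick justification for the inverse transpose, since it is easy to take on faith (this follows from the definitions above, not from the slide itself): if points, and hence tangent vectors t, transform by a matrix M, the transformed normal must stay perpendicular to the transformed tangent. Choosing n' = (M^{-1})^T n does the job:

    $$ n'^{\top} t' = \big((M^{-1})^{\top} n\big)^{\top} (M t) = n^{\top} M^{-1} M t = n^{\top} t = 0 $$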

  16. Cg Explanation (4/6) Create output vertex; compute and set its clip space position vertout OUT; OUT.HPosition = mul(ModelViewProj, IN.Position); • The first thing we do is declare a struct “OUT” of type “vertout” which we will use to return per-vertex output. This is a write-only variable • We calculate the vertex’s clip space position by multiplying the model space position by the composite modelview and projection matrix Compute and normalize world space normal float4 wsnorm = mul(ModelViewInvTrans, IN.Normal); wsnorm.w = 0; wsnorm = normalize(wsnorm); • We calculate the world space normal by multiplying the model space normal by the inverse transpose of the modelview matrix • We set w equal to 0 for the world space normal since all vectors should have 0 as a homogeneous coordinate. Do not assume that Cg will do this sort of thing for you - it’s not IAlgebra • We normalize the world space normal to ensure that it is of length 1 CS123 TA Staff

  17. Cg Explanation (5/6) Compute vertex world space position Compute and normalize vector from vertex to light (in world space) float4 worldpoint = mul(ModelView, IN.Position); float4 lightvec = lightpos - worldpoint; lightvec.w = 0; lightvec = normalize(lightvec); • We calculate the vertex’s world space position by multiplying its model space position by the modelview matrix (we previously calculated the vertex’s clip space position) • Since the lightpos constant used in this example is already in world space coordinates, we calculate the vector from the vertex to the light by subtracting the vertex’s position from the light’s position • Again, to normalize the light vector we set the homogeneous coordinate to 0 and call normalize() CS123 TA Staff

  18. Cg Explanation (6/6) Compute and clamp dot product (used in lighting calculation) float dp = dot(wsnorm, lightvec); dp = clamp(dp, 0.0, 1.0); • To calculate the intensity associated with the incoming light, we dot the world space normal with the world space light vector • So that we do not have to worry about negative dot product values, we clamp the dot product to be between 0.0 and 1.0. Note that we don’t use a conditional here. You should almost never have a branch instruction in one of your vertex shaders. Set output color; return output vertex OUT.Col0 = IN.Col0 * dp; return OUT; • To calculate the diffuse contribution of the light source, we scale the diffuse color of the object by the dot product • We have set both the clip space position and color in the OUT structure, so we now return the OUT structure from main() CS123 TA Staff

  19. How Can I Set The Parameters? • We have two different “address spaces” • You have parameters to your main() function in a .cg file • You have floats and pointers to floats in a C/C++ file • We provide support code to help bind the two together. Our wrappers also make this all a bit more object-oriented • Look at the documentation for CGEffect.H/C • There are bindings for the actual vertex programs and for the individual parameters sent to the vertex program • The support code handles the ModelView/Projection/etc. matrices automatically • Let’s take a look at a .C file: CS123 TA Staff

  20. The .C File (1/2) #include "CGDiffuse.H" CGDiffuse::CGDiffuse(CGcontext context, const char* strCgFileName, const char* strModelViewName, const char* strModelViewProjName, const char* strProjectionName, const char* strMVInvTransName, const double_t lightPosX, const double_t lightPosY, const double_t lightPosZ) : CGEffect(context, strCgFileName, strModelViewName, strModelViewProjName, strProjectionName, strMVInvTransName) { m_lightPos[0] = lightPosX; m_lightPos[1] = lightPosY; m_lightPos[2] = lightPosZ; m_cgLightPosParam = NULL; } CS123 TA Staff

  21. The .C File (2/2) void CGDiffuse::initializeStudentCgBindings() { m_cgLightPosParam = cgGetNamedParameter(m_cgProgramHandle, "lightPos"); assert(cgIsParameter(m_cgLightPosParam)); } void CGDiffuse::bindStudentUniformCgParameters() { if (cgIsParameter(m_cgLightPosParam)) { cgGLSetParameter4f(m_cgLightPosParam, m_lightPos[0], m_lightPos[1], m_lightPos[2], 1); } } CS123 TA Staff

  22. The .C File Explained (1/3) Initialize the effect #include "CGDiffuse.H" CGDiffuse::CGDiffuse(CGcontext context, const char* strCgFileName, const char* strModelViewName, const char* strModelViewProjName, const char* strProjectionName, const char* strMVInvTransName, const double_t lightPosX, const double_t lightPosY, const double_t lightPosZ) : CGEffect(context, strCgFileName, strModelViewName, strModelViewProjName, strProjectionName, strMVInvTransName) { // this stuff shouldn’t need explanation, so it is elided } • Initializing the effect simply involves calling the superclass constructor, passing it: • The CGcontext, which IScene stores as the protected variable m_cgContext • strCgFileName: the .cg file with the Cg code for this effect • The names of the modelview, composite modelview projection, projection, and modelview inverse transpose matrices • These names should be the names of our parameters in the main function of the .cg file, i.e. “ModelViewInvTrans”, “ModelView”, “ModelViewProj”, and “Projection” CS123 TA Staff

  23. The .C File Explained (2/3) Initializing bindings void CGDiffuse::initializeStudentCgBindings() { m_cgLightPosParam = cgGetNamedParameter(m_cgProgramHandle, "lightPos"); assert(cgIsParameter(m_cgLightPosParam)); } • This function is called when the effect is created to initialize your bindings • cgGetNamedParameter takes a CGprogram and a string • The first parameter is a “handle” to the text of the corresponding Cg program for this effect • The CGDiffuse class inherits m_cgProgramHandle from CGEffect: this protected variable is used in most of the Cg calls • The second parameter, "lightPos", is a string with the form: <uniform variable name> • The uniform variable is in the .cg file, not this .C file! • It returns a CGparameter • This binding will be used later on to set a value for the uniform variable “lightPos” • We’ll see how to do this on the next slide • Initializing a binding does not give it a value! CS123 TA Staff

  24. The .C File Explained (3/3) Assigning values to a binding void CGDiffuse::bindStudentUniformCgParameters() { if (cgIsParameter(m_cgLightPosParam)) { cgGLSetParameter4f(m_cgLightPosParam, m_lightPos[0], m_lightPos[1], m_lightPos[2], 1); } } • This function is called to give actual values to a binding • It is called exactly once by the support code with each call you make to redraw() • Here, our binding represents the position of the light in our scene • cgGLSetParameter4f takes the variable in our .C file representing the binding, and four floats • The binding we’re specifying must be to a variable of type float4. In this case we are binding to “lightPos”, which is a float4 in our Cg program. • The variable’s fields are initialized to the four floats we specify • Essentially, this function determines actual parameters for uniform variables in the .cg file the next time the Cg program is run CS123 TA Staff

  25. Let’s Code! As a class, let’s reconstruct the shader we just saw and add specular lighting. Then let’s work out what needs to change in the .C file. Fun! CS123 TA Staff

  26. Revised Cg Code // the stuff at the top of the file is unchanged in this case. Not so if we // were using textures, etc, etc. float4 reflect(float4 incoming, float4 normal) { float4 temp = 2 * dot(normal, incoming) * normal; return (temp - incoming); } vertout main(appin IN, uniform float4 eye, uniform float4 lightPos, uniform float4x4 ModelViewInvTrans, uniform float4x4 ModelView, uniform float4x4 ModelViewProj, uniform float4x4 Projection) { // same… float4 reflectedlight = reflect(lightvec, wsnorm); reflectedlight.w = 0; reflectedlight = normalize(reflectedlight); float4 toeyevec = eye - worldpoint; toeyevec.w = 0; toeyevec = normalize(toeyevec); float specval = pow(dot(reflectedlight, toeyevec), 6.0); // Assume the specular color is white // Cg will clamp OUT.Col0 to be <= 1.0 for each channel OUT.Col0 = (IN.Col0 * dp) + (float4(1, 1, 1, 1) * specval); return OUT; } CS123 TA Staff

  27. Revised .C File (1/2) void CGDiffuse::initializeStudentCgBindings() { m_cgLightPosParam = cgGetNamedParameter(m_cgProgramHandle, "lightPos"); assert(cgIsParameter(m_cgLightPosParam)); m_cgEyePointParam = cgGetNamedParameter(m_cgProgramHandle, "eye"); assert(cgIsParameter(m_cgEyePointParam)); } CS123 TA Staff

  28. Revised .C File (2/2) void CGDiffuse::bindStudentUniformCgParameters() { if (cgIsParameter(m_cgLightPosParam)) { cgGLSetParameter4f(m_cgLightPosParam, m_lightPos[0], m_lightPos[1], m_lightPos[2], 1); } if (cgIsParameter(m_cgEyePointParam)) { const IAPoint &eyept = m_camera->eyePoint(); cgGLSetParameter4f(m_cgEyePointParam, eyept[0], eyept[1], eyept[2], 1); } } CS123 TA Staff

  29. Changes Checklist • When we went from a diffuse shader to a specular shader, we did the following: • Wrote the new .cg file • Added uniform float4 eye to the main function • Determined the specular component and added it to the diffuse color when setting OUT.Col0 • Added the m_cgEyePointParam member variable of type CGparameter to the .H file (.H file not shown) • Initialized the new binding in initializeStudentCgBindings using cgGetNamedParameter • Used the program handle inherited from CGEffect, m_cgProgramHandle • The string was “eye” because we wanted the binding to specify the parameter eye in the Cg program • Gave a value to the binding in bindStudentUniformCgParameters using cgGLSetParameter4f • We got the eye point from the camera and passed it as four floats • When our Cg program is run, we know that eye will be float4(eyept[0], eyept[1], eyept[2], 1) CS123 TA Staff

  30. Debugging Cg • Debugging Cg can be hard • “Compile errors” happen at runtime, when the shader is loaded, and do not have any helpful information • All you get is: “The compile returned an error” • To get some useful feedback, use the Cg compiler: /course/cs123/bin/cgc -profile vp20 <filename> • No printf • The only “output” you have is the vertex you return, so you can use the output color to do primitive testing • Comment a lot! Treat it as if you were writing assembly code from cs31 CS123 TA Staff
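One common trick along those lines (a hypothetical snippet, reusing wsnorm from the example shader above): write the quantity you want to inspect into the output color. Here the world space normal's [-1, 1] components are remapped to [0, 1], so, for example, a normal pointing straight up renders green:

    // "printf by color": visualize the world space normal
    OUT.Col0 = (wsnorm + float4(1, 1, 1, 1)) * 0.5;
    OUT.Col0.w = 1;  // keep the output fully opaque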

  31. Cg Types (1/2) • Used in .C and .H files: • CGprogram (in example code: m_cgProgramHandle) • All of the NVidia Cg calls are global functions. We need this pointer to tell the NVidia Cg library which program we’re talking about • CGparameter (in example code: m_cgLightPosParam, m_cgEyePointParam) • We need to connect the values ([0,0,1], say) to a parameter (“lightpos”, for example). This variable represents that “connection” or binding. CS123 TA Staff

  32. Cg Types (2/2) • Used in .cg files: • float4 (in example code: eye, lightpos) • This is a 4-vector. (Think IAPoint) You can access the elements in different ways • “lightpos.x” or “lightpos[0]”, “lightpos.y” or “lightpos[1]” • float4x4 (in example code: ModelView, etc.) • This is a 4x4 matrix. (Think IAMatrix) You can do matrix multiplications in hardware with these • float (can be used within a function, but not as a parameter) • (Think… float) Unfortunately, you will probably try to pass one as a parameter and get one of the absolutely opaque Cg compile errors. Don’t try it! You can’t bind to a single float. • Do use them within the body of a Cg function definition CS123 TA Staff
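A small hypothetical Cg snippet showing these types in action (not from the lecture code):

    // float4 components can be accessed by swizzle or by index.
    float4 brighten(float4x4 M, float4 p)
    {
        float4 q = mul(M, p);   // 4x4 matrix times 4-vector, done in hardware
        q.xyz = q.xyz * 1.5;    // swizzle: scale the first three components
        q.w = q[3];             // q.w and q[3] name the same component
                                // (a no-op assignment, shown for illustration)
        return q;
    }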

  33. Where did Cg come from? (or, culture is good for you) CS123 TA Staff

  34. Old vs. New • Before the GeForce 3, graphics programmers sent position, normal, color, transparency, and texture data to the card and it used the fixed function pipeline to render the vertices (the left side of the picture on slide 6). Sceneview used the fixed function pipeline to render. This meant the programmer had limited control over how the hardware created the final image. Doing non-standard effects, like cartoon shading, required a lot of hackery: programmers had to “trick” the card into doing different effects or handle a lot of the effects in software • The current generation of hardware, however, takes a different view of rendering. The programmer simply sends data to the card and then writes a program to interpret the data and create an image. Most programmers still send standard types of data like position, normal, color, and texture data since it often makes the most sense. CS123 TA Staff

  35. Basics • In the first generation of programmable cards, the programmer wrote short assembly language programs to create a final image. • Vertex shader programs take as input per-vertex information (object space position, object space normal, etc.) and per-frame constants (perspective matrix, modeling matrix, light position, etc.). They produce some of the following outputs: clip space position, diffuse color, specular color, transparency, texture coordinates, and fog coordinates. • Pixel shader programs take as input the outputs from the vertex shader program and texture maps. They produce a final color and transparency as output. They are often called fragment shaders. • Pixel shaders are trickier, so we don’t cover them. Take CS224 if you want to learn more! [diagram: vertex shader outputs feed the pixel shader] CS123 TA Staff
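For the curious, a minimal pixel (fragment) program sketch in Cg (hypothetical; this course only requires vertex programs, and this uses colon-style semantics rather than the #pragma bind style shown earlier): it receives the interpolated vertex color and modulates it by a texture lookup:

    // Modulate the interpolated vertex color by a texel from a texture map.
    float4 main(float4 col : COLOR0,
                float2 uv  : TEXCOORD0,
                uniform sampler2D decal) : COLOR
    {
        return col * tex2D(decal, uv);  // tex2D is a Cg standard library lookup
    }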

  36. Using a Shader • A programmer would write the vertex and pixel shaders as simple text files. Then a program would load each of the shaders it intended to use. This sends the text file to the driver, where it is compiled into a binary representation and stored on the graphics card. • Each rendering pass, the program would enable one vertex shader and/or one pixel shader. This tells the graphics card to use them to render the objects instead of the fixed function pipeline. • Finally, the program passes “data” to the card. It’s interpreted by the shaders and an image is produced! • Disabling the shaders (or never enabling any) prompts the card to use the fixed function pipeline. CS123 TA Staff
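Under the course's CGEffect wrapper, that raw sequence looks roughly like this (a sketch using NVidia's Cg runtime; error checking elided, file name hypothetical):

    CGcontext ctx = cgCreateContext();
    CGprofile profile = cgGLGetLatestProfile(CG_GL_VERTEX);
    CGprogram prog = cgCreateProgramFromFile(ctx, CG_SOURCE, "diffuse.cg",
                                             profile, "main", NULL);
    cgGLLoadProgram(prog);           /* compile; send the binary to the card */

    /* each rendering pass: */
    cgGLEnableProfile(profile);      /* vertex program replaces fixed function */
    cgGLBindProgram(prog);
    /* ... draw geometry ... */
    cgGLDisableProfile(profile);     /* back to the fixed function pipeline */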

  37. Ack, assembly! MUL R1, R1.x, R2; DP4 R1.x, R3, -R1; MUL o[COL0], v[3], R1.x; MOV R2.xyz, -c[1]; MOV R2.w, c[18].x; DP4 R1.x, R2, R2; RSQ R1.x, R1.x; MUL R4, R1.x, R2; MUL R1, c[0].yzxw, R4.zxyw; MAD R2, R4.yzxw, c[0].zxyw, -R1; DP4 R1.x, R2, R2; RSQ R1.x, R1.x; • Not only do we need to write some CPU assembly to make our game run fast, now we have to write assembly for the graphics cards • Different graphics cards support different versions of the vertex and pixel shader assembly languages • Shader programs run at different speeds on different cards => different assembly for each card • John Carmack, a man not afraid of assembly, believes that high level shader languages are critical for the future success of programmable hardware, and he’s right CS123 TA Staff

  38. Enter the High Level Languages • Microsoft’s HLSL - New in DirectX 9: struct VS_OUTPUT { float4 Pos : POSITION; float3 Light : TEXCOORD0; float3 Norm : TEXCOORD1; }; VS_OUTPUT VS(float4 Pos : POSITION, float3 Normal : NORMAL) { VS_OUTPUT Out = (VS_OUTPUT)0; Out.Pos = mul(Pos, matWorldViewProj); Out.Light = vecLightDir; Out.Norm = normalize(mul(Normal, matWorld)); return Out; } float4 PS(float3 Light : TEXCOORD0, float3 Norm : TEXCOORD1) : COLOR { float4 diffuse = {1.0, 0.0, 0.0, 1.0}; float4 ambient = {0.1, 0.0, 0.0, 1.0}; return ambient + diffuse * saturate(dot(Light, Norm)); } • NVIDIA’s Cg - Geared towards OpenGL, but it can work with DirectX and compile for DirectX-specific assembly languages • The OpenGL ARB’s SLang - Will be in OpenGL 2.0, whenever that comes out… (HLSL code from gamasutra.com) CS123 TA Staff

  39. Cg • During the summer of 2002, NVIDIA released Cg. Cg, as we’ve seen, is a language specification for vertex and pixel shaders that looks a lot like C. It is useful because it’s more intuitive and easier to program in than the assembly language used prior to its release. • For the modeler assignment you will take advantage of Cg to make a vertex program. Cg programs are still simple text files processed by the graphics card, just like the assembly programs. The only difference is the language. • We can’t use HLSL because we’re using Linux (duh) and SLang isn’t out yet => Cg! CS123 TA Staff

  40. The Future • Pixel shaders, pixel shaders, and more pixel shaders • Shader performance and power is increasing at an insane rate. Look at how far we’ve come in only two years! • Real time ray tracing? Radiosity? BRDF/BSSRDF? Real time RenderMan? • Scientific computing - who needs the CPU? CS123 TA Staff
