
GLSL Basics



I GPU evolution

  • What is a GPU

  • The OpenGL Graphics Pipeline

  • Requirements for GLSL

    II Code Shaders in GLSL

  • OpenGL Shading language

  • Vertex processor

  • Fragment processor

    III Example using GLSL

  • Code

  • Visual


I GPU evolution

a) What’s a GPU?

GPU = Graphics Processing Unit

CPU = Central Processing Unit


GPU : dedicated graphics rendering device

A GPU implements a number of graphics primitive operations in a way that makes running them much faster than drawing directly to the screen with the host CPU.



Performance Evolution:


Early Hardware History

1996 : 3DFX Voodoo

This was the first commercial PC GPU.

1999 : Nvidia GeForce 256

The card included hardware support for Transform and Lighting (T&L), but it was not programmable.


Hardware History

  • 2000

  • Card(s) on the market: GeForce 2, Radeon 128, WildCat, and Oxygen GVX1

  • These cards did not use any programmability within their pipeline: there were no vertex, pixel, or even texture shaders. The only programmable element was the register combiners. Multi-texturing and additive blending were used to create clever effects and unique materials.


2001

  • Card(s) on the market: GeForce 3

  • With GeForce 3, NVIDIA introduced programmability into the vertex processing pipeline, allowing developers to write simple 'fixed-length' vertex programs using pseudo-assembler style code. Pixel processing was also improved with the texture shader, allowing more control over textures. Developers could now perform interpolate, modulate, replace, and decal operations between texture units, as well as extrapolate or combine with a constant color.


2002

  • Card(s) on the market: GeForce 4

  • NVIDIA's GeForce 4 series had great improvements in both the vertex and the pixel stages. It was now possible to write longer vertex programs, allowing the creation of more complex vertex shaders.


2003

  • Card(s) on the market: GeForce FX, Radeon 9500/9800, and WildCat VP

  • The GeForce FX and Radeon 9500 cards introduced 'real' pixel and vertex shaders, which could use variable lengths and conditionals. Higher-level languages were also introduced around the same time, replacing the asm-based predecessors. All stages within the pixel and vertex pipeline were now fully programmable (with a few limitations).


2003 Continued

  • With the creation of GLSL, graphics cards could take advantage of a high level language for shaders. With a good compiler, loops and branches could be simulated within hardware that natively didn't support them. Many functions were also introduced, creating a standard library, and subroutines were added; GLSL pushed the hardware to its limits.


2003 Continued

  • 3Dlabs shipped their WildCat VP cards, which allowed for 'true' vertex and fragment (pixel) shaders with loops and branching, even in fragment shaders. These were the first cards to fully support the OpenGL Shading Language (GLSL).


2003 Continued

  • Until now, all vertex and pixel programming was done using a basic asm-based language called 'ARB_fp' (for fragment programs) or 'ARB_vp' (for vertex programs). Programs written in this language were linear, without any form of flow control or data structure. There were no sub-routines and no standard library (containing common functions). It basically processed arithmetic operations and texture access, and nothing more.


2004

  • Card(s) on the market: WildCatRealizm, GeForce 6, and ATI x800 cards

  • These cards are the latest generation of programmable graphics hardware. They support a higher subset of GLSL, including direct texture access from vertex shaders, large program support, hardware-based noise generation, variable-length arrays, indirect indexing, texture dependent reading, sub-routines, and a standard library for the most common functions (like dot, cross, normalize, sin, cos, tan, log, sqrt, length, reflect, refract, dFdx, dFdy, etc.). They can also use a long list of built-in variables to access many OpenGL states (like gl_LightSource[n].position, gl_TexCoord[n], gl_ModelViewMatrix, gl_ProjectionMatrixInverse, etc.). Data structures are supported as well through C-like structs.




Enter the Nvidia GeForce 8800

  • Contains full support for DirectX 10 and OpenGL 2.0

  • Fully unified shader core dynamically allocates processing power to geometry, vertex, physics, or pixel shading operations, delivering up to 2x the gaming performance of prior generation GPUs.

  • From our perspective it does almost anything we want (for the time being anyway)


Fixed Methods

  • These fixed methods allowed the programmer to display many basic lighting models and effects, like light mapping, reflections, and shadows (always on a per-vertex basis) using multi-texturing and multiple passes. This was done by essentially multiplying the number of vertices sent to the graphics card (two passes = x2 vertices, four passes = x4 vertices, etc.), but it ended there.


Fixed Pipeline Continued

  • Fragment: During the rasterization stage, the pipeline breaks primitives into pixel-sized chunks called fragments. A fragment is a piece of a primitive that may eventually affect a pixel (the value actually written to the color buffer) after it is depth tested, alpha tested, blended, combined with a texture, and combined with other fragments.

  • The fixed fragment stage handled tasks such as interpolating values (colors and texture coordinates), texture access, texture application (environment mapping and cube mapping), fog, and all other per-fragment computations.


b) The OpenGL Graphics Pipeline


What are shaders ?

  • Shaders substitute parts of the graphics pipeline

  • The Transform and lighting phase is now programmable using Vertex Shaders

  • Pixel (or fragment) shaders programs substitute the Color and Pixel coordinates phase


c) Requirements for GLSL

Hardware Requirement :

Shaders have been available since the GeForce 3 and Radeon 8500, with OpenGL 1.4.

However, to use all the specifications of OpenGL 2.0, a GeForce FX (5 series) or a Radeon 9500 is highly recommended.

Software Requirement :

To code in GLSL, a C editor is necessary : Visual C++, Code::Blocks, Dev-C++ …

You need additional libraries : the GLUT and GLEW packages provide all the main functions.

Be sure your graphics card drivers are up to date


II Code Shaders in GLSL

a) OpenGL Shading language

  • GLSL = OpenGL Shading Language

  • There are 3 main shading languages:

  • HLSL, which uses the Direct3D API, created by Microsoft

  • GLSL, which uses the OpenGL API

  • Cg, NVidia's shading language, API-independent


Transform and Lighting (T&L)

  • Custom transform, lighting, and skinning


A special T&L shader?

  • Custom cartoon-style lighting


Implement high speed bump mapping

  • Per-vertex set up for per-pixel bump mapping


Dynamic morphing

  • Character morphing & shadow volume projection


Dynamic mesh deformation

  • Dynamic displacements of surfaces by objects


What can be done in a vertex shader ?

  • Complete control of transform and lighting HW

  • Complex vertex operations accelerated in HW

  • Custom vertex lighting

  • Custom skinning and blending

  • Custom texture coordinate generation

  • Custom texture matrix operations

  • Custom vertex computations of your choice

  • Offloading vertex computations frees up CPU

    • More physics, simulation, and AI possible.
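As a small illustration of custom texture coordinate generation and custom vertex computations, here is a hypothetical vertex shader of our own (the name Texcoord is ours, not part of GLSL) that derives texture coordinates from the vertex position instead of reading them from the application:

```glsl
// Custom texture coordinate generation: a simple planar mapping
// computed on the GPU, freeing the CPU from the work.
varying vec2 Texcoord;  // passed on to the fragment stage

void main( void )
{
    // Project x/y from the [-1,1] range into the [0,1] texture range
    Texcoord = gl_Vertex.xy * 0.5 + 0.5;
    gl_Position = ftransform();
}
```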


GLSL

  • GLSL (OpenGL Shading Language), also known as GLslang, is a high-level shading language based on the C programming language. It was created by the OpenGL ARB to give developers more direct control of the graphics pipeline without having to use assembly language or hardware-specific languages. OpenGL 2.0 is now available.


GLSL

  • Originally introduced as an extension to OpenGL 1.4, the OpenGL ARB formally included GLSL into the OpenGL 2.0 core. OpenGL 2.0 is the first major revision to OpenGL since the creation of OpenGL 1.0 in 1992.

    Some benefits of using GLSL are:

  • Cross platform compatibility on multiple operating systems, including Linux, Mac OS and Windows.

  • The ability to write shaders that can be used on any hardware vendor’s graphics card that supports the OpenGL Shading Language.

  • Each hardware vendor includes the GLSL compiler in their driver, thus allowing each vendor to create code optimized for their particular graphics card’s architecture.


GLSL Data Types

Data types

The OpenGL Shading Language Specification defines 22 basic data types. Some are the same as used in the C programming language, while others are specific to graphics processing.

  • void – used for functions that do not return a value

  • bool – conditional type, values may be either true or false

  • int – a signed integer

  • float – a floating point number

  • vec2, vec3, vec4 – floating point vectors with 2, 3, or 4 components

  • bvec2, bvec3, bvec4 – Boolean vectors with 2, 3, or 4 components

  • ivec2, ivec3, ivec4 – integer vectors with 2, 3, or 4 components

  • mat2, mat3, mat4 – 2X2, 3X3, and 4X4 matrices of floating point numbers

  • sampler1D, sampler2D, sampler3D – handles for accessing textures with 1, 2, or 3 dimensions
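A short sketch of ours (with made-up values) showing how these types are declared, constructed, and swizzled inside a vertex shader:

```glsl
// Constructors and swizzles for the basic GLSL types.
// The local variables exist only to demonstrate the syntax.
void main( void )
{
    float f  = 0.5;
    bool  ok = true;
    int   i  = 3;
    vec3  v  = vec3(1.0, 2.0, 3.0);  // 3-component float vector
    vec4  p  = vec4(v, 1.0);         // build a vec4 from a vec3 plus a scalar
    vec2  xy = p.xy;                 // swizzle: read the first two components
    mat2  m  = mat2(1.0);            // 2x2 identity matrix
    gl_Position = p;                 // gl_Position is a vec4
}
```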


Functions and Control Structures

Similar to the C programming language, GLSL supports loops and branching, including if/else, for, while, do-while, break, continue, etc.

User defined functions are supported, and a wide variety of commonly used functions are provided built-in as well. This allows the graphics card manufacturer the ability to optimize these built-in functions at the hardware level if they are inclined to do so. Many of these functions are similar to those found in the C programming language, such as exp() and abs(), while others are specific to graphics, such as texture2D().
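For example, a fragment shader (a sketch of ours, not from the slides) can combine a loop, a branch, and several built-in functions:

```glsl
// Control flow and built-in functions in a fragment shader
void main( void )
{
    vec3 color = vec3(0.0);
    for (int i = 0; i < 4; i++)            // C-style for loop
        color += vec3(0.1) * float(i);
    if (length(color) > 1.0)               // built-in length()
        color = normalize(color);          // built-in normalize()
    gl_FragColor = vec4(abs(color), 1.0);  // abs() applies component-wise
}
```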


Variables

Declaring and using variables in GLSL is similar to using variables in C.

There are four options for variable qualifiers:

  • const - A constant.

  • varying - Writable in the vertex shader, read-only in the fragment shader. Used to communicate between the two shaders: the values are interpolated across the primitive and fed to the fragment shader.

  • uniform - Global read-only variables available to vertex and fragment shaders which can't be changed inside glBegin() and glEnd() calls. Used to hold state that changes between objects.

  • attribute - Global read-only variables only available to the vertex shader (and not the fragment shader). Passed from an OpenGL program, these can be changed on a per vertex level.
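The four qualifiers can be seen together in one hypothetical vertex shader of ours (the names time, offset, and color are assumptions for illustration; only the qualifiers themselves are GLSL keywords):

```glsl
const float pi = 3.14159;  // const: a compile-time constant
uniform float time;        // uniform: set by the application, same for every vertex
attribute float offset;    // attribute: per-vertex data fed by the application
varying vec4 color;        // varying: interpolated and read by the fragment shader

void main( void )
{
    // Modulate the incoming vertex color by a per-vertex phase
    color = gl_Color * (0.5 + 0.5 * sin(time * pi + offset));
    gl_Position = ftransform();
}
```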


Free Tools

  • RenderMonkey - created by ATI, provides an interface to create, compile and debug GLSL shaders as well as DirectX shaders. It runs only on Microsoft Windows.

  • GLSLEditorSample - a cocoa application running only under Mac OS X. It allows shader creation and compilation, but no debugging is implemented. It is part of the Xcode package, versions 2.3 and above.

  • Lumina - a new GLSL development tool. It is platform independent and the interface uses Qt.

  • Blender - This GPL 3D modeling and animation package contains GLSL support in its game engine, as of version 2.41.

  • Shader Designer - This extensive, easy to use GLSL IDE has ceased production by TyphoonLabs. However, it can still be downloaded and used for free. Shader designer comes with example shaders and beginner tutorial documentation.

  • Demoniak3D - a tool that lets you quickly code and test your GLSL shaders. Demoniak3D uses a mixture of XML, LUA scripting, and GLSL to build real-time 3D scenes.


The Vertex Processor

The vertex processor is responsible for running the vertex shaders. The input for a vertex shader is the vertex data, namely its position, color, normals, etc., depending on what the OpenGL application sends.

glBegin(...);

glColor3f(0.2,0.4,0.6);

glVertex3f(-1.0,1.0,2.0);

glColor3f(0.2,0.4,0.8);

glVertex3f(1.0,-1.0,2.0);

glEnd();


Input and Output :


The Fragment Processor

  • The fragment processor is where the fragment shaders run. This unit is responsible for operations like:

  • Computing colors and texture coordinates per pixel

  • Texture application

  • Fog computation

  • Computing normals if you want lighting per pixel

  • The inputs for this unit are the interpolated values computed in the previous stage of the pipeline such as vertex positions, colors, normals, etc... In the vertex shader these values are computed for each vertex.


So Lets write a shader

  • We will assume for the time being that we can figure out how to load these guys

  • The first shader we will write will include a vertex shader and a fragment shader

  • It is possible to write only one and let OpenGL fixed function pipeline handle the other.


The Simplest Example Possible

// This is the vertex shader. Processes each vertex that passes thru

void main( void )

{

gl_Position = ftransform();

}

ftransform() is a built-in function that transforms the incoming vertex position in a way that produces exactly the same result as OpenGL's fixed-functionality transform. It should only be used to calculate gl_Position. This is equivalent to multiplying the vertex by the current modelview and projection matrices.
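In other words, the one-line shader above could also be written with the multiplication spelled out (though ftransform() additionally guarantees bit-exact invariance with the fixed pipeline, which the manual form may not):

```glsl
void main( void )
{
    // Same transform as ftransform(), written out explicitly:
    gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;

    // Or with the pre-multiplied built-in matrix:
    // gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```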


Fragment Shader

// This fragment shader sets every fragment to a solid color

// There is no lighting, texturing, etc. in this shader.

void main( void )

{

gl_FragColor = vec4(0.4, 0.0, 0.9, 1.0);

}

Every single fragment is colored the same purple color. No lighting or anything else is done. A sphere is drawn on the right as a demo. This program is entitled ShadeSphere1.


A Simple Texture Example(uses a varying variable)

// Texcoord is used to communicate with the fragment shader using

// interpolation across the primitives.

varying vec2 Texcoord;

// gl_MultiTexCoord0.xy is a built-in attribute variable that is set in the
// OpenGL app via the glTexCoord2f() command

void main( void )

{

gl_Position = ftransform();

Texcoord = gl_MultiTexCoord0.xy;

}


And the Fragment Part(uses a uniform variable)

// The variable baseMap must be defined as a texture

// in the application.

uniform sampler2D baseMap;

// This value is interpolated across the primitive.

varying vec2 Texcoord;

void main( void )

{

gl_FragColor = texture2D( baseMap, Texcoord );

}

The above shader is cubeshader2 on my web site.


First go an get GLee

First, in order to get things working properly, we need to download and use a support library called GLee. GLee (GL Easy Extension library) is a free cross-platform extension loading library for OpenGL. It provides seamless support for OpenGL functions up to version 2.1 and over 360 extensions.

It contains OpenGL 2.1 support, plus 22 new extensions, including NVidia's GeForce 8 series extensions.

Don’t forget to include these in your program.

#pragma comment( lib, "glee.lib")

#include <GL/glee.h>

Website: http://elf-stone.com/glee.php


The Loading process

Create a shader object

Load the shader code into the object

Compile the code for that object

Repeat as needed for the other shaders

Create a program object

Attach all the relevant shader objects to the program object

Ask OpenGL to link the program object


So how do we load these guys?

There are several functions used to create, compile, and load the code onto the graphics card. We will look at each in turn. First we must create a shader object: one for the vertex shader and one for the fragment shader. The following commands each create an empty shader object and return its handle.

GLuint vtxshader,fragshader;

vtxshader=glCreateShader(GL_VERTEX_SHADER);

fragshader=glCreateShader(GL_FRAGMENT_SHADER);


Define the shader sources

Once the shader object has been created, we need to load the shader source into the object. This is done using the function glShaderSource(), which takes an array of strings representing the shader and makes a copy of it to store in the object. Note that shaders are loaded from strings, not files. We will use a utility to read in a file and put it into the form required for this function. See the two sample apps on my web page for the loader code (ShaderLoader.cpp) and examples of how to use it.

glShaderSource(vtxshader,1, &cvs_source ,NULL);

glShaderSource(fragshader,1, &cfs_source ,NULL);


Let's compile the shaders

glCompileShader(vtxshader); // Compile the vertex shader

glGetShaderiv(vtxshader,GL_COMPILE_STATUS,&success);

if(!success)cout<<"Vertex Compiler Error"<<endl;

glCompileShader(fragshader); //compile the fragment shader

glGetShaderiv(fragshader,GL_COMPILE_STATUS,&success);

if(!success)cout<<"Fragment Compiler Error"<<endl;

If either of the above compilations fails, the corresponding shader will not be loaded. This will result in the fixed portion of the pipeline being used instead. You can also retrieve a log containing errors and other information from OpenGL using glGetShaderInfoLog().


Create a program

We now will create a program object and attach the shaders to it. Normally you attach one vertex shader and one fragment shader, although you can actually attach more shaders of either type.

prog=glCreateProgram();

glAttachShader(prog,vtxshader);

glAttachShader(prog,fragshader);


And then link it

glLinkProgram(prog);

glGetProgramiv(prog,GL_LINK_STATUS,&success); // link status belongs to the program, not the shaders

if(!success)cout<<"Program LINK Error"<<endl;

See page 124 in your textbook for possible reasons that the link can fail.

If the link operation is successful, then user-defined uniform variables are initialized to zero, and they will be assigned a location so that they can be set by the relevant OpenGL functions.


And Validate it

glValidateProgram(prog);

glGetProgramiv(prog,GL_VALIDATE_STATUS,&success); // validate status is also a program property

if(!success)cout<<"Program VALIDATE Error"<<endl;

This function checks to see if the program specified by the handle can be run in its current state. This also checks things such as the correct binding of samplers and other states. Use this only during debugging.


Load the program

glUseProgram(prog);

If you run the command glUseProgram(0), i.e., prog is 0, then shader processing is disabled and fixed functionality takes over.

Note: I generally put all the shader code within an #ifdef-#endif preprocessor block so that you can easily turn the inclusion of the shader code on or off. See my examples.


Shader #3: Uniform Variables

// Vertex Shader

uniform vec4 scale;

void main()

{

vec4 pos = gl_Vertex * scale;

gl_Position = gl_ModelViewProjectionMatrix * pos;

}

The above vertex shader reads as input a uniform variable called scale. It then uses this variable to expand or shrink the position of the vertex with respect to the origin. The new position is then converted to clip coordinates using the usual ModelView matrix and Projection matrix transforms.


The Fragment Shader

// Fragment Shader

uniform vec4 color;

void main()

{

gl_FragColor = color;

}

This shader reads in the color uniform variable and sets the outgoing fragment to that color.


Shader #4

This is another simple shader that does the following.

1. It shades by interpolating the diffuse color only.

2. It uses the texture matrix to slide the texture.


Shader#4:Vertex Shader

// This is our directional light vertex shader

varying vec4 diffuse; //interpolate the diffuse color

void main( void )

{

gl_Position = ftransform();

gl_TexCoord[0]=gl_TextureMatrix[0]*gl_MultiTexCoord0;

vec3 normal = gl_Normal;// Normals are expected to be normalized

vec3 lightVector=normalize(gl_LightSource[0].position.xyz);

float nxDir = max(0.0, dot(normal, lightVector));

diffuse = gl_LightSource[0].diffuse*nxDir;

}


Shader#4:Fragment Shader

// The variable texture must be defined as a texture

// in the application.

uniform sampler2D texture;

varying vec4 diffuse; // this is what gets interpolated

void main( void )

{

vec4 texColor = texture2D(texture, gl_TexCoord[0].st);

gl_FragColor = (gl_LightSource[0].ambient + diffuse)*texColor;

}


Shader#4:OpenGL Feed

The command

uniform sampler2D texture;

needs to be fed from the client side using the usual texture setup method. The names must match.

glGenTextures(1,&texture);

glBindTexture(GL_TEXTURE_2D, texture);


Simple Toon Shading

This is just a shading trick that allows us to color objects in discrete sections.


The Vertex Toon Shader

varying vec3 normal;

varying float edge;

uniform vec3 Camera_Position; // aka eye position

void main( void )

{

normal = gl_NormalMatrix*gl_Normal;

gl_Position = gl_ModelViewMatrix * gl_Vertex;

vec3 M = gl_Position.xyz;

vec3 n = normalize(normal);

vec3 cameraVector = normalize(Camera_Position - M);

edge = max(dot(cameraVector,n),0.0);

gl_Position=ftransform();

}


Fragment Shader

Note that the separation of colors is a hard-coded function of the intensity.

The intensity is a value that is highly dependent on the position of the light source. In addition, the light position is not normalized in this case!

Here we just artificially colored the object based on the angle between the normal and the light vector.

varying vec3 normal;

varying float edge;

void main( void )

{

float intensity; vec4 color;

vec3 n=normalize(normal);

intensity = dot(vec3(gl_LightSource[0].position),n);

if(edge<.4)

color = vec4(0.0,0.0,0.0,1.0);

else if (intensity >= .95)

color = vec4(0.6,0.6,1.0,1.0);

else if (intensity > .85)

color = vec4(0.4,0.4,0.8,1.0);

else if (intensity > .7)

color = vec4(0.3,0.3,0.6,1.0);

else

color = vec4(0.2,0.2,0.4,1.0);

gl_FragColor = color;

}


Bump Mapping


Conclusion

  • Best Choice for coding High Level Graphics

  • More graphics effects available

  • CPU available for other tasks

  • Language in constant evolution

  • New possibilities such as geometry shaders

