Illumination and Shading

Lights

Diffuse and Specular Illumination

BasicEffect

Setting and Animating Lights

Shaders and HLSL

Lambertian Illumination

Pixel Shaders

Textures

The XNA graphics pipeline

The vertex data coming in: vertex position, vertex normal, texture coordinate, vertex color, and sometimes other data.

The Effect's Vertex Shader: multiply by the World Matrix, multiply by the View Matrix, multiply by the Projection Matrix, and compute the vertex color.

Then: homogenize and clip.

The Effect's Pixel Shader: shade the pixels.

Some Terms

Illumination – What gives a surface its color. This is what our effect will compute.

Material – Description of the surface. Will include one or more colors. These are parameters set in the effect.

Reflection – The reflection of light from a surface. This is what we will simulate to compute the illumination.

Shading – Setting the pixels to the illumination

We’ll often use reflection and illumination just about interchangeably.

My First Effect: FirstEffect.fx

float4x4 World;

float4x4 View;

float4x4 Projection;

struct VertexShaderInput

{

float4 Position : POSITION0;

};

struct VertexShaderOutput

{

float4 Position : POSITION0;

};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)

{

VertexShaderOutput output;

float4 worldPosition = mul(input.Position, World);

float4 viewPosition = mul(worldPosition, View);

output.Position = mul(viewPosition, Projection);

return output;

}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0

{

return float4(1, 0, 0, 1);

}

technique FirstShader

{

pass Pass1

{

VertexShader = compile vs_1_1 VertexShaderFunction();

PixelShader = compile ps_1_1 PixelShaderFunction();

}

}

This is the default effect Visual Studio will create when you do New/Effect. It does nothing except set pixels to red.

What this does

Anywhere there is something to draw, it draws red.

There is always something to draw except where we look out through the window, so nearly the entire scene turns red.

How to use this…

firstEffect = Content.Load<Effect>("FirstEffect");

foreach (ModelMesh mesh in Bedroom.Meshes)

{

foreach (ModelMeshPart part in mesh.MeshParts)

{

part.Effect = firstEffect;

}

}

Loading/Installing: the code above goes in LoadContent().

Drawing:

protected void DrawModel(GraphicsDevice graphics, Camera camera, Model model, Matrix world)

{

Matrix[] transforms = new Matrix[model.Bones.Count];

model.CopyAbsoluteBoneTransformsTo(transforms);

foreach (ModelMesh mesh in model.Meshes)

{

foreach (Effect effect in mesh.Effects)

{

effect.Parameters["World"].SetValue(transforms[mesh.ParentBone.Index] * world);

effect.Parameters["View"].SetValue(camera.View);

effect.Parameters["Projection"].SetValue(camera.Projection);

}

mesh.Draw();

}

}

What does what…

effect.Parameters["World"].SetValue(transforms[mesh.ParentBone.Index] * world);

effect.Parameters["View"].SetValue(camera.View);

effect.Parameters["Projection"].SetValue(camera.Projection);

Sets

float4x4 World;

float4x4 View;

float4x4 Projection;

Setting the effect parameter sets the equivalent value inside the effect.
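For example, once the effect declares float3 DiffuseColor (as in the next slides), the game code could set it with a line like the following; the Vector3 value here is just an illustrative choice:

effect.Parameters["DiffuseColor"].SetValue(new Vector3(1, 0, 0));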

The process

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)

{

VertexShaderOutput output;

float4 worldPosition = mul(input.Position, World);

float4 viewPosition = mul(worldPosition, View);

output.Position = mul(viewPosition, Projection);

return output;

}

The Vertex Shader runs first, once per vertex.

It converts object vertex coordinates into projected coordinates.

The Pixel Shader runs next, once per pixel.

It computes the actual pixel color.

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0

{

return float4(1, 0, 0, 1);

}


Adding a Material Property

I added this declaration to the effect file,

and changed the pixel shader to this:

// This is the surface diffuse color

float3 DiffuseColor = float3(0, 0, 0);

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0

{

return float4(DiffuseColor, 1);

}

Setting this Effect

private void SetDiffuseColorEffect()

{

foreach (ModelMesh mesh in Bedroom.Meshes)

{

foreach (ModelMeshPart part in mesh.MeshParts)

{

BasicEffect bEffect = part.Effect as BasicEffect;

part.Effect = diffuseColorEffect.Clone(part.Effect.GraphicsDevice);

part.Effect.Parameters["DiffuseColor"].SetValue(bEffect.DiffuseColor);

}

}

}

The default content processing supplied a BasicEffect object with the color set in it. We’re creating our own effect and setting the color to match what was loaded.

Note the “Clone”. This makes a copy of the effect. Since we’re putting a material property into it, we need a unique copy here.

What this does

Each pixel is simply set to the material diffuse color. No lighting is included, yet.

Now let’s do real lighting

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)

{

VertexShaderOutput output;

float4 worldPosition = mul(input.Position, World);

float4 viewPosition = mul(worldPosition, View);

output.Position = mul(viewPosition, Projection);

float3 color = LightAmbient * DiffuseColor;

output.Color = float4(color, 1);

return output;

}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0

{

return input.Color;

}

technique FirstShader

{

pass Pass1

{

VertexShader = compile vs_1_1 VertexShaderFunction();

PixelShader = compile ps_1_1 PixelShaderFunction();

}

}

float4x4 World;

float4x4 View;

float4x4 Projection;

// This is the surface diffuse color

float3 DiffuseColor = float3(0, 0, 0);

// Light definition

float3 LightAmbient = float3(0.07, 0.1, 0.1);

struct VertexShaderInput

{

float4 Position : POSITION0;

};

struct VertexShaderOutput

{

float4 Position : POSITION0;

float4 Color : COLOR0;

};

I’ve moved our computation to the vertex shader (why?). Ambient illumination is simply the light ambient color times the surface diffuse color.
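As a quick check with the values above: LightAmbient = (0.07, 0.1, 0.1), so a hypothetical pure white surface (DiffuseColor = (1, 1, 1)) gets the vertex color (0.07, 0.1, 0.1), a very dim blue-green; the ambient term is just a componentwise product of the two colors. Computing it in the vertex shader is cheaper because there are far fewer vertices than covered pixels, and for a constant color the interpolated result is identical.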

Some Things

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)

{

VertexShaderOutput output;

float4 worldPosition = mul(input.Position, World);

float4 viewPosition = mul(worldPosition, View);

output.Position = mul(viewPosition, Projection);

float3 color = LightAmbient * DiffuseColor;

output.Color = float4(color, 1);

return output;

}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0

{

return input.Color;

}

technique FirstShader

{

pass Pass1

{

VertexShader = compile vs_1_1 VertexShaderFunction();

PixelShader = compile ps_1_1 PixelShaderFunction();

}

}

float4x4 World;

float4x4 View;

float4x4 Projection;

// This is the surface diffuse color

float3 DiffuseColor = float3(0, 0, 0);

// Light definition

float3 LightAmbient = float3(0.07, 0.1, 0.1);

struct VertexShaderInput

{

float4 Position : POSITION0;

};

struct VertexShaderOutput

{

float4 Position : POSITION0;

float4 Color : COLOR0;

};

Note that the VertexShaderOutput now has a color.


What this looks like

Inset has brightness artificially increased in Photoshop

Now for Diffuse Illumination
  • What we need to know
  • Light location in space
  • Light color

float3 Light1Location = float3(5, 221, -19);

float3 Light1Color = float3(1, 1, 1);

HLSL code

// Light definition

float3 LightAmbient = float3(0.07, 0.1, 0.1);

float3 Light1Location = float3(5, 221, -19);

float3 Light1Color = float3(1, 1, 1);

struct VertexShaderInput

{

float4 Position : POSITION0;

float3 Normal : NORMAL0;

};

struct VertexShaderOutput

{

float4 Position : POSITION0;

float4 Color : COLOR0;

};

We need to know the normal
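The diffuse (Lambertian) term the vertex shader below adds is max(dot(L, N), 0) * LightColor, where N is the unit surface normal and L is the unit vector from the surface point toward the light; the normal is what lets us measure how directly the surface faces the light.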

Vertex Shader


VertexShaderOutput VertexShaderFunction(VertexShaderInput input)

{

VertexShaderOutput output;

// We need the position and normal in world coordinates

float4 position = mul(input.Position, World);

float3 normal = normalize(mul(input.Normal, World));

// Ambient lighting hitting the location

float3 color = LightAmbient;

// Compute direction to the light

float3 Light1Direction = normalize(Light1Location - position);

// Add contribution due to this light

color += saturate(dot(Light1Direction, normal)) * Light1Color;

// Multiply by material color

color *= DiffuseColor;

output.Color = float4(color.x, color.y, color.z, 1);

float4 viewPosition = mul(position, View);

output.Position = mul(viewPosition, Projection);

return output;

}

[Diagram: Light1Direction is the normalized vector Light1Location - position, drawn from the vertex position together with the surface normal.]

What this looks like

What is missing?

Texture Mapping

Mapping a picture onto the surface of a triangle.

HLSL – Adding a texture variable

// The texture we use

texture Texture;

Not everything in our scene is texture mapped. If it is texture mapped, we use the texture color as the diffuse color. If not, we use the set diffuse color. We have two different ways we will compute the color.

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0

{

return input.Color * float4(DiffuseColor, 1);

}

I’ll move the multiplication by the Diffuse Color to the pixel shader.

Slightly modified version, now.

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)

{

VertexShaderOutput output;

// We need the position and normal in world coordinates

float4 position = mul(input.Position, World);

float3 normal = normalize(mul(input.Normal, World));

// Ambient lighting hitting the location

float3 color = LightAmbient;

// Compute direction to the light

float3 Light1Direction = normalize(Light1Location - position);

// Add contribution due to this light

color += saturate(dot(Light1Direction, normal)) * Light1Color;

// Multiply by material color

color *= DiffuseColor;

output.Color = float4(color, 1);

float4 viewPosition = mul(position, View);

output.Position = mul(viewPosition, Projection);

return output;

}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0

{

return input.Color * float4(DiffuseColor, 1);

}

We need something called a “sampler”

sampler Sampler = sampler_state

{

Texture = <Texture>;

MinFilter = LINEAR;

MagFilter = LINEAR;

AddressU = Wrap;

AddressV = Wrap;

AddressW = Wrap;

};

This just parameterizes how we get pixels from our texture. Wrap means tiling will be used. LINEAR means linear interpolation. We’ll do some other options later.
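For example, with AddressU = Wrap, sampling at u = 1.25 reads the same texel column as u = 0.25, which is what lets a small texture tile across a large surface; LINEAR filtering blends the nearest texels rather than snapping to a single one.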

And, we need to have the texture coordinates

struct VertexShaderInput

{

float4 Position : POSITION0;

float3 Normal : NORMAL0;

float2 TexCoord : TEXCOORD0;

};

struct VertexShaderOutput

{

float4 Position : POSITION0;

float4 Color : COLOR0;

float2 TexCoord : TEXCOORD0;

};

And add this to the vertex shader function:

output.TexCoord = input.TexCoord;

Techniques

We have two ways things will be treated. We could create two different effects. But, it’s easier to create one effect with two different “techniques”.
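For reference, the game code selects a technique by name before drawing; the same line appears in the loading code later in these slides:

part.Effect.CurrentTechnique = part.Effect.Techniques["Textured"];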

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0

{

return input.Color * float4(DiffuseColor.x, DiffuseColor.y, DiffuseColor.z, 1);

}

technique NoTexture

{

pass Pass1

{

VertexShader = compile vs_1_1 VertexShaderFunction();

PixelShader = compile ps_1_1 PixelShaderFunction();

}

}

Here is the technique we had before.

Techniques

float4 PixelShaderTexturedFunction(VertexShaderOutput input) : COLOR0

{

return input.Color * tex2D(Sampler, input.TexCoord);

}

technique Textured

{

pass Pass1

{

VertexShader = compile vs_1_1 VertexShaderFunction();

PixelShader = compile ps_1_1 PixelShaderTexturedFunction();

}

}

This is the other technique


float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0

{

return input.Color * float4(DiffuseColor.x, DiffuseColor.y, DiffuseColor.z, 1);

}

float4 PixelShaderTexturedFunction(VertexShaderOutput input) : COLOR0

{

return input.Color * tex2D(Sampler, input.TexCoord);

}

technique NoTexture

{

pass Pass1

{

VertexShader = compile vs_1_1 VertexShaderFunction();

PixelShader = compile ps_1_1 PixelShaderFunction();

}

}

technique Textured

{

pass Pass1

{

VertexShader = compile vs_1_1 VertexShaderFunction();

PixelShader = compile ps_1_1 PixelShaderTexturedFunction();

}

}

All in one place so you can see it.

Loading This Effect

private void SetLambertianTextureEffect()

{

foreach (ModelMesh mesh in Bedroom.Meshes)

{

foreach (ModelMeshPart part in mesh.MeshParts)

{

part.Effect = effectsOrig[part];

BasicEffect bEffect = part.Effect as BasicEffect;

part.Effect = lambertianTextureEffect.Clone(part.Effect.GraphicsDevice);

part.Effect.Parameters["DiffuseColor"].SetValue(bEffect.DiffuseColor);

part.Effect.Parameters["Texture"].SetValue(bEffect.Texture);

if (bEffect.Texture != null)

{

part.Effect.CurrentTechnique = part.Effect.Techniques["Textured"];

}

else

{

part.Effect.CurrentTechnique = part.Effect.Techniques["NoTexture"];

}

}

}

}

The complete effect – Variables and types

float4x4 World;

float4x4 View;

float4x4 Projection;

// This is the surface diffuse color

float3 DiffuseColor = float3(0, 0, 0);

texture Texture;

// Light definition

float3 LightAmbient = float3(0.07, 0.1, 0.1);

float3 Light1Location = float3(5, 221, -19);

float3 Light1Color = float3(1, 1, 1);

struct VertexShaderInput

{

float4 Position : POSITION0;

float3 Normal : NORMAL0;

float2 TexCoord : TEXCOORD0;

};

struct VertexShaderOutput

{

float4 Position : POSITION0;

float4 Color : COLOR0;

float2 TexCoord : TEXCOORD0;

};

sampler Sampler = sampler_state

{

Texture = <Texture>;

MinFilter = LINEAR;

MagFilter = LINEAR;

AddressU = Wrap;

AddressV = Wrap;

AddressW = Wrap;

};

The complete effect – Vertex shader

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)

{

VertexShaderOutput output;

output.TexCoord = input.TexCoord;

// We need the position and normal in world coordinates

float4 position = mul(input.Position, World);

float3 normal = normalize(mul(input.Normal, World));

// Ambient lighting hitting the location

float3 color = LightAmbient;

// Compute direction to the light

float3 Light1Direction = normalize(Light1Location - position);

// Add contribution due to this light

color += max(dot(Light1Direction, normal), 0) * Light1Color;

output.Color = float4(color.x, color.y, color.z, 1);

float4 viewPosition = mul(position, View);

output.Position = mul(viewPosition, Projection);

return output;

}

The complete effect – Pixel Shader and Techniques

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0

{

return input.Color * float4(DiffuseColor.x, DiffuseColor.y, DiffuseColor.z, 1);

}

float4 PixelShaderTexturedFunction(VertexShaderOutput input) : COLOR0

{

return input.Color * tex2D(Sampler, input.TexCoord);

}

technique NoTexture

{

pass Pass1

{

VertexShader = compile vs_1_1 VertexShaderFunction();

PixelShader = compile ps_1_1 PixelShaderFunction();

}

}

technique Textured

{

pass Pass1

{

VertexShader = compile vs_1_1 VertexShaderFunction();

PixelShader = compile ps_1_1 PixelShaderTexturedFunction();

}

}

Specular Illumination

Specular Reflections

Reflections from the surface of the material

Generally a different color than the underlying material diffuse color

More Specular Examples

Specular Reflection makes things appear shiny

The left Dalek has no specular illumination, the right does

Previous lighting components

Ambient: Ia = Ca * Md (a componentwise product of the two colors)

Ia = Ambient part of illumination

Md = Diffuse material color

Ca = Light ambient color

In the shaders: float3 color = LightAmbient; followed by return input.Color * float4(DiffuseColor, 1);

Diffuse Illumination: Id = Md * Σi Ci * max(Li · N, 0)

Id = Diffuse part of illumination

Md = Diffuse material color

Ci = Color of light i

Li = Vector pointing at light i

N = Surface normal

In the shaders: color += max(dot(Light1Direction, normal), 0) * Light1Color; followed by the same return input.Color * float4(DiffuseColor, 1);

Specular Component

Is = Ms * Σi Ci * max(N · H, 0)^n, which the HLSL below computes

Is = Specular part of illumination

Ms = Specular material color

Ci = Color of light i

N = Surface normal

H = "Half vector", the normalized sum of the light direction and the view direction

n = Shininess or SpecularPower

Basic HLSL code

// Add specular contribution due to this light

float3 V = normalize(Eye - position);

float3 H = normalize(Light1Direction + V);

scolor += pow(saturate(dot(normal, H)), Shininess) * Light1Color;

Specular Reflection Highlight Coefficient
  • The term n is called the specular reflection highlight coefficient, "Shininess", or "Specular Power".
  • This affects how large the specular highlight is. A larger value makes the highlight smaller and sharper (see the worked numbers after this list).
    • This is the "shininess" factor in OpenGL, SpecularPower in XNA
    • Matte surfaces have a smaller n.
    • Very shiny surfaces have a large n.
    • A perfect mirror would have infinite n.
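To see why a larger n sharpens the highlight, take a point where dot(N, H) = 0.9, slightly off the mirror direction: with n = 1 the specular factor is 0.9, with n = 10 it is about 0.35, and with n = 100 it is about 0.00003. The larger exponent drives everything except near-perfect alignment toward zero, so only a small, bright spot survives.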
Shininess Examples

[Images of the specular highlight for n = 1, 10, 35, 65, 100.]


// This is the surface diffuse color

float3 DiffuseColor = float3(0, 0, 0);

float3 SpecularColor = float3(0, 0, 0);

float Shininess = 0;

texture Texture;

// Light definition

float3 LightAmbient = float3(0.07, 0.1, 0.1);

float3 Light1Location = float3(5, 221, -19);

float3 Light1Color = float3(1, 1, 1);

float3 Eye = float3(0, 0, 0);

struct VertexShaderInput

{

float4 Position : POSITION0;

float3 Normal : NORMAL0;

float2 TexCoord : TEXCOORD0;

};

struct VertexShaderOutput

{

float4 Position : POSITION0;

float4 Color : COLOR0;

float4 SColor : COLOR1;

float2 TexCoord : TEXCOORD0;

};

HLSL

New shader variables

Vertex Shader

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)

{

VertexShaderOutput output;

output.TexCoord = input.TexCoord;

float4 position = mul(input.Position, World);

float3 normal = normalize(mul(input.Normal, World));

// Ambient lighting hitting the location

float3 color = LightAmbient;

float3 scolor = 0;

// Compute direction to the light

float3 Light1Direction = normalize(Light1Location - position);

// Add diffuse contribution due to this light

color += max(dot(Light1Direction, normal), 0) * Light1Color;

// Add specular contribution due to this light

float3 V = normalize(Eye - position);

float3 H = normalize(Light1Direction + V);

scolor += pow(saturate(dot(normal, H)), Shininess) * Light1Color;

output.Color = float4(color, 1);

output.SColor = float4(scolor, 1);

float4 viewPosition = mul(position, View);

output.Position = mul(viewPosition, Projection);

return output;

}

Note that we send the specular illumination separate from the diffuse illumination. Any ideas why?

Pixel Shaders

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0

{

return input.Color * float4(DiffuseColor, 1) + input.SColor * float4(SpecularColor, 1);

}

float4 PixelShaderTexturedFunction(VertexShaderOutput input) : COLOR0

{

return input.Color * tex2D(Sampler, input.TexCoord) + input.SColor * float4(SpecularColor, 1);

}

Important: We only multiply the texture color by the diffuse illumination, not the specular illumination. Sometimes a separate texture map is used for the specular color.

Those extra parameters

Setting up the effect:

BasicEffect bEffect = part.Effect as BasicEffect;

part.Effect = diffuseSpecularEffect.Clone(part.Effect.GraphicsDevice);

part.Effect.Parameters["DiffuseColor"].SetValue(bEffect.DiffuseColor);

part.Effect.Parameters["SpecularColor"].SetValue(bEffect.SpecularColor);

part.Effect.Parameters["Shininess"].SetValue(bEffect.SpecularPower);

When you draw:

effect.Parameters["Eye"].SetValue(camera.Eye);

Phong Shading or Per-Pixel Lighting

All of the methods we have used computed the lighting at the vertex and interpolated the color between the vertices.

Phong Shading interpolates the normal over the surface and computes the color at every pixel.

It will always look better and is the only way to do some effects, like spotlights, but it can be costly!

HLSL

struct VertexShaderOutput

{

float4 Position : POSITION0;

float2 TexCoord : TEXCOORD0;

float4 WorldPosition : TEXCOORD1;

float3 Normal : TEXCOORD2;

};

We put the world position and the normal into “texture coordinates” because these are interpolated for the pixel shader.

Vertex Shader

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)

{

VertexShaderOutput output;

// We need the position and normal in world coordinates

output.TexCoord = input.TexCoord;

output.WorldPosition = mul(input.Position, World);

output.Normal = normalize(mul(input.Normal, World));

output.Position = mul(mul(output.WorldPosition, View), Projection);

return output;

}

This does surprisingly little!

Pixel Shader (no texture version)

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0

{

float3 normal = input.Normal;

float4 position = input.WorldPosition;

// Ambient lighting hitting the location

float3 color = LightAmbient;

float3 scolor = 0;

// Compute direction to the light

float3 Light1Direction = normalize(Light1Location - position);

// Add diffuse contribution due to this light

color += max(dot(Light1Direction, normal), 0) * Light1Color;

// Add specular contribution due to this light

float3 V = normalize(Eye - position);

float3 H = normalize(Light1Direction + V);

scolor += pow(saturate(dot(normal, H)), Shininess) * Light1Color;

return float4(color * DiffuseColor + scolor * SpecularColor, 1);

}

Identical to Vertex Shader Version, just moved!

Shader Models

technique NoTexture

{

pass Pass1

{

VertexShader = compile vs_2_0 VertexShaderFunction();

PixelShader = compile ps_2_0 PixelShaderFunction();

}

}

A “shader model” specifies the capabilities we require. 2.0 is required to support using the texture coordinates this way. You also needed 2.0 for the many matrices in the skinned model.

Other extremes of efficiency

Graphics systems such as Maya and 3DS Max can precompute the color for every vertex in advance and save it with the vertex data. We call this Vertex Lighting.

Vertex Lighting

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)

{

VertexShaderOutput output;

output.Position = mul(input.Position, Matrix);

output.Color = input.Color;

output.TexCoord = input.TexCoord;

return output;

}

float4 PixelShaderTexture(PixelShaderInput input) : COLOR0

{

float4 color = input.Color;

float4 texColor = tex2D(Sampler, input.TexCoord);

color.rgba *= texColor;

return color;

}

float4 PixelShader(PixelShaderInput input) : COLOR0

{

return input.Color;

}

The Ultimate Extreme

float4 PixelShaderTexture(PixelShaderInput input) : COLOR0

{

float4 color = input.Color;

float4 texColor = tex2D(Sampler, input.TexCoord);

color.rgba *= texColor;

return color;

}

float4 PixelShader(PixelShaderInput input) : COLOR0

{

return input.Color;

}

It’s possible to avoid even this bit of work and the need to keep the colors around. Instead, the graphics system creates a version of the texture with the lighting pre-multiplied into it. We call this Baked Textures.

Baked Textures Example

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)

{

VertexShaderOutput output;

output.Position = mul(input.Position, Matrix);

output.TexCoord = input.TexCoord;

return output;

}

float4 PixelShader (PixelShaderInput input) : COLOR0

{

return tex2D(Sampler, input.TexCoord);

}

Baked Lighting

We refer to vertex lighting or baked textures as “baked lighting”, meaning the lighting is precomputed and built into the model as vertex colors or directly into the textures. Almost all games use baked lighting extensively.

Advantages of baked lighting

The lighting model can be much more complex, including as many lights as we want, complex light falloffs, radiosity, ray tracing, shadows, anything we like!

Baked lighting is as fast as possible.

Disadvantages of baked lighting

Only works for things that don’t move relative to lights.

Diffuse and ambient illumination only, no specular illumination (sometimes we bake the diffuse and ambient, then add the specular at runtime, though).

Can’t change the illumination at runtime (no sunsets, etc.)

Incandescence

Incandescence is simply an added term for the color that represents light generated by the surface.

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0

{

return input.Color * float4(DiffuseColor, 1) + float4(Incandescence, 1);

}


Render to Texture

Instead of rendering to the screen, we can render to a texture image.

Why would we want to?

private RenderTarget2D renderTarget = null;

if (renderTarget == null)

{

renderTarget = new RenderTarget2D(graphics, 512, 512, 1, graphics.DisplayMode.Format);

}

graphics.SetRenderTarget(0, renderTarget);

DrawActual(gameTime);

graphics.SetRenderTarget(0, null);

Texture2D rendering = renderTarget.GetTexture();

How about something in this scene?

How would you do something like this?

Mirrors


Notes

This example is simplified by putting the mirror parallel to the x/y plane (a single value of z).

We’ll assume an upright rectangular mirror.

We’ll assume the mirror is flat.

The first two assumptions can be overcome by just using a bit of math. The last requires a completely different method.

Eye Reflection

Suppose Eye=(-100, 204, 0). What would be the coordinates of the mirrored eye?

[Diagram: the mirror lies in the plane z = -248; the Mirrored Eye is the reflection of the Eye across that plane.]

Mirroring the eye around z=-248

// How far are we from the mirror in the Z direction?

float zDist = camera.Eye.Z - MirrorPlaneZ;

// Create a mirrored camera

Camera mirrorCamera = new Camera();

mirrorCamera.Eye = new Vector3(camera.Eye.X, camera.Eye.Y, MirrorPlaneZ - zDist);

Suppose Eye=(-100, 204, 0). What would be the coordinates of the mirrored eye?

Mirrored eye = (-100, 204, -496)

x and y are the same; only z changes. The eye was zDist from the mirror, and it is now zDist away on the other side.
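Checking the numbers: the mirror plane is at MirrorPlaneZ = -248 and the eye is at z = 0, so zDist = 0 - (-248) = 248 and the mirrored z is MirrorPlaneZ - zDist = -248 - 248 = -496, which gives the mirrored eye (-100, 204, -496).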

Which way will the camera face?

Seems like it would be facing a mirrored direction, right? Sorry, it’s not that simple…

[Diagram: the Mirrored Eye behind the z = -248 plane, with its view direction marked with a "?".]

Which way will the camera face?

A mirrored view direction would not project onto the mirror, but onto a plane at an angle to the mirror!

[Diagram: the Mirrored Eye looking along the mirrored direction toward a tilted plane rather than the mirror at z = -248.]

Remember Frustums?

Make the camera point directly at the wall, then use CreatePerspectiveOffCenter to create a custom camera frustum.

[Diagram: the Mirrored Eye pointing straight at the mirror plane z = -248, with an off-center frustum that just covers the mirror.]

Creating a custom frustum

public static Matrix CreateOrthographicOffCenter(
    float left, float right, float bottom, float top, float zNearPlane, float zFarPlane)

(Matrix.CreatePerspectiveOffCenter, used below, takes the same left, right, bottom, top, near, and far parameters.)

[Diagram: a front view (from the back) showing the left, right, bottom, and top extents of the frustum window, and a top view showing zNear.]

Important: left, right, bottom, and top MUST be in view coordinates, not world coordinates.

Putting it all together

// Determine the camera view direction

Vector3 cameraView = camera.Center - camera.Eye;

// Are we looking towards the mirror?

if (cameraView.Z >= 0)

return;

// How far are we from the mirror in the Z direction?

float zDist = camera.Eye.Z - MirrorPlaneZ;

// Create a mirror camera

Camera mirrorCamera = new Camera();

mirrorCamera.Eye = new Vector3(camera.Eye.X, camera.Eye.Y, MirrorPlaneZ - zDist);

mirrorCamera.Center = new Vector3(camera.Eye.X, camera.Eye.Y, MirrorPlaneZ);

mirrorCamera.Up = new Vector3(0, 1, 0);

The Custom Projection Matrix

// Compute mirror corners in the view coordinate system

Vector3 corner1 = new Vector3(-125, 74, -248);

Vector3 corner2 = new Vector3(-39, 192, -248);

corner1 = Vector3.Transform(corner1, mirrorCamera.View);

corner2 = Vector3.Transform(corner2, mirrorCamera.View);

// Create a projection matrix

Matrix projection = Matrix.CreatePerspectiveOffCenter(corner2.X, corner1.X,

corner1.Y, corner2.Y, zDist, 10000);

[Diagram: a front view (from the back) of the mirror's projection window, with corner1 at the bottom and corner2 at the top.]

Rendering and using it

if (renderTarget == null)

{

renderTarget = new RenderTarget2D(graphics, 512, 512, 1, graphics.DisplayMode.Format);

}

graphics.SetRenderTarget(0, renderTarget);

graphics.Clear(Color.Black);

graphics.RenderState.DepthBufferEnable = true;

graphics.RenderState.DepthBufferWriteEnable = true;

DrawModels(graphics, mirrorCamera, mirrorCamera.View, projection, gameTime);

graphics.SetRenderTarget(0, null);

Texture2D rendering = renderTarget.GetTexture();

foreach (ModelMeshPart part in mirrorMesh.MeshParts)

{

part.Effect.Parameters["Texture"].SetValue(rendering);

part.Effect.CurrentTechnique = part.Effect.Techniques["Textured"];

}

Other Notes

The image will be as viewed from the back of the mirror. You need to “mirror” it to see it from the front of the mirror.

Just ask your artist to create u,v coordinates that will mirror the texture.
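One simple way to do this (an assumption about how the content is set up, not something specified in the deck) is to flip the u coordinate across the mirror quad, i.e. use u' = 1 - u, which mirrors the applied texture horizontally.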

Getting Really Fancy…

Any Ideas?

Javier Cantón Ferrero: http://www.codeplex.com/XNACommunity/Wiki/View.aspx?title=Reflection

Shadows

Shadows are a nice effect; plus, they provide an important depth cue.

Are you sure this chair is sitting on the floor?

Shadow Map

Image from light 0 viewpoint

Depth map from light 0 viewpoint

The map tells how far the “lit” point is from the light.

Shadow Map Depth Image

Hard to see, but all we are doing is saving the depth for each pixel.

Creating a depth map
  • We render the scene from the viewpoint of the light into a depth texture.
  • Create buffers to write into.
  • Set up to write to the buffers (writing to a texture).
  • Save off the buffers for later use.

private RenderTarget2D shadowRenderTarget ;

private DepthStencilBuffer shadowDepthBuffer;

private Texture2D shadowMap;

Creating a depth texture

SurfaceFormat shadowMapFormat = SurfaceFormat.Unknown;

// Check to see if the device supports a 32 or 16 bit floating point render target

if (GraphicsAdapter.DefaultAdapter.CheckDeviceFormat(DeviceType.Hardware,

GraphicsAdapter.DefaultAdapter.CurrentDisplayMode.Format,

TextureUsage.Linear, QueryUsages.None,

ResourceType.RenderTarget, SurfaceFormat.Single) == true)

{

shadowMapFormat = SurfaceFormat.Single;

}

else if (GraphicsAdapter.DefaultAdapter.CheckDeviceFormat(

DeviceType.Hardware,

GraphicsAdapter.DefaultAdapter.CurrentDisplayMode.Format,

TextureUsage.Linear, QueryUsages.None,

ResourceType.RenderTarget, SurfaceFormat.HalfSingle)

== true)

{

shadowMapFormat = SurfaceFormat.HalfSingle;

}

// Create new floating point render target

shadowRenderTarget = new RenderTarget2D(graphics,

shadowMapWidthHeight,

shadowMapWidthHeight,

1, shadowMapFormat);

// Create depth buffer to use when rendering to the shadow map

shadowDepthBuffer = new DepthStencilBuffer(graphics,

shadowMapWidthHeight,

shadowMapWidthHeight,

DepthFormat.Depth24);

You usually do this in LoadContent or Activate.

Creating the Shadow Map: Setting up to draw

/// <summary>

/// Renders the scene to the floating point render target then

/// sets the texture for use when drawing the scene.

/// </summary>

void CreateShadowMap(GraphicsDevice graphics, Camera camera)

{

// We need a view * projection matrix for the light

Matrix lightViewProjection = CreateLightViewProjectionMatrix(camera);

// Set our render target to our floating point render target

graphics.SetRenderTarget(0, shadowRenderTarget);

// Save the current stencil buffer

DepthStencilBuffer oldDepthBuffer = graphics.DepthStencilBuffer;

// Set the graphics device to use the shadow depth stencil buffer

graphics.DepthStencilBuffer = shadowDepthBuffer;

// Clear the render target to white or all 1's

// We set the clear to white since that represents the

// furthest the object could be away

graphics.Clear(Color.White);

graphics.RenderState.DepthBufferEnable = true;

graphics.RenderState.DepthBufferWriteEnable = true;

Creating the Shadow Map: Drawing

foreach (ModelMesh mesh in Bedroom.Meshes)

{

foreach (Effect effect in mesh.Effects)

{

effect.CurrentTechnique = effect.Techniques["CreateShadowMap"];

effect.Parameters["LightViewProj"].SetValue(lightViewProjection);

}

}

// Draw any occluders

DrawModel(graphics, camera, Bedroom, Matrix.Identity);

// Set render target back to the back buffer

graphics.SetRenderTarget(0, null);

// Reset the depth buffer

graphics.DepthStencilBuffer = oldDepthBuffer;

// Return the shadow map as a texture

shadowMap = shadowRenderTarget.GetTexture();

Set technique and light matrix.

Obtaining the result.

Setting the effect for regular drawing

foreach (ModelMesh mesh in Bedroom.Meshes)

{

foreach (Effect effect in mesh.Effects)

{

Texture2D texture = effect.Parameters["Texture"].GetValueTexture2D();

if (texture != null)

{

effect.CurrentTechnique = effect.Techniques["Textured"];

}

else

{

effect.CurrentTechnique = effect.Techniques["NoTexture"];

}

effect.Parameters["ShadowMap"].SetValue(shadowMap);

}

}

Shader Additions

struct ShadowVertexShaderOutput

{

float4 Position : POSITION0;

float4 WorldPosition : TEXCOORD1;

};

// Transforms the model into light space and renders out the depth of the object

ShadowVertexShaderOutput CreateShadowMap_VertexShader(float4 Position : POSITION)

{

ShadowVertexShaderOutput Out;

Out.WorldPosition = mul(Position, World);

Out.Position = mul(Out.WorldPosition, LightViewProj);

return Out;

}

// Saves the depth value out to the 32bit floating point texture

float4 CreateShadowMap_PixelShader(ShadowVertexShaderOutput input) : COLOR

{

float4 pos = mul(input.WorldPosition, LightViewProj);

return float4(pos.z / pos.w, 0, 0, 1);

}

// Technique for creating the shadow map

technique CreateShadowMap

{

pass Pass1

{

VertexShader = compile vs_2_0 CreateShadowMap_VertexShader();

PixelShader = compile ps_2_0 CreateShadowMap_PixelShader();

}

}

How do we use this?

[Diagram: along the light's ray toward point a the depth map records 10.0, but point a is 13.4 from the light; point b is 14.5 from the light and nothing blocks it. The Eye sees both points.]

Point a is shadowed, point b is not.

Point a is 13.4 from the light, but the depth map says the visible point is 10.0 away, so point a is not seen by the light. Point b is 14.5 from the light and the depth buffer says 14.5, so the point is lit.

The shadow algorithm
  • Determine the location of the vertex on the shadow map
  • Let d1 be the depth stored in the shadow map at that location
  • Let d2 be the depth of the vertex relative to the light
  • If d1 ≥ d2, the light hits the object

float4 PixelShaderTexturedFunction(VertexShaderOutput input) : COLOR0

{

float3 normal = input.Normal;

float4 position = input.WorldPosition;

// Ambient lighting hitting the location

float3 color = LightAmbient;

float3 scolor = 0;

// Find the position of this pixel in the light space

float4 lightingPosition = mul(input.WorldPosition, LightViewProj);

// Find position in the shadow map

float2 ShadowTexCoord = 0.5 * lightingPosition.xy / lightingPosition.w + float2(0.5, 0.5);

ShadowTexCoord.y = 1.0 - ShadowTexCoord.y;

// Get the depth stored in the shadow map

float shadowdepth = tex2D(ShadowMapSampler, ShadowTexCoord).r;

// Calculate the pixel depth; the small 0.001 offset biases the comparison to avoid self-shadowing artifacts

float ourdepth = (lightingPosition.z / lightingPosition.w) - 0.001f;

if(shadowdepth >= ourdepth)

{

// Compute direction to the light

float3 Light1Direction = normalize(Light1Location - position);

// Add diffuse contribution due to this light

color += max(dot(Light1Direction, normal), 0) * Light1Color;

// Add specular contribution due to this light

float3 V = normalize(Eye - position);

float3 H = normalize(Light1Direction + V);

scolor += pow(saturate(dot(normal, H)), Shininess) * Light1Color;

}

return float4(color * tex2D(Sampler, input.TexCoord) + scolor * SpecularColor, 1);

}

Pixel shader – only works for per-pixel lighting

Only piece left…

How do we figure out where to point the camera when taking a picture from the light’s viewpoint?

Any ideas?

We want to enclose the visible camera frustum

[Diagram: the Light positioned so that its frustum encloses the camera's view frustum.]

There are 8 points in a camera frustum. Be sure all 8 are visible.

Creating the view matrix

Matrix CreateLightViewProjectionMatrix(Camera camera)

{

// Create a bounding frustum for our camera

BoundingFrustum frustum = new BoundingFrustum(camera.View * camera.Projection);

Vector3[] frustumCorners = frustum.GetCorners();

Vector3 frustumCenter = frustumCorners[0];

for (int i = 1; i < frustumCorners.Length; i++)

{

frustumCenter += frustumCorners[i];

}

frustumCenter /= frustumCorners.Length;

Vector3 Light1Location = new Vector3(5, 221, -19);

// Create the view and projection matrix for the light

Matrix lightView = Matrix.CreateLookAt(Light1Location, frustumCenter, new Vector3(0, 1, 0));

This just means the center of the shadow map will be the center of the view frustum.

Creating the projection matrix

float zNear = 50;

// Determine maximum extents in each direction

float left = 0, right = 0, top = 0, bottom = 0;

foreach (Vector3 corner in frustumCorners)

{

// Transform to view coordinate system

Vector3 v = Vector3.Transform(corner, lightView);

// Project to the near clipping plane

v.X = -v.X / v.Z * zNear;

v.Y = -v.Y / v.Z * zNear;

if (v.X < left)

left = v.X;

if (v.X > right)

right = v.X;

if (v.Y < bottom)

bottom = v.Y;

if (v.Y > top)

top = v.Y;

}

Matrix lightProjection = Matrix.CreatePerspectiveOffCenter(left, right, bottom, top, zNear, 1000);

return lightView * lightProjection;

}

This projects the frustum points onto a view plane, then ensures the frustum for the light is big enough.

More Touching…

How can we determine if we have clicked on something?

Especially, how could we determine if we have clicked on Victoria’s hand?

Think about this one, particularly the second point!

Item Buffers

Have your artist create extra meshes with a fixed color. These don’t show normally. But, if you render these to a texture, you can tell what is where.

An image where the pixels tell what the object is (often by color) rather than the color of the image is called an item buffer.

Item Buffers

Rendered image

Item buffer

Create a shader that just sets the pixel to the diffuse color; no lighting or anything. Then the diffuse color tells you what is at the pixel.
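A minimal sketch of such a pixel shader, assuming the same DiffuseColor parameter and VertexShaderOutput structure used in the effects above (the function name is made up for illustration):

float4 ItemBufferPixelShader(VertexShaderOutput input) : COLOR0
{
    // No lighting at all: output the flat color that identifies this mesh
    return float4(DiffuseColor, 1);
}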

Optimizations

Often I’m only interested in what’s at one location

Where the mouse is right now.

Render a small image (4x4 for example) only around the mouse coordinates.

Doing this…

int wid = device.Viewport.Width;

int hit = device.Viewport.Height;

float scaleX = 1 / rangeX;

float scaleY = 1 / rangeY;

// Where is the mouse on the screen?

float rx = ((float)x / (float)wid - 0.5f) * 2;

float ry = -((float)y / (float)hit - 0.5f) * 2;

mapViewport = device.Viewport;

projection = camera.Projection *

Matrix.CreateTranslation(-rx, -ry, 0) *

Matrix.CreateScale(scaleX, scaleY, 1);

Spotlights

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0

{

float3 color = float3(0, 0, 0);

// Compute direction to the light

float3 Light1Direction = normalize(Light1Location - input.PositionWorld);

float3 Light1Pointing = normalize(Light1Location - Light1Target);

if(dot(Light1Pointing, Light1Direction) > 0.98)

{

// Add contribution due to this light

color = max(dot(Light1Direction, input.NormalWorld), 0) * Light1Color;

}

return (input.Color + float4(color, 1)) * float4(DiffuseColor, 1);

}

0.98 is cos(11.4°), so the spotlight cone extends about 11.4° to either side of its axis.

Fog
  • Two things at play here:
  • The farther the point the dimmer it gets.
  • The farther the point the more we see the fog color.

With d = the distance to the point, fog = the fog density constant, fogColor = the fog color, and thecolor = the computed color, the HLSL below computes the result color as

result = thecolor * fog^d + fogColor * (1 - fog^d)

We need a weighted sum of the fog color and the computed color.
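With the constant used below (fog = 0.992), a point at distance d = 100 gets weight 0.992^100 ≈ 0.45, so it shows about 45% of its computed color and 55% fog color; at d = 400 the weight is only about 0.04, so the point is almost entirely fog colored.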

HLSL

float fog = 0.992;

float4 fogColor = float4(0.4, 0.5, 0.4, 1);

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0

{
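// Assumed context: color, scolor, position, normal, and Light1Direction are computed here
// exactly as in the per-pixel (Phong) shader shown earlier; only the fog part below is new.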

// Add specular contribution due to this light

float3 V = Eye - position;

float d = length(V);

V /= d;

float3 H = normalize(Light1Direction + V);

scolor += pow(saturate(dot(normal, H)), Shininess) * Light1Color;

float4 thecolor = float4(color * DiffuseColor + scolor * SpecularColor, 1);

float fogW = pow(fog, d);

thecolor = thecolor * fogW + fogColor * (1 - fogW);

return thecolor;

}