CSC 308 – Graphics Programming

Visual Realism

Information modified from Ferguson’s “Computer Graphics via Java” and Shirley’s “Fundamentals of Computer Graphics”

Dr. Paige H. Meeker

Computer Science

Presbyterian College, Clinton, SC



Concepts

  • When we use computers to create real or imagined scenes, we use attributes of our visual realm – shapes of objects are revealed by light, hidden by shadow, and color is used to create a mood. To create these scenes, we use the procedures used in other media, considering composition of our scene, lighting, model surfaces and materials, camera angles, etc.

  • Ironically, we go to a lot of trouble to create a 3D scene that we can only see on a 2D monitor. The process of converting our 3D scene to produce a 2D image is called rendering (a word from architecture, where 2D drawings of a design are referred to as a “rendering”). There are three approaches to rendering scenes:

Wireframe Rendering

  • Advantages:

    • Simplest Approach

    • Represents object as if it had no surfaces at all – only composed of “wire-like” edges

    • Easy and fast for computer to calculate

    • A part of all 3D animation systems

    • Allows real-time interaction with the model

  • Disadvantages:

    • They are transparent

    • Can be “ambiguous” – difficult to tell which of the “wires” are the front and which are the back

Perspective Confusion

Hidden Line Rendering

  • Advantages

    • Takes into account that an object has surfaces and that these surfaces hide the surfaces behind them

    • Continues to represent the objects as lines, but some lines are hidden by the surfaces in front of them.

  • Disadvantages

    • Computationally more complicated than wireframe rendering

    • Takes longer to render / updates more slowly

    • Recognizes existence of surfaces, but tell you nothing about the character of the surfaces (i.e. no color or material information)

Hidden Line Removal

Shaded Surface Rendering (aka Rendering)

  • Advantages

    • Provides information about surface characteristics, lighting, and shading

  • Disadvantages

    • More complicated to compute and even longer to render.

Steps in Rendering Process

Generally, you can think of the process of producing a 2D rendering of a 3D scene as a 6 step process:

Obtaining the geometry of the model

Includes characters, props, and sets

Placing the camera

Also called the “point of view”, we can maneuver our virtual camera in XYZ space in order to view the portion of our scene we are most interested in.

Defining the light sources

Design and place the lights within the scene. – there can be many lights in one scene, and they can have various characteristics (like changes of color)

Defining the surface characteristics

Specify: color, texture, shininess, reflectivity, and transparency

Choosing the shading technique

Related to defining the surface characteristics

Running the rendering algorithm

Then, you may save and output your image.

Steps in Rendering Process

Hidden Line Removal (aka Surface Culling) - Introduction

  • Depth cueing

  • Surfaces

  • Vectors/normals

  • Hidden face culling

  • Convex/concave solids

Hidden Line Removal

Hidden Line Removal

  • No one best algorithm

  • Look at a simple approach for convex solids

  • based upon working out which way a surface is pointing relative to the viewer.

  • To be a convex solid, a line drawn from any point on one surface to a point on the second surface must pass entirely through the interior of the solid.



Based on surfaces not lines


  • Need a Surface data structure

  • WireframeSurface

    • made up of lines
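As a rough sketch of such a data structure (class and field names here are illustrative, not taken from the course code), a flat surface can simply store its vertices in order plus the properties needed later for shading:

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: a polygonal Surface is an ordered list of 3D vertices
// (anti-clockwise when seen from outside) plus the data used later for shading.
class Point3D {
    double x, y, z;
    Point3D(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
}

class Surface {
    List<Point3D> vertices = new ArrayList<>();  // anti-clockwise order
    double reflectance;                          // R, between 0 and 1, used for shading

    void addVertex(Point3D p) { vertices.add(p); }
}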

Flat surfaces

  • Key requirement of our surfaces is that they are FLAT (contained within the same plane).

  • Easiest way to ensure this is by using only three points to define the surface….

    • (any triangle MUST be flat - think about it)

  • …but as long as you promise not to do anything that will bend a flat surface, we can allow them to be defined by as many points as you like.

  • Single sided

Which way does a surface point?

  • Vector mathematics defines the concept of a surface’s normal vector.

  • A surface’s normal vector is simply an arrow that is perpendicular to that surface (i.e. it sticks straight out)

Determining visibility

Consider the six faces of a cube and their normal vectors

Vectors N1 and N2 are the normals to surfaces 1 and 2 respectively.

Vector L points from surface 1 to the viewpoint.

It can be seen that surface 1 is visible to the viewer whilst surface 2 cannot be seen from that position.

Determining visibility

  • Mathematically, a surface is visible from the position given by L if -90° < θ < 90°,

  • where θ is the angle between L and N.

  • Equivalently, the surface is visible if cos θ > 0.

Determining visibility

  • Fortunately we can calculate cos θ from the directions of L (lx, ly, lz) and N (nx, ny, nz).

  • This is due to the well-known result in vector mathematics – the dot product (or the scalar product) – whereby:

    L.N = lx.nx + ly.ny + lz.nz = |L| |N| cos θ

Determining visibility

  • Alternatively:

    cos θ = L.N

  • where L and N are unit vectors (i.e. of length 1)

How do we work out L.N?

How do we work out L.N?

  • At this point we know:

    • we need to calculate cos θ

    • Values for lx,ly,lz

    • The only things we are missing are nx,ny,nz

Calculating the normal vector

  • If you multiply any two vectors using the vector product, the result is another vector that is perpendicular to the plane (i.e. normal to the plane) that contained the two original vectors.



  • We need to adopt the convention that the calculated normal vector points away from the observer when the angle between the two initial vectors is measured in a clockwise direction.

  • Failure to do this will lead to MAJOR confusion when you try and implement this

Calculating the normal

  • Where to find two vectors that we can multiply?

  • Answer: we can manufacture them artificially from the points that define the plane we want the normal of

Calculating the normal

  • By subtracting the coordinates of consecutive points we can form vectors which are guaranteed to lie in the plane of the surface under consideration.

Calculating the normal

  • We define the vectors to be anti-clockwise, when viewing the surface from the interior

  • (imagine the surface is part of a cube and you’re looking at it from INSIDE the cube).

  • Following the anticlockwise convention mentioned above we have produced what is known as an outward normal.



  • An important consequence of this is that when you define the points that define a surface in a program, you MUST add them in anti-clockwise order

Calculating the normal

This is the definition of the vector product:

    N = V1 × V2 = (v1y.v2z − v1z.v2y, v1z.v2x − v1x.v2z, v1x.v2y − v1y.v2x)



  • At this point we know:

    • we need to calculate cos θ

    • Values for lx,ly,lz

    • values for nx,ny,nz




If cos θ > 0, the surface faces the viewer – then draw the surface.

If cos θ ≤ 0, the surface faces away – don’t! (or draw dashes)
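Putting the pieces of this section together, a minimal back-face test might look like the sketch below. It assumes vertices are supplied anti-clockwise as seen from outside the solid and uses the usual right-handed vector product, so the result is an outward normal; flip the ordering or the sign if your convention differs. The helper names are hypothetical.

// Sketch: decide whether a flat surface faces the viewer.
// p0, p1, p2 are three consecutive vertices of the surface, listed
// anti-clockwise as seen from outside the solid; eye is the viewpoint.
class BackfaceCulling {

    static double[] sub(double[] a, double[] b) {
        return new double[]{a[0] - b[0], a[1] - b[1], a[2] - b[2]};
    }

    static double[] cross(double[] a, double[] b) {        // vector product
        return new double[]{a[1]*b[2] - a[2]*b[1],
                            a[2]*b[0] - a[0]*b[2],
                            a[0]*b[1] - a[1]*b[0]};
    }

    static double dot(double[] a, double[] b) {             // scalar product
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    static boolean isVisible(double[] p0, double[] p1, double[] p2, double[] eye) {
        double[] n = cross(sub(p1, p0), sub(p2, p1));        // outward normal N
        double[] l = sub(eye, p0);                           // L: surface -> viewer
        return dot(n, l) > 0;                                // cos θ > 0  => draw it
    }

    public static void main(String[] args) {
        // Front face of a unit cube centred on the origin, viewed from +z.
        double[] p0 = {-0.5, -0.5, 0.5}, p1 = {0.5, -0.5, 0.5}, p2 = {0.5, 0.5, 0.5};
        System.out.println(isVisible(p0, p1, p2, new double[]{0, 0, 5}));   // true
    }
}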

More complex shapes

Concave objects

Multiple objects

More complex shapes

  • In these cases, each surface must be considered individually. Two different types of approach are possible:

    • Object space algorithms - examine each face in space to determine its visibility

    • Image space algorithms - at each screen pixel position, determine which face element is visible.

  • Roughly speaking, the relative efficiency of an image space algorithm increases with the complexity of the scene being represented; however, the drawing can often be simplified by first removing surfaces that are invisible, even for a single convex object.

Hidden Surface Removal

Algorithms that sort all the points, lines, and surfaces of an object and decide which are visible and which are not. Then, the visible surfaces are kept and the hidden surfaces are removed.

Object Space

  • Make the calculations in three dimensions.

  • Require intensive computing

  • Generate data useful for rendering textures, shadows, and antialiasing

  • EXAMPLE: Ray Tracing

Image Space

  • Retain depth information of the objects in the scene

  • Sort from a lateral position

  • Sort only to the resolution of the display device

  • Efficient, but discards some of the original 3D information used for shadowing, texturing, and antialiasing.

Ray Casting

  • From the “eye” (or “camera”), a ray is cast through the first pixel of the screen

  • Eye follows the ray until the ray either hits the surface of an object or exits from the viewable world

  • If the ray hits an object, the program calculates the color of the object at the point where it has been hit. This becomes the color of the pixel through which the ray had been cast.

  • This repeats through all the pixels of the image
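As a sketch of that loop in Java (Scene, Hit, and the camera setup are illustrative placeholders, not a real API):

import java.awt.Color;
import java.awt.image.BufferedImage;

// Sketch of the ray-casting loop: one ray per pixel, cast from the eye through
// that pixel; if it hits something, shade the hit point, otherwise use background.
class RayCaster {

    interface Scene {                       // hypothetical scene interface
        Hit intersect(double[] origin, double[] direction);   // nearest hit, or null
    }

    static class Hit { Color colorAtHitPoint; }

    static BufferedImage render(Scene scene, int width, int height) {
        BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        double[] eye = {0, 0, 0};           // camera at the origin, looking down -z
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                // Direction from the eye through pixel (x, y) on an image plane at z = -1.
                double[] dir = {(x + 0.5) / width - 0.5, 0.5 - (y + 0.5) / height, -1};
                Hit hit = scene.intersect(eye, dir);
                Color c = (hit != null) ? hit.colorAtHitPoint : Color.BLACK;
                image.setRGB(x, y, c.getRGB());
            }
        }
        return image;
    }
}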

Ray Casting

  • How do we know what object to render, if there is more than one object in the path of the ray?

Common Algorithms

  • Painter’s Algorithm

  • Z-Buffer Algorithm

Painter’s Algorithm

  • Sort all objects by depth

  • Start rendering those objects furthest away

  • Each new object covers up distant objects that have already been rendered

  • Like a painter who creates the background before painting an object in the foreground

  • Time consuming! (Esp. for large numbers of objects)

Painter’s Algorithm

The distant mountains are painted first, followed by the closer meadows; finally, the closest objects in this scene, the trees, are painted.
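A minimal sketch of the depth sort, with illustrative class and field names:

import java.util.Comparator;
import java.util.List;

// Painter's algorithm sketch: sort by depth, then draw back to front so that
// nearer objects overwrite the more distant ones already rendered.
class PainterSketch {

    static class Drawable {
        double depth;                 // distance from the viewer (larger = further away)
        void draw() { /* rasterise this object */ }
    }

    static void paint(List<Drawable> objects) {
        objects.sort(Comparator.comparingDouble((Drawable d) -> d.depth).reversed());
        for (Drawable d : objects) {
            d.draw();                 // furthest first, closest last
        }
    }
}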

Problems with Painter’s

  • Overlapping polygons can cause the algorithm to fail.

  • Led to the development of Z-buffer techniques

Z-Buffering


  • aka Depth Buffering

  • Makes use of a block of memory that stores distance information from the object’s surface to the “eye”. Large numbers are stored for distant surfaces; smaller numbers for closer ones.

  • Renders objects in any order without presorting them

  • When a ray hits an object, the depth in Z is calculated and stored in the Z-buffer at the pixel through which the ray was cast.

  • When the ray hits a second object, the depth is again calculated and compared with the previously stored value. If it is less (closer) than the stored value, the new value overwrites the old value.

  • Usually done in hardware; sometimes in software
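A sketch of a software Z-buffer along these lines (names are illustrative):

import java.util.Arrays;

// Z-buffer sketch: one depth value per pixel, initialised to "infinitely far".
// A fragment only lands in the frame buffer if it is closer than what is there.
class ZBufferSketch {
    final int width, height;
    final double[] zbuffer;       // depth of the nearest fragment seen so far
    final int[] frame;            // packed RGB colour of that fragment

    ZBufferSketch(int width, int height) {
        this.width = width;
        this.height = height;
        this.zbuffer = new double[width * height];
        this.frame = new int[width * height];
        Arrays.fill(zbuffer, Double.POSITIVE_INFINITY);
    }

    // Called once per fragment, in any order - no presorting of objects needed.
    void plot(int x, int y, double z, int rgb) {
        int i = y * width + x;
        if (z < zbuffer[i]) {     // closer than what is stored? then it wins
            zbuffer[i] = z;
            frame[i] = rgb;
        }
    }
}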


The z-buffer algorithm

based upon sorting the surfaces by their z-coordinates. The algorithm can be summarised thus:

  • Sort the surfaces into order of increasing depth. Define the maximum z value of the surface and the z-extent.

  • resolve any depth ambiguities

  • draw all the surfaces starting with the largest z-value



  • Ambiguities arise when the z-extents of two surfaces overlap.

Ambiguities – front view

Resolving Ambiguities

  • An algorithm exists for ambiguity resolution

  • Where two shapes P and Q have overlapping z-extents, perform the following 5 tests (in sequence of increasing complexity).

  • If any test fails, draw P first.

x - extents overlap?

y -extents overlap?

Is Q not completely on the side of P nearest the viewer?

Is P not completely on the side of Q further from the viewer?

Does the projection of the two surfaces overlap?

If all tests are passed

  • … then reverse P and Q in the list of surfaces sorted by Zmax

  • set a flag to say that the test has been performed once.

  • The flag is necessary for the case of intersecting planes.

  • If the tests are all passed a second time, then it is necessary to split the surfaces and repeat the algorithm on the 4 surfaces
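The first two (and cheapest) of those tests are simple extent comparisons; a sketch, with illustrative names:

// Sketch of the cheap extent tests used when the z-extents of P and Q overlap.
class ExtentTests {
    static class Extent { double min, max; }          // e.g. the x-extent of a surface

    // Returns true if the two extents overlap at all.
    static boolean overlaps(Extent a, Extent b) {
        return a.max >= b.min && b.max >= a.min;
    }
    // If the x-extents (or y-extents) do NOT overlap, the surfaces cannot
    // obscure each other and P can safely be drawn first.
}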

End up drawing Q2,P1,P2,Q1

Ray Casting Techniques – Problems

Problems – all raycasting algorithms deal with the shading of each object in the scene as if it existed in isolation; as if there were no other objects in the scene. Thus, the more distant objects have no effect on the object being rendered.

  • Creating reflections (involves other objects in the scene)

  • Casting shadows (objects casting shadows on each other)

  • Transparency (see other objects through the first)

Fixing Ray Casting Problems…

  • Ray Tracing

  • Radiosity

Ray Tracing

Ray tracing deals with all objects in a scene simultaneously. When considering a point on a surface, the color at that point is affected not only by the light from a light source shining onto that surface, but also by light bouncing onto it from nearby surfaces or, if the object is transparent, by light traveling through it or other objects in the scene. Basically, every point is affected not by one light ray, but by many light rays hitting it from different parts of the scene. To obtain accurate renderings, you must trace all of these back to their sources to see what contribution each makes to the final color of the surface point.

Ray Tracing Example

Ray Tracing

  • Ray tracing is complex and powerful – requires calculation of many surfaces to arrive at the final calculation for a single surface point.

  • Takes longer rendering time, but the result is worth it.

    • Effects of all surfaces show up in rendering:

      • Shadows

      • Reflections

      • Transparency

Ray Tracing – Setting Limits

  • In a complex scene with many objects, can become immensely complex – continuous bouncing is impractical. Define a maximum number of bounces. This is called the depth of raytracing.

  • Some software also allows selective raytracing – letting you render specific models with an extensive raytracing algorithm while using less computationally expensive raycasting for the rest.

  • (More later…)

Summary so far…

  • Need for depth cues and hidden line removal

  • Using surfaces rather than lines

  • Calculating which way a surface is facing (surface normal)

  • Base a decision on visibility on normal vector and observer vector

What can we do now?

We can… (Add to project as a bonus – not required!)

  • Write a program to display a cube as a wireframe drawing / adapt it to implement the hidden line algorithm so that only faces that point toward the viewer are drawn

  • Adapt this to display the faces of the cube in areas of solid color (draw as filled polygons)

  • Display a group of cubes, allowing the group to be rotated – implement Z-buffer or Painter’s Algorithm

Shading and Surface Characteristics

  • Surface Characteristics – the color and shininess of the objects in the scene. When treated as a conglomerate, the grouping of characteristics of a particular surface is called a shader or material. A shader includes the definition of the color for that surface, the definition of the shininess of that surface, etc.


    • Can save shader independently of surface, so it can be reused

    • Can create entire libraries of shaders to describe various surfaces.

Defining Surface Properties

  • Color (RGB, HSL, etc)

  • Reflectivity (How does light bounce off the surface?)

    • Diffuseness

    • Highlights

    • Transparency

    • Reflectivity

    • Incandescence



Diffuseness

  • A measure of how much light reflects, overall, from a surface.

    • small diffuse – darker because it reflects less light (0 – surface renders as black, because no light reflects from it)

    • large diffuse – lighter because it reflects more light.

  • Does not produce highlights – diffuse means equally in all directions. Highlights are produced when a lot of light is reflecting in a specific direction.

  • Only diffuse – surfaces look perfectly matte (like cardboard)

Highlights aka Specularity

  • More specular, more highlight. – metallic, shiny plastic, soap bubbles

  • Less specular, less highlight – unfinished wood, paper, cloth

  • Defined in conjunction with diffuseness – first supply a diffuse value to control the overall brightness of the surface, then a specular value to control the brightness of the highlights.

Size of Highlight

  • Many systems also allow control over the size of the highlight. Why?

    • Different materials produce highlights of different sizes

    • Highlight on shiny chrome – very bright and condensed (small and focused)

    • Highlight on extremely shiny plastic – bright, broader and less focused

  • Using the highlight size parameter and the specularity parameter, you can define surfaces that closely resemble specific materials.

Highlight Color

  • Different materials have different highlight colors.

    • Plastic of any color tends to reflect white highlights

    • Aluminum – tends to reflect a highlight of the same color as the aluminum



Reflectivity

The above three parameters define the shininess of the surface; however, shiny surfaces also tend to be reflective. Not all shiny surfaces are equally reflective; for example, shiny plastic is less reflective than shiny stainless steel, which is why we can tell them apart. Some systems allow you to control this with the reflectivity parameter – at 0 the surface doesn’t reflect; at 1 the surface is a perfectly reflective mirror.



Transparency

  • A surface is either opaque or transparent. Most software gives you an option to control this characteristic (0 – no transparency, 1 – invisible!! [watch out for that one…])

  • Color of surface is determined by combining the color of the transparent object with the color of whatever is behind it.


  • Nothing behind transparent object with black background – combine color of object with black, producing a rendering of the object that looks darker, not transparent

  • Decrease in the effect of highlights – highlight and normal surface color blends with whatever is behind the object. – sometimes solved by dramatically increasing specular component.





Refraction

  • (Ray Tracing to Render Appropriately)

  • Physicists have determined experimentally the refractive index values for materials in the real world. (Straw in water)

    • Air – index=1

  • Refractive indices other than one cause light to bend and therefore create distortions as you look through the material in question. Indices larger than one bend light one way and smaller bend light the other.



Incandescence

  • Some surfaces emit light instead of reflecting it (light bulbs). The larger the incandescence value, the more the surface appears to “glow”


  • Where does that leave us?

  • What can we do?

Improving the visual realism of our system

  • Improving the visual realism of our scenes by accurate “coloring-in”

  • Shading algorithms for determining color are based on the properties of light - a bit of physics

    • Lambert shading (aka Faceted Shading)

    • Gouraud shading

    • Phong shading

Wireframe -> shaded polygon

How do we calculate what shade to fill our polygon?

  • Now, we need to determine how all of this information is used by the software to render the darks and lights (the shading) of our scenes. We need a shading algorithm or shading model (well-thought out, logical procedure).

  • All shading algorithms represent simplifications of what happens in the real world. We couldn’t possibly take into account all the complexities, so any algorithm must make assumptions that allow it to manage the otherwise overwhelming complexity of the process. The closer the assumptions are to the real-world situation, the more natural the renderings look; however, this means longer rendering times.

What factors influence shade?

  • Color and strength of incoming illumination

  • Color and texture (rough/smooth) of surface

  • Relative positions and orientations of surface, light source and observer

Simplifying Assumptions

  • Consider Intensity - forget color for the moment (we’ll return to that later)

    • Consider white light (intensity of all colours equal)

    • gives monochrome (black and white) picture

Simplifying Assumptions

  • Assume light source is infinitely distant

  • Parallel incoming illumination

Assume light source is infinitely distant

  • No change in light intensity across the scene

  • Because there is little difference between the intensity of light on one side of an object and the other, we only have to do one calculation of intensity for the whole surface

  • NOTE: This is not true for close illumination

Faceted Shading / Lambert Shading

  • Any flat surface is the same color at every point on the surface. (Normally not true in the physical world, where a flat surface may be darker at one end than the other.) Rendering this way is fast, but gives objects a faceted look.

  • However, the concept introduces a concept important to all shading algorithms. To calculate the color of a given surface, any shading algorithm must know whether the surface it is about to render faces or does not face the light because the surface will be lighter or darker depending on its orientation. To measure direction, programs measure the surface normals – a vector that is perpendicular to a surface at a given point on that surface. Usually represented as an arrow extending from the surface. For a flat surface, one surface normal suffices to indicate the orientation of the entire surface, since all points on a given flat surface face the same direction.

  • Using surface normals, rendering programs can compute the exact angle at which a surface is oriented toward the light.

Faceted Shading / Lambert Shading

  • Curved surfaces add complications; instead of one normal, many are required to determine orientation, because each point can face a different direction. Infinitely many different surface normals. Because of the added complications, many software packages convert curved surfaces to polygonal approximations just before rendering, to simplify calculation.

  • As part of this process, triangulation subdivides curved surfaces into triangular polygons. Why? Triangles must be flat (polygons with more sides don’t have to be)

Implementing: Components of illumination

  • Light reaching the eye has to come from some source bouncing off the surface of the object. There are three components to consider…

Components of illumination

  • that from diffuse illumination (incident rays come from all over, not just one direction) - Ed

  • that from a point source which is scattered diffusely from the surface - Esd

  • that from a point source which is specularly reflected. - Ess


E = Ed + Esd + Ess

Combining the illumination

  • Combining all three components (diffuse illumination, diffuse reflection from a point source and specular reflection from a point source) gives us the following expression:

    E = Ed + Esd + Ess

Diffuse illumination

  • light that comes from all directions not from one particular source.

    • Think about the light of a grey cloudy day as compared to a bright sunny one: on a cloudy day there are no shadows; light from the sun is scattered by the clouds and seems to come equally from all directions

Diffuse illumination

  • A proportion of the light reaching the surface is reflected back to the observer.

  • The proportion is dependent on the properties (colour) of the surface and

  • HAS NO DEPENDENCE on the angle of the viewer (that’s why it’s diffuse!).

Diffuse illumination

Id - incident illumination

Ed - observed intensity

Ed = R.Id


  • R is the reflection coefficient of the surface (0 <= R <= 1): the proportion of the light that is reflected back out

Diffuse illumination

Diffuse illumination alone does not give visual realism. With no angular dependence in the light, a viewer will not be able to see any difference between a sphere and a disk.

Diffuse scattering from a point source

  • When a light ray strikes a surface it is scattered diffusely (i.e. in all directions)

  • Doesn’t change with the angle the observer is looking from

Diffuse scattering from a point source

  • The intensity of the reflected rays is:

    Esd = R.cos(i).Is

    i - angle of incidence – the angle between the surface normal and the ray of light - 0 <= i <= 90

    Esd -intensity of the scattered diffuse rays

    Is - intensity of the incident light ray.

    R - reflection coefficient of the surface (0 <= R <=1)
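As a sketch in code (vectors are assumed to be unit length, so cos(i) is just the dot product N.L; names are illustrative):

// Sketch: diffuse (Lambertian) term from a point source, Esd = R.cos(i).Is.
class DiffusePointSource {
    static double dot(double[] a, double[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    // n = unit surface normal, l = unit vector from the surface point towards the light
    static double scatteredIntensity(double R, double Is, double[] n, double[] l) {
        double cosI = Math.max(0, dot(n, l));   // clamp: light behind the surface contributes 0
        return R * cosI * Is;
    }
}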

Diffuse scattering from a point source

  • Esd - Doesn’t change with the angle the observer is looking from.

  • i.e. - It doesn’t matter where you are looking at the surface from – that’s what diffuse means.

Specular reflection

  • relationship between a ray from point source to the reflected ray is given by Lambert’s Law:

i = r

i - angle of incidence

r - angle of reflection

Specular reflection

  • For a perfect reflector, all the incident light would be reflected back out in the direction of S. In fact, when the light strikes a surface it is scattered diffusely (i.e. in all directions):

  • For an observer viewing at an angle (s) to the reflection direction S, some fraction of the reflected light is still visible (due to the fact that the surface isn’t a perfect reflector – some degree of diffusion takes place).

  • How much?

Specular reflection

  • The proportion of light visible is a function of

    • the angle s (in fact it is proportional to cos(s))

    • the quality of the surface

    • angle of incidence i.

  • We can define a coefficient w(i) – the specular reflection coefficient – which is a function of the material of which the surface is made and of i. Each surface will have its own w(i).

Specular reflection coefficient

Specular reflection

Ess = w(i).cos^n(s).Is

Needs some explanation…

Specular reflection

  • Ess is the intensity of the light ray in the direction of O

  • n is a fudge factor: n = 1 for a rough surface (paper); n = 10 for a smooth surface (glass)

  • w(i) is usually never calculated - simply choose a constant (0.5?). It is actually derived from the physical properties of the material from which the surface is made.

Specular reflection

cos^n(s) – this is in fact a fudge which has no basis in physics, but works to produce reasonable results. By raising cos(s) to the power n, we control how much the reflected ray spreads out as it leaves the surface.
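In code the whole term is one line; w(i) is treated as a constant here, as the slides suggest:

// Sketch: specular term Ess = w(i).cos^n(s).Is, with w(i) taken as a constant.
class SpecularTerm {
    // cosS = cosine of the angle between the reflection direction S and the viewer O
    static double specularIntensity(double w, double cosS, double n, double Is) {
        return w * Math.pow(Math.max(0, cosS), n) * Is;   // clamp negative cosS to 0
    }
}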

Combining the illumination

  • Combining all three components (diffuse illumination, diffuse reflection from a point source and specular reflection from a point source) gives us the following expression:

    E = Ed + Esd + Ess

Combining the illumination

E = R.Id + (R.cos(i) + w(i).cos^n(s)).Is

  • E is the total intensity of light seen reflected from the surface by the observer.

Now, let’s go over what we know

  • E – We’re trying to calculate E, so obviously that is unknown.

  • R – is defined for each surface, so we need to add it as a variable to our surface class and define it when creating the surface, so it’s known.

  • Id - The incident diffuse light - we can define this to be anything we like; 0 = darkness, for an 8-bit greyscale 255 = white. – Known

  • cos(i) - we can work this out from L.N

  • w(i) - we can define this to be anything between 0 and 1 - trial and error called for! - Known

Calculating E

  • n – is defined for each surface, so we need to add it as a variable to our surface class and define it when creating the surface, so, basically, it’s known.

  • Is - the intensity of the incident point light source - again we can define this to be anything we like; 0 = darkness, for an 8-bit greyscale 255 = white. (Coming… a discussion of adding lights to our data model.) - Known.

  • cos(s) ? Ah!

Calculating cos s

  • cos s = S.O

  • We know O

  • Need S

Calculating cos s

  • Thanks to Lambert’s law we know that S is the mirror of the incident ray about the surface normal. It can be found from some vector maths:

  • S = 2Q - L

    Lambert Shading

    • Finally, we know all of the terms in the combined illumination equation and for any surface in our model we can calculate the appropriate shade.

      E = R.Id + (R.cos(i) + w(i).cos^n(s)).Is

    • A program which implements this model of shading is said to implement Lambert Shading
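A compact sketch of the whole calculation for one surface (unit vectors assumed; the mirror direction S is computed with the standard reflection formula, which matches the slide’s S = 2Q - L if Q is taken to be the projection of L onto the normal):

// Lambert-shading sketch: E = R.Id + (R.cos(i) + w(i).cos^n(s)).Is for one surface.
class LambertShading {
    static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

    static double[] reflect(double[] l, double[] n) {       // S = 2(N.L)N - L
        double k = 2 * dot(n, l);
        return new double[]{k*n[0] - l[0], k*n[1] - l[1], k*n[2] - l[2]};
    }

    // n: unit surface normal, l: unit vector to the light, o: unit vector to the observer
    static double shade(double R, double w, double nExp, double Id, double Is,
                        double[] n, double[] l, double[] o) {
        double cosI = Math.max(0, dot(n, l));                // cos(i), worked out from L.N
        double[] s = reflect(l, n);                          // reflection direction S
        double cosS = Math.max(0, dot(s, o));                // cos(s) = S.O
        return R * Id + (R * cosI + w * Math.pow(cosS, nExp)) * Is;
    }
}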

    Extending to color

    • Each surface has a color which means it reflects different colours by different amounts, i.e. it has a different value of R for red, blue and green

      E_red = Ed_red + Esd_red + Ess_red

      E_green = Ed_green + Esd_green + Ess_green

      E_blue = Ed_blue + Esd_blue + Ess_blue
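In code this just means carrying one R (and one resulting E) per channel; a sketch:

// Sketch: repeat the Lambert calculation once per colour channel.
class ColourShading {
    // Rrgb holds the surface's reflection coefficients for red, green and blue.
    static double[] shadeRGB(double[] Rrgb, double w, double nExp, double Id, double Is,
                             double cosI, double cosS) {
        double[] e = new double[3];
        for (int c = 0; c < 3; c++) {
            e[c] = Rrgb[c] * Id + (Rrgb[c] * cosI + w * Math.pow(cosS, nExp)) * Is;
        }
        return e;   // {E_red, E_green, E_blue}
    }
}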

    Summary so far…

    • Illumination has 3 components……

      • Diffuse

      • Diffuse from point source

      • Specular

    • …. Which can be summed

    • Calculated for each polygon…

    • Lambert Shading

    Problems with Lambert

    • Using a different colour for each polygon means that the polygons show up very clearly (the appearance of the model is said to be faceted).

    Mach bands

    Mach bands

    • This is a physiological effect whereby the contrast between two areas of a different shade is enhanced at the border between the shades.

    Smooth Shading

    • What is required is some means of smoothing the sudden transition in color.

    • Various algorithms exist for this; amongst the best known are:

      • Gouraud shading and

      • Phong shading

    • both named after their inventors.

    Gouraud Shading

    • The facetted appearance of a Lambert shaded model is due to each polygon having only a single color.

    • To avoid this effect, it is necessary to vary the color across a polygon:

    Gouraud Shading

    • Gouraud shading makes smoother surfaces by artificially adjusting the surface normals of the polygons. First, position a normal at each vertex (corner) of each polygon. This normal is perpendicular to the surface of its own polygon. At each vertex where two polygons touch, two surface normals (one for each polygon) are created, each perpendicular to the corresponding surface. The algorithm then averages these normals at a given vertex, and this new, averaged normal is what the algorithm uses. Then, the color at each vertex is calculated and interpolated across the surface, from one corner to another.

    • Produces smooth progression along the surface, but you can still see the underlying polygonal structure at the edges of the surface.

    Gouraud Shading


    • If light hitting a surface bounces equally in all directions, the surface is diffuse. 100% diffuse = matte surface.

    • If light bounces off only in one direction, we can see a highlight and the surface is specular (mirror-like)

    • Gouraud shading does handle highlights.


      • interpolating one vertex color to another sometimes makes seams between polygons visible (esp. near bright light)

    To Implement:

    • Color must be calculated for each pixel of the polygon.

    • The method we use to calculate the color results in the neighbouring pixels across the border between two polygons ending up with approximately the same colors.

    • This blends the shades of the two polygons and avoids the sudden discontinuity at the border.

    Gouraud shading

    • based upon calculating a vertex normal

    • an artificial construct (a true normal cannot exist for a point such as a vertex).

    • can be thought of as the average of the normals of all the polygons that share that vertex
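A sketch of that averaging step (the mesh representation is left out; the method just takes the face normals of the polygons sharing the vertex):

// Sketch: a vertex normal as the (normalised) average of the face normals
// of all the polygons that share the vertex.
class VertexNormals {
    static double[] averageNormal(double[][] faceNormals) {   // normals of the sharing faces
        double[] sum = new double[3];
        for (double[] n : faceNormals) {
            sum[0] += n[0]; sum[1] += n[1]; sum[2] += n[2];
        }
        double len = Math.sqrt(sum[0]*sum[0] + sum[1]*sum[1] + sum[2]*sum[2]);
        return new double[]{sum[0]/len, sum[1]/len, sum[2]/len};
    }
}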

    Gouraud shading

    Gouraud shading

    • Having found the vertex normals for each vertex of the polygon we want to shade, we can calculate the color at each vertex using the same formula that we did for Lambert Shading

    Calculating the color of each pixel

    • Interpolating “scan-line algorithm”

    • Light intensity at P given by:
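A sketch of the usual scan-line interpolation step (the exact formula on the original slide may differ): the intensity at P is linearly blended from the intensities at the two edge crossings A and B, i.e. Ip = Ia + (Ib - Ia).(xp - xa)/(xb - xa).

// Sketch: linear interpolation of intensity along a scan line between the
// points where the scan line crosses the polygon's left and right edges.
class ScanLineInterpolation {
    // iA, iB: intensities at the edge crossings xA and xB; xP: pixel position between them
    static double intensityAt(double xP, double xA, double xB, double iA, double iB) {
        double t = (xP - xA) / (xB - xA);
        return iA + t * (iB - iA);
    }
}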

    Phong Shading

    • Like Gouraud, Phong shading averages surface normals at each vertex, but also computes interpolated normals between vertices. From those, it then calculates the colors of the surface. This yields a more accurate calculation of the colors, a more accurate rendering of the highlight, and invisible seams along the edges of the polygons. SLOWER than Gouraud

    Phong Shading


    • Assumes that the surface at any given point is perfectly smooth; this means light bouncing off that point is very concentrated, which makes highlights drop off abruptly (good for glossy metallics or plastics). Some materials (aluminum) have softer highlights because they are not perfectly smooth, meaning light bounces off in a more diffused way.

    • If an object’s surface has sufficiently large microscopic irregularities, light hitting the surface bounces among them before leaving the surface, slightly diffused, toward our eye.

    Phong Shading

    • Phong shading is based on interpolating the surface normal vector

    • The arrows (and thus the interpolated vectors) give an indication of the curvature of the smooth surface which the flat polygon is approximating to.

    Phong Shading

    • The interpolation is (like Gouraud shading) based upon calculating the vertex normals (red arrows)….

    • …using these as the basis for interpolation along the polygon edges (blue arrows) ….

    • …..and then using these as the basis for interpolating along a scan line to produce the internal normals (green vectors)

    • a color value is calculated for each pixel based on the interpolated value of the normal vector.
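A sketch of that per-pixel step (vector helpers as before; the interpolated normal must be re-normalised before it is used in the shading formula):

// Phong-shading sketch: interpolate the normal for this pixel, re-normalise it,
// then run the usual illumination calculation with that per-pixel normal.
class PhongPixel {
    static double[] lerp(double[] a, double[] b, double t) {
        return new double[]{a[0] + t*(b[0]-a[0]), a[1] + t*(b[1]-a[1]), a[2] + t*(b[2]-a[2])};
    }

    static double[] normalise(double[] v) {
        double len = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        return new double[]{v[0]/len, v[1]/len, v[2]/len};
    }

    // nA, nB: interpolated edge normals at the two ends of the scan line; t: position along it
    static double[] pixelNormal(double[] nA, double[] nB, double t) {
        return normalise(lerp(nA, nB, t));
        // feed this normal into the illumination formula to get the pixel colour
    }
}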

    Gouraud vs Phong

    • Phong shading requires more calculations, but produces better results for specular reflection than Gouraud shading, in the form of more realistic highlights.


    • Consider the specular reflection term

      cos^n(s)

      If n is large (the surface is a good smooth reflector) and one vertex has a very small value of s (it is reflecting the light ray in the direction of the observer) whilst the rest of the vertices have large values of s - a highlight occurs somewhere on our polygon.


    • With Gouraud shading, nowhere on the polygon can have a brighter color (i.e. a higher value) than a vertex

    • unless the highlight occurs on or near a vertex, it will be missed out altogether.

    • When it is near a vertex, its effect is spread over the whole polygon.

    • With Phong shading, however, an internal point may indeed have a higher value than a vertex, and the highlight will occur tightly focused in the (approximately) correct position


    • Lambert shading leads to a facetted appearance

    • To get round this, use a smooth shading algorithm

    • Gouraud and Phong shading produce good effects but at the cost of more calculations.

    • Gouraud interpolates the calculated vertex colors

    • Phong interpolates the calculated vertex normals

    • Phong – slower but better highlights
