
### CSC 308 – Graphics Programming

### Shading and Surface Characteristics

Visual Realism

Information modified from Ferguson’s “Computer Graphics via Java”, and

“Fundamentals of Computer Graphics.” by Shirley

Dr. Paige H. Meeker

Computer Science

Presbyterian College, Clinton, SC

Concepts

- When we use computers to create real or imagined scenes, we use attributes of our visual realm – shapes of objects are revealed by light, hidden by shadow, and color is used to create a mood. To create these scenes, we use the procedures used in other media, considering composition of our scene, lighting, model surfaces and materials, camera angles, etc.
- Ironically, we go to a lot of trouble to create a 3D scene that we can only see on a 2D monitor. The process of converting our 3D scene to produce a 2D image is called rendering. (The word comes from architecture, where 2D drawings of a design are referred to as a “rendering.”) There are three approaches to rendering a scene:

Wireframe Rendering

- Advantages:
- Simplest Approach
- Represents object as if it had no surfaces at all – only composed of “wire-like” edges
- Easy and fast for computer to calculate
- A part of all 3D animation systems
- Allows real-time interaction with the model

- Disadvantages:
- They are transparent
- Can be “ambiguous” – difficult to tell which of the “wires” are the front and which are the back

Hidden Line Rendering

- Advantages
- Takes into account that an object has surfaces and that these surfaces hide the surfaces behind them
- Continues to represent the objects as lines, but some lines are hidden by the surfaces in front of them.

- Disadvantages
- Computationally more complicated than wireframe rendering
- Takes longer to render / updates more slowly
- Recognizes the existence of surfaces, but tells you nothing about the character of those surfaces (i.e. no color or material information)

Shaded Surface Rendering (aka Rendering)

- Advantages
- Provides information about surface characteristics, lighting, and shading

- Disadvantages
- More complicated to compute and even longer to render.

Steps in Rendering Process

Generally, you can think of the process of producing a 2D rendering of a 3D scene as a 6 step process:

Obtaining the geometry of the model

Includes characters, props, and sets

Placing the camera

Also called the “point of view”, we can maneuver our virtual camera in XYZ space in order to view the portion of our scene we are most interested in.

Defining the light sources

Design and place the lights within the scene. – there can be many lights in one scene, and they can have various characteristics (like changes of color)

Defining the surface characteristics

Specify: color, texture, shininess, reflectivity, and transparency

Choosing the shading technique

Related to defining the surface characteristics

Running the rendering algorithm

Then, you may save and output your image.

Hidden Line Removal (aka Surface Culling) - Introduction

- Depth cueing
- Surfaces
- Vectors/normals
- Hidden face culling
- Convex/concave solids

Hidden Line Removal

- No one best algorithm
- Look at a simple approach for convex solids
- based upon working out which way a surface is pointing relative to the viewer.
- To be a convex solid, a line drawn from any point on one surface to a point on the second surface must pass entirely through the interior of the solid.

convex

concave

Based on surfaces not lines

TO IMPLEMENT:

- Need a Surface data structure
- WireframeSurface
- made up of lines

Flat surfaces

- Key requirement of our surfaces is that they are FLAT (contained within the same plane).
- Easiest way to ensure this is by using only three points to define the surface….
- (any triangle MUST be flat - think about it)

- …but as long as you promise not to do anything that will bend a flat surface, we can allow them to be defined by as many points as you like.
- Single sided

Which way does a surface point?

- Vector mathematics defines the concept of a surface’s normal vector.
- A surface’s normal vector is simply an arrow that is perpendicular to that surface (i.e. it sticks straight out)

Determining visibility

Consider the six faces of a cube and their normal vectors

Vectors N1 and N2 are the normals to surfaces 1 and 2 respectively.

Vector L points from surface 1 to the viewpoint.

It can be seen that surface 1 is visible to the viewer whilst surface 2 cannot be seen from that position.

Determining visibility

- Mathematically, a surface is visible from the position given by L if -90° < θ < 90°
- where θ is the angle between L and N.
- Equivalently, the surface is visible if cos θ > 0.

Determining visibility

- Fortunately we can calculate cos θ from the directions of L (lx, ly, lz) and N (nx, ny, nz)
- This is due to a well-known result in vector mathematics – the dot product (or scalar product) – whereby:
L.N = |L||N| cos θ = lx.nx + ly.ny + lz.nz

Determining visibility

- Alternatively: cos θ = L.N
- where L and N are unit vectors (i.e. of length 1)
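Putting the visibility test into code: a minimal Python sketch (the slides contain no code, so the function names here are mine) in which a surface is drawn only when cos θ = L.N comes out positive:

```python
import math

def normalize(v):
    # Scale a vector to unit length so the dot product gives cos(theta) directly.
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    # Dot product: a.b = ax*bx + ay*by + az*bz.
    return sum(x * y for x, y in zip(a, b))

def is_visible(L, N):
    # L points from the surface toward the viewpoint; N is the outward
    # surface normal.  The surface faces the viewer when cos(theta) > 0.
    return dot(normalize(L), normalize(N)) > 0
```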

How do we work out L.N?

- At this point we know:
- we need to calculate cos θ
- Values for lx,ly,lz
- The only things we are missing are nx,ny,nz

Calculating the normal vector

- If you multiply any two vectors using the vector product, the result is another vector that is perpendicular (i.e. normal) to the plane that contained the two original vectors.

IMPORTANT

- We need to adopt the convention that the calculated normal vector points away from the observer when the angle between the two initial vectors is measured in a clockwise direction.
- Failure to do this will lead to MAJOR confusion when you try to implement this

Calculating the normal

- Where to find two vectors that we can multiply?
- Answer: we can manufacture them artificially from the points that define the plane we want the normal of

Calculating the normal

- By subtracting the coordinates of consecutive points we can form vectors which are guaranteed to lie in the plane of the surface under consideration.

Calculating the normal

- We define the vectors to be anti-clockwise, when viewing the surface from the interior
- (imagine the surface is part of a cube and you’re looking at it from INSIDE the cube).
- Following the anticlockwise convention mentioned above we have produced what is known as an outward normal.

IMPORTANT

- An important consequence of this is that when you define the points that define a surface in a program, you MUST add them in anti-clockwise order

Calculating the normal

This is the definition of the vector product: for A = (ax, ay, az) and B = (bx, by, bz),

A × B = (ay.bz - az.by, az.bx - ax.bz, ax.by - ay.bx)
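As a sketch, the outward-normal calculation from three anti-clockwise points might look like this in Python (function name mine):

```python
def surface_normal(p0, p1, p2):
    # Edge vectors formed by subtracting consecutive points; both are
    # guaranteed to lie in the plane of the surface.
    a = tuple(p1[i] - p0[i] for i in range(3))
    b = tuple(p2[i] - p1[i] for i in range(3))
    # Vector (cross) product a x b, perpendicular to the plane containing
    # a and b.  With the points listed anti-clockwise (viewed from inside
    # the solid) this is the outward normal.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
```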

Visibility

- At this point we know:
- we need to calculate cos θ
- Values for lx,ly,lz
- values for nx,ny,nz

More complex shapes

- In these cases, each surface must be considered individually. Two different types of approach are possible:
- Object space algorithms - examine each face in space to determine its visibility
- Image space algorithms - at each screen pixel position, determine which face element is visible.

- Roughly speaking, the relative efficiency of an image space algorithm increases with the complexity of the scene being represented; however, the drawing can often be simplified for convex objects by removing surfaces which are invisible even for a single object.

Hidden Surface Removal

Algorithms that sort all the points, lines, and surfaces of an object and decide which are visible and which are not. Then, the visible surfaces are kept and the hidden surfaces are removed.

Object Space

- Make the calculations in three dimensions.
- Require intensive computing
- Generate data useful for rendering textures, shadows, and antialiasing
- EXAMPLE: Ray Tracing

Image Space

- Retain depth information of the objects in the scene
- Sort from a lateral position
- Sort only to the resolution of the display device
- Efficient, but discards some of the original 3D information used for shadowing, texturing, and antialiasing.

Ray Casting

- From the “eye” (or “camera”), a ray is cast through the first pixel of the screen
- Eye follows the ray until the ray either hits the surface of an object or exits from the viewable world
- If the ray hits an object, the program calculates the color of the object at the point where it has been hit. This becomes the color of the pixel through which the ray had been cast.
- This repeats through all the pixels of the image
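The loop above can be sketched as follows; `make_ray` and `intersect` are hypothetical stand-ins for a real camera and scene description:

```python
def cast_rays(width, height, make_ray, intersect, background=(0, 0, 0)):
    # make_ray(x, y) builds the ray from the eye through pixel (x, y);
    # intersect(ray) returns the color at the first surface the ray hits,
    # or None if the ray exits the viewable world.
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            color = intersect(make_ray(x, y))
            row.append(color if color is not None else background)
        image.append(row)
    return image
```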

Ray Casting

- How do we know what object to render, if there is more than one object in the path of the ray?

Common Algorithms

- Painter’s Algorithm
- Z-Buffer Algorithm

Painter’s Algorithm

- Sort all objects by depth
- Start rendering those objects furthest away
- Each new object covers up distant objects that have already been rendered
- Like a painter who creates the background before painting an object in the foreground
- Time consuming! (Esp. for large numbers of objects)
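A minimal sketch of the painter’s algorithm (Python, names mine): sort by depth and draw back to front:

```python
def painters_algorithm(objects, draw):
    # objects: (depth, item) pairs, with larger depth = farther from the eye.
    # Render the most distant items first so that nearer items are
    # painted over them.
    for depth, item in sorted(objects, key=lambda pair: pair[0], reverse=True):
        draw(item)
```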

Painter’s Algorithm

The distant mountains are painted first, followed by the closer meadows; finally, the closest objects in this scene, the trees, are painted.

Image from Wikipedia.com

Problems with Painter’s

- Overlapping polygons can cause the algorithm to fail.
- Led to the development of Z-buffer techniques

Image from Wikipedia.com

Z-Buffering

- aka Depth Buffering
- Makes use of a block of memory that stores distance information from the object’s surface to the “eye”. Larger numbers are farther away; smaller numbers are closer
- Renders objects in any order without presorting them
- When a ray hits an object, the depth in Z is calculated and stored in the Z-buffer at the Z-buffer pixel corresponding to the pixel through which the ray was cast.
- When the ray hits a second object, the depth is again calculated and compared with the previously stored value. If it is less (closer) than the stored value, the new value overwrites the old value.
- Usually done in hardware; sometimes in software
- http://en.wikipedia.org/wiki/Z-buffering
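The depth-compare logic can be sketched like so (Python, names mine; smaller z = closer, matching the description above):

```python
def zbuffer_render(width, height, fragments, far=float("inf")):
    # fragments: (x, y, z, color) samples in ANY order; no presorting.
    # Every buffer cell starts "infinitely far away".
    depth = [[far] * width for _ in range(height)]
    image = [[None] * width for _ in range(height)]
    for x, y, z, color in fragments:
        if z < depth[y][x]:      # new sample is closer: overwrite both buffers
            depth[y][x] = z
            image[y][x] = color
    return image
```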

The z-buffer algorithm

Based upon sorting the surfaces by their z-coordinates, the algorithm can be summarised thus:

- Sort the surfaces into order of increasing depth. Define the maximum z value of the surface and the z-extent.
- resolve any depth ambiguities
- draw all the surfaces starting with the largest z-value

Ambiguities

- Ambiguities arise when the z-extents of two surfaces overlap.

Resolving Ambiguities

- An algorithm exists for ambiguity resolution
- Where two shapes P and Q have overlapping z-extents, perform the following 5 tests (in sequence of increasing complexity).
- If any test fails, draw P first.

If all tests are passed…

- … then reverse P and Q in the list of surfaces sorted by Zmax
- set a flag to say that the test has been performed once.
- The flag is necessary for the case of intersecting planes.
- If the tests are all passed a second time, then it is necessary to split the surfaces and repeat the algorithm on the 4 surfaces

End up drawing Q2, P1, P2, Q1

Ray Casting Techniques – Problems

Problems – all raycasting algorithms deal with the shading of each object in the scene as if it existed in isolation, as if there were no other objects in the scene. Thus, other objects have no effect on the object being rendered, which rules out:

- Creating reflections (involves other objects in the scene)
- Casting shadows (objects casting shadows on each other)
- Transparency (see other objects through the first)

Fixing Ray Casting Problems…

- Ray Tracing
- Radiosity

Ray Tracing

Deals with all objects in a scene simultaneously. When considering a point on a surface, the color at that point is affected not only by light from a light source striking that surface, but also by light bouncing onto it from nearby surfaces or, if the object is transparent, by light traveling through it or other objects in the scene. Basically, every point is affected not by one light ray but by many light rays hitting it from different parts of the scene. To obtain accurate renderings, you must trace all of these back to their sources to see what contribution each source makes to the final color of the surface point.

Ray Tracing Example

Ray Tracing

- Ray tracing is complex and powerful – requires calculation of many surfaces to arrive at the final calculation for a single surface point.
- Takes longer rendering time, but the result is worth it.
- Effects of all surfaces show up in rendering:
- Shadows
- Reflections
- Transparency

Ray Tracing – Setting Limits

- In a complex scene with many objects, ray tracing can become immensely complex – continuous bouncing is impractical, so we define a maximum number of bounces. This is called the depth of raytracing.
- Some software also allows selective raytracing – you can render specific models with an extensive raytracing algorithm while using the less computationally expensive raycasting for the rest.
- (More later…)

Summary so far…

- Need for depth cues and hidden line removal
- Using surfaces rather than lines
- Calculating which way a surface is facing (surface normal)
- Base the visibility decision on the normal vector and the observer vector

What can we do now?

We can… (Add to project as a bonus – not required!)

- Write a program to display a cube as a wireframe drawing / adapt it to implement the hidden line algorithm so that only faces that point toward the viewer are drawn
- Adapt this to display the faces of the cube in areas of solid color (draw as filled polygons)
- Display a group of cubes, allowing the group to be rotated – implement Z-buffer or Painter’s Algorithm

- Surface Characteristics – the color and shininess of the objects in the scene. When treated as a conglomerate, the grouping of characteristics of a particular surface is called a shader or material. A shader includes the definition of the color for that surface, the definition of the shininess of that surface, etc.
- BENEFIT:
- Can save shader independently of surface, so it can be reused
- Can create entire libraries of shaders to describe various surfaces.

Defining Surface Properties

- Color (RGB, HSL, etc)
- Reflectivity (How does light bounce off the surface?)
- Diffuseness
- Highlights
- Transparency
- Reflectivity
- Incandescence

Diffuseness

- a measure of how much light reflects, overall, from a surface.
- small diffuse value – darker, because the surface reflects less light (at 0 the surface renders as black, because no light reflects from it)
- large diffuse value – lighter, because the surface reflects more light.

- Does not produce highlights – diffuse means equally in all directions. Highlights are produced when a lot of light is reflecting in a specific direction.
- Only diffuse – surfaces look perfectly matte (like cardboard)

Highlights (aka Specularity)

- More specular, more highlight. – metallic, shiny plastic, soap bubbles
- Less specular, less highlight – unfinished wood, paper, cloth
- Defined in conjunction with diffuseness – first supply a diffuse value to control the overall brightness of the surface, then a specular value to control the brightness of the highlights.

Size of Highlight

- Many systems also allow control over the size of the highlight. Why?
- Different materials produce highlights of different sizes
- Highlight on shiny chrome – very bright and condensed (small and focused)
- Highlight on extremely shiny plastic – bright, broader and less focused

- Using the highlight size parameter and the specularity parameter, you can define surfaces that closely resemble specific materials.

Highlight Color

- Different materials have different highlight colors.
- Plastic of any color tends to reflect white highlights
- Aluminum – tends to reflect a highlight of the same color as the aluminum

Reflectivity

The above three parameters define the shininess of the surface; however, shiny surfaces also tend to be reflective. Not all shiny surfaces are equally reflective; for example, shiny plastic is less reflective than shiny stainless steel, which is why we can tell them apart. Some systems allow you to control this with the reflectivity parameter – at 0 the surface doesn’t reflect; at 1 the surface is a perfectly reflective mirror.

Transparency

- A surface is either opaque or transparent. Most software gives you an option to control this characteristic (0 – no transparency, 1 – invisible!! [watch out for that one…])
- Color of surface is determined by combining the color of the transparent object with the color of whatever is behind it.
PITFALLS:

- Nothing behind transparent object with black background – combine color of object with black, producing a rendering of the object that looks darker, not transparent
- Decrease in the effect of highlights – highlight and normal surface color blends with whatever is behind the object. – sometimes solved by dramatically increasing specular component.

Refractivity

- (Ray Tracing to Render Appropriately)
- Physicists have determined experimentally the refractive index values for materials in the real world. (Straw in water)
- Air – index=1

- Refractive indices other than one cause light to bend and therefore create distortions as you look through the material in question. Indices larger than one bend light one way and smaller bend light the other.

Incandescence

- Some surfaces emit light instead of reflecting it (light bulbs). The larger the incandescence value, the more the surface appears to “glow”

So…

- Where does that leave us?
- What can we do?

Improving the visual realism of our system

- Improving the visual realism of our scenes by accurate “coloring-in”
- Shading algorithms for determining color are based on the properties of light - a bit of physics
- Lambert shading (aka Faceted Shading)
- Gouraud shading
- Phong shading

Wireframe -> shaded polygon

How do we calculate what shade to fill our polygon?

- Now, we need to determine how all of this information is used by the software to render the darks and lights (the shading) of our scenes. We need a shading algorithm or shading model (well-thought out, logical procedure).
- All shading algorithms represent simplifications of what happens in the real world. We couldn’t possibly take into account all the complexities, so any algorithm must make assumptions that allow it to manage the otherwise overwhelming complexity of the process. The closer the assumptions are to the real-world situation, the more natural the renderings look; however, this means longer rendering times.

What factors influence shade?

- Color and strength of incoming illumination
- Color and texture (rough/smooth) of surface
- Relative positions and orientations of surface, light source and observer

Simplifying Assumptions

- Consider Intensity - forget color for the moment (we’ll return to that later)
- Consider white light (intensity of all colour equal)
- gives monochrome (black and white) picture

Simplifying Assumptions

- Assume light source is infinitely distant
- Parallel incoming illumination

Assume light source is infinitely distant

- No change in light intensity across the scene
- Because there is little difference between the intensity of light on one side of an object than another, we only have to do one calculation of intensity for the whole surface
- NOTE: This is not true for close illumination

Faceted Shading / Lambert Shading

- Any flat surface is the same color at every point on the surface. (Normally not true in the physical world, where a flat surface may be darker at one end than the other.) Rendering this way is fast, but gives objects a faceted look.
- However, it introduces a concept important to all shading algorithms. To calculate the color of a given surface, any shading algorithm must know whether the surface it is about to render faces the light, because the surface will be lighter or darker depending on its orientation. To measure direction, programs use surface normals – a vector that is perpendicular to a surface at a given point on that surface, usually represented as an arrow extending from the surface. For a flat surface, one surface normal suffices to indicate the orientation of the entire surface, since all points on a given flat surface face the same direction.
- Using surface normals, rendering programs can compute the exact angle at which a surface is oriented toward the light.

Faceted Shading / Lambert Shading

- Curved surfaces add complications; instead of one normal, many are required to determine orientation, because each point can face a different direction – infinitely many different surface normals. Because of the added complications, many software packages convert curved surfaces to polygonal approximations just before rendering, to simplify calculation.
- As part of this process, triangulation subdivides curved surfaces into triangular polygons. Why? Triangles must be flat (polygons with more sides don’t have to be)

Implementing: Components of illumination

- Light reaching the eye has to come from some source bouncing off the surface of the object. There are three components to consider…

Components of illumination

- that from diffuse illumination (incident rays come from all over, not just one direction) - Ed
- that from a point source which is scattered diffusely from the surface - Esd
- that from a point source which is specularly reflected. - Ess

E = Ed + Esd + Ess

Combining the illumination

- Combining all three components (diffuse illumination, diffuse reflection from a point source and specular reflection from a point source) gives us the following expression:
E = Ed + Esd + Ess

Diffuse illumination

- Light that comes from all directions, not from one particular source.
- Think about the light of a grey cloudy day compared to a bright sunny one: on a cloudy day there are no shadows; light from the sun is scattered by the clouds and seems to come equally from all directions

Diffuse illumination

- A proportion of the light reaching the surface is reflected back to the observer.
- This proportion depends on the properties (colour) of the surface and
- has NO DEPENDENCE on the angle of the viewer (that’s why it’s diffuse!).

Diffuse illumination

Id - incident illumination

Ed - observed intensity

Ed = R.Id

where:

- R is the reflection coefficient of the surface (0 <= R <= 1); it is the proportion of the light that is reflected back out

Diffuse illumination

Diffuse illumination alone does not give visual realism. With no angular dependence in the light, a viewer will not be able to see any difference between a sphere and a disk.

Diffuse scattering from a point source

- When a light ray strikes a surface it is scattered diffusely (i.e. in all directions)
- Doesn’t change with the angle the observer is looking from

Diffuse scattering from a point source

- The intensity of the reflected rays is:
Esd = R.cos(i).Is

i - angle of incidence – the angle between the surface normal and the ray of light - 0 <= i <= 90

Esd -intensity of the scattered diffuse rays

Is - intensity of the incident light ray.

R - reflection coefficient of the surface (0 <= R <=1)
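In code, Esd = R.cos(i).Is might be sketched as follows (Python, names mine; cos(i) is obtained from the dot product of the unit normal N and the unit vector L toward the light):

```python
def diffuse_scatter(R, Is, N, L):
    # Esd = R * cos(i) * Is, where i is the angle of incidence: the angle
    # between the unit surface normal N and the unit vector L toward the
    # light source.  cos(i) comes straight from the dot product N.L.
    cos_i = sum(n * l for n, l in zip(N, L))
    return R * max(cos_i, 0.0) * Is   # clamp: no contribution past 90 degrees
```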

Diffuse scattering from a point source

- Esd - Doesn’t change with the angle the observer is looking from.
- i.e. - It doesn’t matter where you are looking at the surface from – that’s what diffuse means.

Specular reflection

- The relationship between a ray from a point source and the reflected ray is given by the law of reflection:

i = r

i - angle of incidence

r - angle of reflection

Specular reflection

- For a perfect reflector, all the incident light would be reflected back out in the direction of S. In fact, when the light strikes a surface it is scattered diffusely (i.e. in all directions):
- For an observer viewing at an angle (s) to the reflection direction S, some fraction of the reflected light is still visible (because the surface isn’t a perfect reflector – some degree of diffusion takes place).
- How much?

Specular reflection

- The proportion of light visible is a function of
- the angle s (in fact it is proportional to cos(s))
- the quality of the surface
- angle of incidence i.

- We can define a coefficient w(i) the specular reflection coefficient - which is a function of the material of which the surface is made and of i . Each surface will have its own w(i).

Specular reflection coefficient

Specular reflection

- Ess is the intensity of the light ray in the direction of O
- n is a fudge factor: n = 1 for a rough surface (paper), n = 10 for a smooth surface (glass)
- w(i) is usually never calculated – simply choose a constant (0.5?). It is actually derived from the physical properties of the material from which the surface is made.

Specular reflection

cos^n(s) – this is in fact a fudge which has no basis in physics, but it works to produce reasonable results. By raising cos(s) to the power n, we control how much the reflected ray spreads out as it leaves the surface.
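A sketch of the specular term showing how the exponent n controls the spread (Python, names mine):

```python
def specular_term(w_i, Is, cos_s, n):
    # Ess = w(i) * cos(s)**n * Is.  The exponent n controls the spread:
    # a large n narrows the highlight (smooth glass), while a small n
    # spreads it out (rough paper).
    return w_i * (max(cos_s, 0.0) ** n) * Is
```

Away from the exact mirror direction (cos(s) < 1), the n = 10 "glass" term falls off much faster than the n = 1 "paper" term.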

Combining the illumination

- Combining all three components (diffuse illumination, diffuse reflection from a point source and specular reflection from a point source) gives us the following expression:
E = Ed + Esd + Ess

Combining the illumination

E = R.Id + (R.cos(i) + w(i).cos^n(s)).Is

- E is the total intensity of light seen reflected from the surface by the observer.

Now, let’s go over what we know

- E – we’re trying to calculate E, so obviously that is unknown.
- R – is defined for each surface, so we need to add it as a variable to our surface class and define it when creating the surface; so it’s known.
- Id – the incident diffuse light – we can define this to be anything we like; 0 = darkness, for an 8-bit greyscale 255 = white. – Known
- cos(i) - we can work this out from L.N
- w(i) - we can define this to be anything between 0 and 1 - trial and error called for! - Known

Calculating E

- n – is defined for each surface, so we need to add it as a variable to our surface class and define it when creating the surface; so, basically, it’s known.
- Is – the intensity of the incident point light source – again we can define this to be anything we like; 0 = darkness, for an 8-bit greyscale 255 = white. (Coming… a discussion of adding lights to our data model.) – Known.
- cos(s)? Ah!

Calculating cos(s)

- cos(s) = S.O
- We know O
- Need S

Calculating cos(s): S = 2Q - L

- Thanks to the law of reflection we know that S is the mirror of the incident ray about the surface normal. It can be found with some vector maths: S = 2Q - L, where Q is the projection of L onto the unit normal N (i.e. Q = (L.N)N).

Lambert Shading

- Finally, we know all of the terms in the combined illumination equation and for any surface in our model we can calculate the appropriate shade.
E = R.Id + (R.cos(i) + w(i).cos^n(s)).Is

- A program which implements this model of shading is said to implement Lambert Shading
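The combined illumination equation can be sketched directly (Python, function name mine; cos(i) and cos(s) are assumed precomputed from the vectors as described above):

```python
def lambert_shade(R, Id, Is, cos_i, w_i, cos_s, n):
    # E = R.Id + (R.cos(i) + w(i).cos^n(s)).Is
    # R: reflection coefficient, Id: diffuse illumination intensity,
    # Is: point-source intensity, w_i: specular coefficient, n: smoothness.
    cos_i = max(cos_i, 0.0)   # clamp both cosines: no light from behind
    cos_s = max(cos_s, 0.0)
    return R * Id + (R * cos_i + w_i * cos_s ** n) * Is
```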

Extending to color

- Each surface has a color, which means it reflects different colours by different amounts, i.e. it has a different value of R for red, green and blue:
E_red = Ed_red + Esd_red + Ess_red

E_green = Ed_green + Esd_green + Ess_green

E_blue = Ed_blue + Esd_blue + Ess_blue
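A per-channel sketch (Python, names mine): the same combined-illumination formula run three times, once per color channel, each with its own reflection coefficient R:

```python
def lambert_shade_rgb(R_rgb, Id, Is, cos_i, w_i, cos_s, n):
    # R_rgb: (R_red, R_green, R_blue) - one reflection coefficient per
    # channel.  Everything else is shared between the three channels.
    cos_i = max(cos_i, 0.0)
    cos_s = max(cos_s, 0.0)
    return tuple(R * Id + (R * cos_i + w_i * cos_s ** n) * Is
                 for R in R_rgb)
```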

Summary so far…

- Illumination has 3 components……
- Diffuse
- Diffuse from point source
- Specular

- …. Which can be summed
- Calculated for each polygon…
- Lambert Shading

Problems with Lambert

- Using a different colour for each polygon means that the polygons show up very clearly (the appearance of the model is said to be facetted).

Mach bands

- This is a physiological effect whereby the contrast between two areas of a different shade is enhanced at the border between the shades.

Smooth Shading

- What is required is some means of smoothing the sudden transition in color.
- Various algorithms exist for this; amongst the best known are:
- Gouraud shading and
- Phong shading

- both named after their inventors.

Gouraud Shading

- The facetted appearance of a Lambert shaded model is due to each polygon having only a single color.
- To avoid this effect, it is necessary to vary the color across a polygon:

Gouraud Shading

- Gouraud shading makes surfaces look smoother by artificially adjusting the surface normals of the polygons. First, position a normal at each vertex (corner) of each polygon, perpendicular to the surface of its own polygon. At each vertex where two polygons touch, two surface normals (one for each polygon) are created, each perpendicular to the corresponding surface. The algorithm then averages these normals at a given vertex, and this new, averaged normal is what it uses. Then the color at each vertex is calculated and interpolated across the surface, from one corner to another.
- Produces smooth progression along the surface, but you can still see the underlying polygonal structure at the edges of the surface.

Gouraud Shading

CRITICAL ISSUE OF SHADING ALGORITHMS – HIGHLIGHTS

- If light hitting a surface bounces equally in all directions, the surface is diffuse; 100% diffuse = matte surface.
- If light bounces off only in one direction, we can see a highlight and the surface is specular (mirror-like)
- Gouraud shading does handle highlights.
- LIMITATIONS:
- interpolating one vertex color to another sometimes makes seams between polygons visible (esp. near bright light)

To Implement:

- Color must be calculated for each pixel of the polygon.
- The method we use to calculate the color results in the neighbouring pixels across the border between two polygons ending up with approximately the same colors.
- This blends the shades of the two polygons and avoids the sudden discontinuity at the border.

Gouraud shading

- based upon calculating a vertex normal
- an artificial construct (a true normal cannot exist for a point such as a vertex).
- can be thought of as the average of the normals of all the polygons that share that vertex
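A sketch of the vertex-normal calculation (Python, name mine): average the normals of the faces sharing the vertex, then re-normalize to unit length:

```python
import math

def vertex_normal(face_normals):
    # Average the (unit) normals of every polygon sharing the vertex.
    k = len(face_normals)
    avg = [sum(n[i] for n in face_normals) / k for i in range(3)]
    # Re-normalize: the raw average is generally shorter than unit length.
    length = math.sqrt(sum(c * c for c in avg))
    return tuple(c / length for c in avg)
```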

Gouraud shading

- Having found the vertex normals for each vertex of the polygon we want to shade, we can calculate the color at each vertex using the same formula that we did for Lambert Shading

Calculating the color of each pixel

- Interpolating “scan-line algorithm”
- Light intensity at P is given by linear interpolation between the intensities at the ends of the scan line
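The interpolation step can be sketched as plain linear interpolation (Python, names mine). The intensity at P is found by first interpolating the vertex intensities down the two polygon edges to the current scan line, then interpolating those two edge values across the span:

```python
def interpolate(v_a, v_b, x_a, x_b, x):
    # Linear interpolation used by the scan-line algorithm: the value at
    # position x between the span endpoints x_a and x_b is
    #   v = v_a + (v_b - v_a) * (x - x_a) / (x_b - x_a)
    t = (x - x_a) / (x_b - x_a)
    return v_a + (v_b - v_a) * t
```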

Phong Shading

- Like Gouraud, Phong shading averages surface normals at each vertex, but also computes interpolated normals between vertices. From those, it then calculates the colors of the surface. This yields a more accurate calculation of the colors, a more accurate rendering of the highlight, and invisible seams along the edges of the polygons. SLOWER than Gouraud

Phong Shading

LIMITATIONS:

- Assumes that the surface at any given point is perfectly smooth; this means light bouncing off that point is very concentrated, which makes highlights drop off abruptly (good for glossy metallics or plastics). Some materials (aluminum) have softer highlights because they are not perfectly smooth, meaning light bounces off in a more diffused way.
- If an object’s surface has sufficiently large microscopic irregularities, light hitting the surface bounces among them before leaving the surface, slightly diffused, toward our eye.

Phong Shading

- Phong shading is based on interpolating the surface normal vector
- The arrows (and thus the interpolated vectors) give an indication of the curvature of the smooth surface which the flat polygon is approximating to.

Phong Shading

- The interpolation is (like Gouraud shading) based upon calculating the vertex normals (red arrows)…
- …using these as the basis for interpolation along the polygon edges (blue arrows)…
- …and then using these as the basis for interpolating along a scan line to produce the internal normals (green vectors)
- a color value is calculated for each pixel based on the interpolated value of the normal vector
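A sketch of the per-pixel normal interpolation (Python, names mine); note the re-normalization before the shading formula is evaluated:

```python
import math

def interpolated_normal(n_a, n_b, t):
    # Blend two normals (t in [0, 1]) and re-normalize, since the raw
    # blend is generally shorter than unit length.  The shading formula
    # is then evaluated per pixel with this interpolated normal, which is
    # what distinguishes Phong shading from Gouraud's color interpolation.
    n = [a + (b - a) * t for a, b in zip(n_a, n_b)]
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)
```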

Gouraud vs Phong

- Phong shading requires more calculations, but produces better results for specular reflection than Gouraud shading, in the form of more realistic highlights.

Why?

- Consider the specular reflection term
cos^n(s)

If n is large (the surface is a good smooth reflector) and one vertex has a very small value of s (it is reflecting the light ray in the direction of the observer) whilst the rest of the vertices have large values of s, a highlight occurs somewhere on our polygon.

Why?

- With Gouraud shading, nowhere on the polygon can have a brighter color (i.e. a higher value) than a vertex
- unless the highlight occurs on or near a vertex, it will be missed altogether.
- When it is near a vertex, its effect is spread over the whole polygon.
- With Phong shading, however, an internal point may indeed have a higher value than a vertex, and the highlight will occur tightly focused in the (approximately) correct position

So…

- Lambert shading leads to a facetted appearance
- To get round this, use a smooth shading algorithm
- Gouraud and Phong shading produce good effects but at the cost of more calculations.
- Gouraud interpolates the calculated vertex colors
- Phong interpolates the calculated vertex normals
- Phong – slower but better highlights
