
CS 445 / 645 Introduction to Computer Graphics




Lecture 20

Antialiasing

- Used to model an object that reflects surrounding textures to the eye
- Polished sphere reflects walls and ceiling textures
- Cyborg in Terminator 2 reflects flaming destruction

- Texture is distorted fish-eye view of environment
- Spherical texture mapping creates texture coordinates that correctly index into this texture map

- http://developer.nvidia.com/object/Cube_Mapping_Paper.html

- Pipelining of multiple texture applications to one polygon
- The results of each texture unit application are passed to the next texture unit, which adds its effects
- More bookkeeping is required to pull this off

- A pixel is not…
- A box
- A disk
- A teeny tiny little light

- A pixel is a point
- It has no dimension
- It occupies no area
- It cannot be seen
- It can have a coordinate

A pixel is more than a point; it is a sample

- Most things in the real world are continuous
- Everything in a computer is discrete
- The process of mapping a continuous function to a discrete one is called sampling
- The process of mapping a continuous variable to a discrete one is called quantization
- Rendering an image requires sampling and quantization
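As an illustration of those two steps (my own sketch, not code from the lecture): sampling picks discrete x positions at which to evaluate a continuous function, and quantization snaps each sampled value to one of a fixed set of intensity levels.

```python
import math

def sample_and_quantize(f, x0, x1, num_samples, levels):
    """Sampling: evaluate the continuous function f at num_samples
    evenly spaced points on [x0, x1].
    Quantization: snap each sampled value to one of `levels` evenly
    spaced intensity values between the min and max sample."""
    xs = [x0 + (x1 - x0) * i / (num_samples - 1) for i in range(num_samples)]
    samples = [f(x) for x in xs]          # sampling: continuous -> discrete in x
    lo, hi = min(samples), max(samples)
    step = (hi - lo) / (levels - 1)
    # quantization: continuous -> discrete in value
    return [lo + round((s - lo) / step) * step for s in samples]

# One period of a sine wave, 16 samples, 4 intensity levels
vals = sample_and_quantize(lambda x: math.sin(2 * math.pi * x), 0.0, 1.0, 16, 4)
assert len(set(vals)) <= 4    # at most 4 distinct quantized intensities
```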

- We tried to sample a line segment so it would map to a 2D raster display
- We quantized the pixel values to 0 or 1
- We saw stair steps, or jaggies

- Instead, quantize to many shades
- But what sampling algorithm is used?

- Shade pixels according to the area covered by thickened line
- This is unweighted area sampling
- A rough approximation can be computed by dividing each pixel into a finer grid of subpixels

- Primitive cannot affect intensity of pixel if it does not intersect the pixel
- Equal areas cause equal intensity, regardless of distance from pixel center to area
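A minimal sketch of unweighted area sampling (my own illustration, not code from the lecture): estimate the area a primitive covers in a pixel by testing a finer grid of subpixel centers, with every subpixel weighted equally.

```python
def coverage(inside, px, py, grid=8):
    """Fraction of pixel (px, py) covered by a primitive, estimated by
    testing a grid x grid array of subpixel centers. `inside(x, y)` is
    any point-in-primitive test; every subpixel counts equally, so
    equal covered areas yield equal intensities."""
    hits = 0
    for i in range(grid):
        for j in range(grid):
            x = px + (i + 0.5) / grid   # subpixel center, x
            y = py + (j + 0.5) / grid   # subpixel center, y
            if inside(x, y):
                hits += 1
    return hits / (grid * grid)

# A half-plane primitive (everything below the line y = x) cuts the
# pixel diagonally, so about half the subpixel centers fall inside.
print(coverage(lambda x, y: y < x, 0, 0))  # → 0.4375
```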

- Unweighted sampling colors two pixels identically when the primitive cuts the same area through the two pixels
- Intuitively, pixel cut through the center should be more heavily weighted than one cut along corner

[Figure: weighting function W(x, y) plotted as intensity versus distance x from the pixel center]

- Weighting function, W(x,y)
- specifies the contribution of a primitive passing through the point (x, y), as a function of distance from the pixel center

- An image is a 2D function I(x, y) that specifies intensity for each point (x, y)

- Our goal is to convert the continuous image to a discrete set of samples
- The graphics system’s display hardware will attempt to reconvert the samples into a continuous image: reconstruction

- Simplest sampling is on a grid
- Sample depends solely on value at grid points

- Multiply sample grid by image intensity to obtain a discrete set of points, or samples.

Sampling Geometry

- Some objects missed entirely, others poorly sampled

- Supersampling
- Take more than one sample for each pixel and combine them
- How many samples is enough?
- How do we know no features are lost?

- Take more than one sample for each pixel and combine them

[Images: supersampled renderings downsampled to 100x10, from 150x15, 200x20, 300x30, and 400x40]

- Average supersampled points
- All points are weighted equally
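That equal-weight average can be sketched as a box-filter downsample (my own illustration, assuming a grayscale image stored as nested lists):

```python
def box_downsample(img, factor):
    """Unweighted (box-filter) resolve: average each factor x factor
    block of supersamples down to one pixel; every sample in the block
    is weighted equally. Assumes dimensions divisible by factor."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [img[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A 4x4 supersampled image resolved to 2x2
img = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
print(box_downsample(img, 2))  # → [[1.0, 0.0], [0.0, 1.0]]
```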

- Points in pixel are weighted differently
- Flickering occurs as object moves across display

- Overlapping regions eliminate flicker

- Convert spatial signal to frequency domain

[Figure: intensity plotted against pixel position across a scanline; example from Foley, van Dam, Feiner, and Hughes]

- Represent spatial signal as sum of sine waves (varying frequency and phase shift)
- Very commonly used to represent a sound “spectrum”

- Convert spatial domain to frequency domain
- Let f(x) indicate the intensity at a location in space, x (pixel value)
- u is a complex number representing frequency and phase shift
- i = sqrt (-1) … frequently not plotted
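Written out in the standard form these bullets describe, the forward transform and its inverse are:

```latex
F(u) = \int_{-\infty}^{\infty} f(x)\, e^{-i 2\pi u x}\, dx
\qquad\qquad
f(x) = \int_{-\infty}^{\infty} F(u)\, e^{i 2\pi u x}\, du
```

so the magnitude of F(u) gives the amplitude of frequency u in the signal f(x).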

- F(u) is the amplitude of a particular frequency in a signal
- In this case the signal is f(x)

- Examples of spatial and frequency domains

- The ideal samples of a continuous function contain all the information in the original function if and only if the continuous function is sampled at a frequency greater than twice the highest frequency in the function

- The lower bound on the sampling rate equals twice the highest frequency component in the image’s spectrum
- This lower bound is the Nyquist Rate

- If you know a function contains no components of frequencies higher than x
- Band-limited implies original function will not require any ideal functions with frequencies greater than x
- This facilitates reconstruction

- Samples may not align with peaks

- When sampling below Nyquist Rate, resulting signal looks like a lower-frequency one
- With no knowledge of band-limits, samples could have been derived from signal of higher frequency
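A concrete check of that claim (a sketch; the frequencies are chosen for illustration): a 3 Hz sine sampled at 4 Hz, below its 6 Hz Nyquist rate, produces exactly the same samples as a 1 Hz sine.

```python
import math

fs = 4.0   # sampling rate (Hz), below the Nyquist rate of 2 * 3 = 6 Hz
f  = 3.0   # signal frequency (Hz)

# Samples of the 3 Hz sine...
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(8)]
# ...match samples of a 1 Hz alias, sin(2*pi*(f - fs)*t), at every point
alias   = [math.sin(2 * math.pi * (f - fs) * n / fs) for n in range(8)]

for s, a in zip(samples, alias):
    assert abs(s - a) < 1e-9   # identical at every sample point
print("3 Hz sampled at 4 Hz is indistinguishable from a 1 Hz signal")
```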

- We know we are limited in the resolution of our screen
- We want the screen (sampling grid) to have twice the resolution of the signal (image) we want to display
- How can we reduce the high-frequencies of the image?
- Low-pass filter
- Band-limits the image

- In frequency domain
- If signal is F(u)
- Just chop off parts of F(u) in high frequencies using a second function, G(u)
- G(u) = 1 when -k <= u <= k, and 0 elsewhere
- This is called the pulse function

- In spatial domain
- Multiplying two Fourier transforms in the frequency domain corresponds exactly to performing an operation called convolution in the spatial domain
- f(x) * g(x) = h(x), the convolution of f with g
- The value of h(x) at x is the integral of the product of f with the filter g, with g centered at x
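A direct sketch of that definition in discrete form (my own illustration; the filter is just a short list of tap weights):

```python
def convolve(f, g):
    """Discrete 1D convolution h = f * g: slide the filter g along the
    signal f and accumulate the products at each output position.
    Returns len(f) + len(g) - 1 values ('full' convolution)."""
    h = [0.0] * (len(f) + len(g) - 1)
    for i, fv in enumerate(f):
        for j, gv in enumerate(g):
            h[i + j] += fv * gv
    return h

# Convolving a step edge with a normalized box filter smooths the edge
signal = [0, 0, 0, 1, 1, 1]
box    = [1/3, 1/3, 1/3]
print([round(v, 3) for v in convolve(signal, box)])
# → [0.0, 0.0, 0.0, 0.333, 0.667, 1.0, 0.667, 0.333]
```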

- The pulse function (frequency domain) corresponds to the sinc function (spatial domain)

- In spatial domain
- Sinc: sinc(x) = sin(πx) / (πx)
- Note this isn’t a perfect way to eliminate high frequencies
- “ringing” occurs


- Slide filter along spatial domain and compute new pixel value that results from convolution

- Sometimes called a tent filter
- Easy to compute
- just linearly interpolate between samples

- Finite extent and no negative values
- Still has artifacts

- Nvidia GeForce2
- OpenGL: render image 400% larger and supersample
- Direct3D: render image 400% - 1600% larger

- Nvidia GeForce3
- Multisampling but with fancy overlaps
- Don’t render at higher resolution
- Use one image, but combine values of neighboring pixels
- Beware of recognizable combination artifacts
- Human perception of patterns is too good

- Multisampling but with fancy overlaps

- Multisampling
- After each pixel is rendered, write pixel value to two different places in frame buffer

- After rendering two copies of entire frame
- Shift pixels of Sample #2 left and up by ½ pixel
- Imagine laying Sample #2 (red) over Sample #1 (black)

- Resolve the two samples into one image by averaging each pixel from Sample 1 (black) with the four pixels from Sample 2 (red) that are 1/sqrt(2) pixels away
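One plausible reading of that resolve step, sketched in code (the actual hardware weighting is not given on the slide, so the 50/50 split between the center sample and the mean of the four shifted samples is an assumption):

```python
def resolve(s1, s2):
    """Hypothetical resolve of the two-sample scheme described above.
    Sample #2 is conceptually shifted up-left by half a pixel, so the
    four s2 pixels nearest an s1 pixel lie 1/sqrt(2) pixels away.
    Each output pixel averages the s1 value with the mean of those
    four s2 values (assumed weighting); edges are clamped."""
    h, w = len(s1), len(s1[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            neighbors = [s2[min(y + dy, h - 1)][min(x + dx, w - 1)]
                         for dy in (0, 1) for dx in (0, 1)]  # clamp at edges
            row.append(0.5 * s1[y][x] + 0.5 * sum(neighbors) / 4)
        out.append(row)
    return out

print(resolve([[1, 0], [0, 1]], [[1, 0], [0, 1]]))
# → [[0.75, 0.25], [0.25, 1.0]]
```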

[Images: no AA vs. multisampling]

GeForce3 - Multisampling

[Images: 4x supersampling vs. multisampling]

- ATI SmoothVision
- Programmer selects sampling pattern
- Some supersampling thrown in

[Images: ATI Radeon 8500 vs. NVidia GF3 Ti 500, with AA off, 2x AA, and Quincunx AA]

http://www.anandtech.com/video/showdoc.html?i=1562&p=4

- 3dfx Multisampling
- 2- or 4-frame shift and average

- Tradeoffs?