
H331: Computer Graphics



Philip Dutré

Department of Computer Science

Wednesday, February 19

- Graphics programming
- Book: Chapters 3, 4, 10

- Pre-practicum available
- http://www.cs.kuleuven.ac.be/~graphics/H331/

- SIGGRAPH 2003, San Diego, July 27 – 31
- http://www.siggraph.org/s2003/
- Student volunteers! (Deadline: Wednesday, February 26)

- How to transform the 3D world to a 2D image?
- 3 aspects:
- Objects: exist in space, independent of viewer
- Viewer: camera, human, …
- Lights: shading, shadows, …

Objects (points, lines, polygons): described by vertices

Lights (electromagnetic spectrum, 350–780 nm)

Viewer = camera: projection plane in front of the center of projection

Clipping: looking through a window

- Synthetic camera is basis of 3D API
- OpenGL, PHIGS, Direct 3D, VRML, Java 3D, GKS, …

- We need functions to specify:
- Objects: vertices that describe points, lines, polygons
- Camera: position, orientation, width, height
- Light sources: position, color
- Materials: reflection characteristics

glBegin(GL_POLYGON);
    glVertex3f(0.0, 0.0, 0.0);
    glVertex3f(0.0, 1.0, 0.0);
    glVertex3f(0.0, 0.0, 0.1);
glEnd();

…

gluLookAt(posx, posy, posz, atx, aty, atz, …);

gluPerspective(view_angle, …);

- 3D API performs modeling + rendering
- But … modeling can also be done ‘off-line’
- Write model to file
- Read file in 3D API and transform to 3D API modeling commands

- RenderMan (Pixar)
- Prepares off-line model for rendering
- Rendering takes ‘converted’ model

- 3D ‘world’ coordinates → 2D ‘screen’ coordinates
- 3D vertex → 2D pixel

Transform to camera coordinate system

Clip away things we don’t see in the camera window

Project: 3D coordinates → 2D coordinates

Transform to pixels in the frame buffer

- Little algorithm (the “chaos game”, which draws the Sierpinski gasket):
- Pick 3 points V0, V1, V2
- P0 = random point
- Pk = midpoint between Pk−1 and a randomly chosen one of V0, V1, V2
- Plot all Pk
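The steps above can be sketched in C. This is a minimal sketch, not from the slides: the triangle vertices, starting point, and the output array standing in for actual plotPixel calls are illustrative choices.

```c
#include <stdlib.h>

/* Chaos-game sketch: start from an arbitrary point, then repeatedly
 * jump halfway toward a randomly chosen vertex of the triangle.
 * Vertices and the starting point are arbitrary illustrative values. */
static const double vx[3] = {0.0, 1.0, 0.5};   /* triangle vertices V0..V2 */
static const double vy[3] = {0.0, 0.0, 1.0};

/* Fills xs/ys with the iterates P1..Pn; caller supplies arrays of size n. */
void chaos_game(double *xs, double *ys, int n) {
    double px = 0.25, py = 0.25;               /* P0: some starting point */
    for (int k = 0; k < n; k++) {
        int j = rand() % 3;                    /* pick V0, V1, or V2 */
        px = 0.5 * (px + vx[j]);               /* midpoint of Pk-1 and Vj */
        py = 0.5 * (py + vy[j]);
        xs[k] = px;                            /* in a real program: plotPixel */
        ys[k] = py;
    }
}
```

Plotting the iterates traces out the Sierpinski gasket; note that every Pk stays inside the triangle, since the midpoint of two points in a convex set remains in the set.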

- Given: window on the screen
- Graphics API (e.g. OpenGL) has something of the form:
plotPixel(int x, int y)


- plotPixel(289,190)
- plotPixel(320,128)
- plotPixel(239,67)
- plotPixel(194,101)
- plotPixel(129,83)
- plotPixel(75,73)
- plotPixel(74,74)
- plotPixel(20,10)

[Figure: screen coordinates — plotPixel(x, y) addresses a pixel in the window, which is itself positioned within the screen’s (X, Y) coordinate system]

- Coordinates are expressed in screen space, but objects live in (3D) world space
- Resizing window implies we have to change coordinates of objects to be drawn
- We want to make a separation between:
- values to describe geometrical objects
- values needed to draw these objects on the screen

- Specify points to OpenGL

glVertex*( … )

glVertex2i( … ), glVertex3i( … )

glVertex2f( … ), glVertex3f( … )

glBegin(GL_LINES);
    glVertex2f(x1, y1);
    glVertex2f(x2, y2);
glEnd();

glBegin(GL_POINTS);
    glVertex2f(x1, y1);
    glVertex2f(x2, y2);
glEnd();

for (k = 0; k < 500; k++) {
    …
    // compute point k
    x = …;
    y = …;
    glBegin(GL_POINTS);
        glVertex2f(x, y);
    glEnd();
}

glFlush();

- OpenGL = set of libraries

- OpenGL supports geometric primitives and raster primitives

- Geometric primitives are defined by vertices
- GL_POINTS
- GL_LINES
- GL_LINE_STRIP, GL_LINE_LOOP

- Closed loops = polygons
- Polygons: describe surfaces

- GL_POLYGON, GL_QUADS, …

- Scene is independent of camera
- gluOrtho2D(left, right, bottom, top)

void triangle(point3 a, point3 b, point3 c) {
    glBegin(GL_POLYGON);
        glVertex3fv(a);
        glVertex3fv(b);
        glVertex3fv(c);
    glEnd();
}

void tetrahedron() {
    glColor3f(1.0, 0.0, 0.0);
    triangle(v[0], v[1], v[2]);
    …
}

- Hidden surfaces?
- Z-buffer
- Keep depth for each pixel

- Initialize!
- glClear(GL_COLOR_BUFFER_BIT);
- glClear(GL_DEPTH_BUFFER_BIT);
- …
- glFlush();

- World window: specifies what part of the world should be drawn
- Viewport: rectangular area in the screen window in which we will draw

[Figure: the world window, bounded by Wl, Wr, Wb, Wt, is mapped to the viewport, bounded by Vl, Vr, Vb, Vt, inside the screen window]

Maintain proportions!

[Figure: a coordinate x, some fraction of the way between Wl and Wr, maps to sx, the same fraction of the way between Vl and Vr]

- If x = Wl, then sx = Vl
- If x = Wr, then sx = Vr
- If x lies a fraction f of the way between Wl and Wr, then sx lies the same fraction f of the way between Vl and Vr
- If x < Wl, then sx < Vl
- If x > Wr, then sx > Vr
- … and similarly for y and sy
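These conditions pin down a linear map. A minimal sketch in C (the function name is mine, not from the slides):

```c
/* Window-to-viewport map implied by the conditions above:
 * x = Wl maps to Vl, x = Wr maps to Vr, and the fraction of the way
 * across the window is preserved. The y -> sy map has the same form. */
double map_coord(double x, double wl, double wr, double vl, double vr) {
    double f = (x - wl) / (wr - wl);   /* fraction of the way across the window */
    return vl + f * (vr - vl);         /* same fraction across the viewport */
}
```

Because the map is linear rather than clamped, points left of the window land left of the viewport (sx < Vl) and points right of it land right, exactly as listed above.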

- Pick size automatically

[Figure: fitting a world window of aspect ratio R into a W × H screen window — if R > W/H the viewport spans the full width W, if R < W/H it spans the full height H]
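Picking the viewport size automatically can be sketched as follows; this is one possible formulation matching the two cases above, with the function name and output convention my own:

```c
/* Largest viewport of aspect ratio R that fits in a W-by-H screen window.
 * If R > W/H, the world window is relatively wide: use the full width
 * and shrink the height. Otherwise use the full height. */
void fit_viewport(double R, double W, double H, double *vw, double *vh) {
    if (R > W / H) {
        *vw = W;          /* full width */
        *vh = W / R;      /* height chosen so that vw/vh == R */
    } else {
        *vw = H * R;      /* width chosen so that vw/vh == R */
        *vh = H;          /* full height */
    }
}
```

Either way the resulting viewport has aspect ratio R, so proportions are maintained when the screen window is resized.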

- Lines outside of world window are not to be drawn.
- Graphics API clips them automatically.
- But clipping is a general tool in graphics!

[Figure: example segments A–E in and around the clip window]

clipSegment(…):

- Return 1 if line within window
- Return 0 if line outside window
- If line partially inside, partially outside: clip and return 1


- Trivial accept/reject test!

Trivial reject

Trivial accept

- 4 bits per endpoint, one per question: left of window? above window? right of window? below window?
- Example: TTFF = left of and above the window

- Trivial accept: both endpoints are FFFF
- Trivial reject: both endpoints have T in the same position

TTFF  FTFF  FTTF
TFFF  FFFF  FFTF
TFFT  FFFT  FFTT
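The 4-bit code takes one comparison per window edge. A sketch using the slide's bit order (left, above, right, below); the particular mask values are my choice:

```c
/* Cohen-Sutherland outcode, bits in the slide's order:
 * left, above, right, below. T/F on the slide = set/clear bit here. */
enum { LEFT = 8, ABOVE = 4, RIGHT = 2, BELOW = 1 };

int outcode(double x, double y,
            double wl, double wr, double wb, double wt) {
    int code = 0;                 /* FFFF: inside the window */
    if (x < wl) code |= LEFT;
    if (y > wt) code |= ABOVE;
    if (x > wr) code |= RIGHT;
    if (y < wb) code |= BELOW;
    return code;
}
```

Trivial accept: both endpoint codes are 0 (FFFF). Trivial reject: the bitwise AND of the two codes is nonzero (a T in the same position).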

- If a segment is neither a trivial accept nor a trivial reject:
- Clip against the edges of the window in turn


- int clipSegment(point p1, point p2)

    do {
        if (trivial accept) return 1;
        if (trivial reject) return 0;
        if (p1 is outside)
            if (p1 is left) chop left;
            else if (p1 is right) chop right;
            …
        if (p2 is outside)
            …
    } while (1);
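The loop can be filled out into a complete clipper. This is a self-contained sketch of the Cohen-Sutherland approach described above; the struct, helper, and bit-mask choices are mine:

```c
/* Cohen-Sutherland segment clipping. Outcode bits in the slide's
 * order: left (8), above (4), right (2), below (1). */
typedef struct { double x, y; } point;

static int code_of(point p, double wl, double wr, double wb, double wt) {
    int c = 0;
    if (p.x < wl) c |= 8;   /* left  */
    if (p.y > wt) c |= 4;   /* above */
    if (p.x > wr) c |= 2;   /* right */
    if (p.y < wb) c |= 1;   /* below */
    return c;
}

/* Clips segment p1-p2 against the window; returns 1 (and updates the
 * endpoints) if any part is visible, 0 if it lies fully outside. */
int clip_segment(point *p1, point *p2,
                 double wl, double wr, double wb, double wt) {
    for (;;) {
        int c1 = code_of(*p1, wl, wr, wb, wt);
        int c2 = code_of(*p2, wl, wr, wb, wt);
        if ((c1 | c2) == 0) return 1;        /* trivial accept */
        if ((c1 & c2) != 0) return 0;        /* trivial reject */
        /* some endpoint is outside: chop it at the violated edge */
        int c = c1 ? c1 : c2;
        point *p = c1 ? p1 : p2;
        if (c & 8)      { p->y += (p2->y - p1->y) * (wl - p->x) / (p2->x - p1->x); p->x = wl; }
        else if (c & 2) { p->y += (p2->y - p1->y) * (wr - p->x) / (p2->x - p1->x); p->x = wr; }
        else if (c & 4) { p->x += (p2->x - p1->x) * (wt - p->y) / (p2->y - p1->y); p->y = wt; }
        else            { p->x += (p2->x - p1->x) * (wb - p->y) / (p2->y - p1->y); p->y = wb; }
    }
}
```

Each chop moves an outside endpoint along the line to a window edge, so the loop terminates after at most a few iterations. The divisions are safe: a vertical segment with a left or right bit set has that bit set at both endpoints and is trivially rejected first (likewise horizontal segments with above/below bits).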

- What is an image?
- Array of pixels

- How to convert lines and polygons to pixels?
- Continuous to discrete
- Scan conversion

- Early displays were vector displays
- Electron beam traces lines
- Image is sequence of endpoints
- Wireframes, no solid fills

- Raster displays
- Electron beam traces regular pattern
- Image is 2D array of pixels
- Fast, but discretisation errors

- Every pixel has b bits for color
- B&W: 1 bit
- Basic colors: 8, 15, 16, 24 bits
- High-end: 96 bits

- Raster image is stored in memory as a 2D array of pixels = framebuffer
- The color of each pixel determines the intensity of the beam
- Video hardware scans framebuffer at 60Hz
- Changes in the framebuffer would show on screen while being drawn => double buffering
- Switch buffers when one buffer is finished

Graphics software (rasterizer) → framebuffer (double buffer) → video controller → display

- How to rasterize a line, once its 2D screen coordinates are known?
- Given: endpoints of a line
- What pixels to draw?

- find the pixels closest to the ideal line
- assume slope m ≤ 1: illuminate one pixel per column, work incrementally
- if m > 1: swap the roles of x and y.

y = y1;
for (i = x1; i <= x2; i++) {
    plotPixel(i, round(y));
    y += m;
}
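Made self-contained, the incremental (DDA) loop looks like this; a sketch in which the output array stands in for plotPixel calls, with integer endpoints assumed:

```c
/* The incremental loop above, made self-contained. Instead of drawing,
 * pixel y-values are stored per column: ys[i - x1] for x1 <= i <= x2.
 * Assumes integer endpoints and slope m = dy/dx with 0 <= m <= 1. */
void dda_line(int x1, int y1, int x2, int y2, int *ys) {
    double m = (double)(y2 - y1) / (double)(x2 - x1);
    double y = y1;
    for (int i = x1; i <= x2; i++) {
        ys[i - x1] = (int)(y + 0.5);   /* round to the nearest pixel row */
        y += m;                        /* one floating-point add per column */
    }
}
```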

- Inefficient: computes round(y) and a floating-point addition for each integer x
- Bresenham’s algorithm: only integer arithmetic
- Standard for most HW+SW rasterizers

- What’s the next pixel?
- Decision variable d = a − b: if (d > 0) … else …
- Or d = Δx(a − b)

- dk+1 = dk − 2Δy  or  dk+1 = dk − 2(Δy − Δx)
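The incremental update above can be sketched as an integer-only line rasterizer. This follows the common midpoint formulation for slopes between 0 and 1 (initialization d = 2Δy − Δx and my sign convention, which may differ in sign from the slide's choice of decision variable); the output array stands in for plotPixel calls:

```c
/* Bresenham/midpoint line for 0 <= slope <= 1, integer arithmetic only.
 * Stores one y per column in ys[x - x1]. Convention here: d > 0 means
 * the ideal line has passed the midpoint, so step up in y (NE step). */
void bresenham_line(int x1, int y1, int x2, int y2, int *ys) {
    int dx = x2 - x1, dy = y2 - y1;
    int d = 2 * dy - dx;                         /* initial decision variable */
    int y = y1;
    for (int x = x1; x <= x2; x++) {
        ys[x - x1] = y;
        if (d > 0) { y++; d += 2 * (dy - dx); }  /* NE step: move up a row */
        else       { d += 2 * dy; }              /* E step: stay on this row */
    }
}
```

The per-column update is a single integer addition, which is why this is the standard for hardware and software rasterizers.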