
Visible Surface Detection


Presentation Transcript


  1. Visible Surface Detection Shmuel Wimer Bar Ilan Univ., Eng. Faculty Technion, EE Faculty

  2. Back-Face Detection
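
  The equations for this slide did not survive in the transcript. As a rough sketch of the standard back-face test: a polygon with outward normal N = (A, B, C) is a back face when the viewing direction V_view satisfies V_view · N > 0, and when viewing along the negative z_v axis of viewing coordinates this reduces to checking C <= 0. The helper names below are illustrative, not from the slides.

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def is_back_face(normal, view_dir):
        # The polygon faces away from the viewer when the viewing
        # direction and the outward surface normal point the same way.
        return dot(view_dir, normal) > 0

    def is_back_face_zv(normal):
        # Viewing along the negative z_v axis: only the normal's
        # z component matters.
        A, B, C = normal
        return C <= 0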

  3. Depth-Buffer Methods Three surfaces overlapping pixel position (x, y) on the view plane. The visible surface, S1, has the smallest depth value.
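
  A minimal depth-buffer (z-buffer) sketch, assuming depth values are normalized so that a smaller z means closer to the view plane, as on the slide, and that each surface can enumerate the pixels it covers. The surface.rasterize() method and other names are illustrative assumptions.

    def z_buffer(width, height, surfaces, background):
        # surfaces: iterable of objects whose rasterize() yields
        # (x, y, depth, color) samples; smaller depth = closer to the viewer.
        depth = [[float("inf")] * width for _ in range(height)]
        frame = [[background] * width for _ in range(height)]
        for surface in surfaces:
            for x, y, z, color in surface.rasterize():
                if z < depth[y][x]:          # new sample is nearer: keep it
                    depth[y][x] = z
                    frame[y][x] = color
        return frame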

  4. Efficient Depth Calculation Given the depth value at a vertex of a polygon, the depth of any other point in the plane containing the polygon can be calculated efficiently (additions only).

  5. Scanning can start at the top vertex of the polygon to obtain the first depth value. Progressing from one scan line to the next, y decreases by 1, which simplifies the depth calculation of the first pixel along each scan line.
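
  A sketch of the additions-only depth update, assuming the polygon lies in the plane Ax + By + Cz + D = 0 (C != 0), that x advances by +1 along a scan line, and that moving to the next scan line down an edge of slope m decreases y by 1. Function and variable names are illustrative.

    def depth_at(A, B, C, D, x, y):
        # Direct evaluation from the plane equation (used once, at the start).
        return (-A * x - B * y - D) / C

    def next_depth_along_scanline(z, A, C):
        # Depth at (x + 1, y) from the depth z at (x, y): one addition,
        # since A / C can be precomputed for the polygon.
        return z - A / C

    def next_depth_down_edge(z, A, B, C, m):
        # Depth at the first pixel of scan line y - 1, following a polygon
        # edge of slope m; for a vertical edge this reduces to z + B / C.
        return z + (A / m + B) / C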

  6. A-Buffer Method The name plays on the z-buffer: the z-buffer stores only depth, and the A-buffer sits at the other end of the alphabet. It extends the depth buffer into an antialiased, area-averaged visibility-detection method that also supports transparency, e.g. a foreground transparent surface in front of a background opaque surface.
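
  A sketch of the usual A-buffer organization, following the textbook description; field names are illustrative. Each buffer position holds a depth field and a second field that is either the intensity data of a single surface or, when the depth field is flagged negative, a list of fragment records for several overlapping (possibly transparent) surfaces.

    from dataclasses import dataclass

    @dataclass
    class Fragment:
        # Data kept for one contributing surface at a pixel position.
        rgb: tuple
        opacity: float
        depth: float
        coverage: float      # fraction of the pixel area covered

    @dataclass
    class ABufferCell:
        # depth >= 0: a single surface; `surf` holds its intensity data.
        # depth <  0: flag; `surf` holds a list of Fragment records.
        depth: float
        surf: object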

  7. Ray Tracing (Figure: rays from the projection reference point traced through opaque and transparent surfaces.)

  8. Ray tracing is highly realistic: it handles visibility, transparency, shadows, and illumination effects, and generates perspective views. It is computationally intensive.

  9. Binary Ray-Tracing Tree (Figure: a ray from the projection reference point branching at transparent and opaque surfaces.)

  10. Left branches represent reflection paths; right branches represent transmission paths. A path is terminated if any of the following occurs:
  • The ray intersects no surface.
  • The ray intersects a light source that is not a reflecting surface.
  • The tree has reached a depth limit.
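
  A sketch of building the binary ray-tracing tree with the termination rules above. The intersect(), reflect(), and transmit() helpers, the hit-record attributes, and the depth limit are assumptions for illustration, not from the slides.

    MAX_DEPTH = 5        # illustrative depth limit

    def build_tree(ray, scene, depth=0):
        # Left child follows the reflected ray, right child the transmitted ray.
        if depth >= MAX_DEPTH:
            return None                      # depth limit reached
        hit = intersect(ray, scene)          # nearest surface hit, or None
        if hit is None:
            return None                      # ray escapes the scene
        node = {"hit": hit, "left": None, "right": None}
        if hit.is_light_source:
            return node                      # non-reflecting light source: leaf
        node["left"] = build_tree(reflect(ray, hit), scene, depth + 1)
        if hit.is_transparent:
            node["right"] = build_tree(transmit(ray, hit), scene, depth + 1)
        return node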

  11. u – incoming ray
  R – unit reflected ray
  N – unit surface normal
  L – unit vector pointing to the light source
  H – unit vector halfway between –u and L
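
  A sketch of the standard relations among these vectors, assuming u, N, and L are unit vectors: the unit reflected ray is R = u - 2(u·N)N, and H is the normalization of -u + L. Helper names are illustrative.

    import math

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def normalize(v):
        n = math.sqrt(dot(v, v))
        return tuple(x / n for x in v)

    def reflected(u, N):
        # R = u - 2 (u . N) N, with u the incoming unit ray, N the unit normal.
        d = dot(u, N)
        return tuple(ux - 2.0 * d * nx for ux, nx in zip(u, N))

    def halfway(u, L):
        # H = unit vector halfway between -u and L.
        return normalize(tuple(lx - ux for ux, lx in zip(u, L)))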

  12. The path along the direction of L is referred to as the shadow ray. If any object intersects the shadow ray between the surface and the point light source, the surface position is in shadow with respect to that source.
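
  A sketch of the shadow test described here, assuming a point light source and that each scene object has an intersect() method returning the smallest positive ray parameter or None. The small offset eps and all names are illustrative assumptions.

    import math

    def in_shadow(point, light_pos, scene, eps=1e-4):
        # True if any object blocks the shadow ray between the surface
        # point and the point light source.
        to_light = tuple(l - p for l, p in zip(light_pos, point))
        dist = math.sqrt(sum(c * c for c in to_light))
        L = tuple(c / dist for c in to_light)
        origin = tuple(p + eps * l for p, l in zip(point, L))   # avoid self-hit
        for obj in scene:
            s = obj.intersect(origin, L)
            if s is not None and s < dist:
                return True
        return False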

  13. At a transparent surface, light is transmitted through the material. The unit transmission vector T is obtained from u, N, and the refraction indices of the two media.
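
  A sketch of the usual vector form of Snell's law, assuming u and N are unit vectors with N pointing against u, and eta_i, eta_r are the refraction indices of the incident and transmitting media. Names are illustrative; total internal reflection is reported by returning None.

    import math

    def transmitted(u, N, eta_i, eta_r):
        # Unit transmission vector T, or None on total internal reflection.
        eta = eta_i / eta_r
        cos_i = -sum(ux * nx for ux, nx in zip(u, N))
        sin2_t = eta * eta * (1.0 - cos_i * cos_i)
        if sin2_t > 1.0:
            return None                      # total internal reflection
        cos_t = math.sqrt(1.0 - sin2_t)
        return tuple(eta * ux + (eta * cos_i - cos_t) * nx
                     for ux, nx in zip(u, N))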

  14. Calculating Pixel Intensity It works bottom up: the intensity at a child node is added to its parent with an attenuation based on the distance between the child's surface and the parent's surface. The intensity assigned to the pixel is the sum of the attenuated intensities at the root. If the primary ray for a pixel does not intersect any object in the scene, the ray-tracing tree is empty and the pixel is assigned the background intensity.
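
  A sketch of the bottom-up accumulation described here, reusing the node layout from the tree-building sketch above. The local_intensity(), distance(), and attenuate() helpers are assumed shading utilities, not from the slides.

    def pixel_intensity(node, background):
        # Sum the local surface intensity with the attenuated intensities
        # of the reflection (left) and transmission (right) subtrees.
        if node is None:
            return background                 # empty tree: primary ray hit nothing
        intensity = local_intensity(node["hit"])
        for child in (node["left"], node["right"]):
            if child is not None:
                d = distance(node["hit"].point, child["hit"].point)
                intensity += attenuate(pixel_intensity(child, 0.0), d)
        return intensity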

  15. Ray-Surface Intersection Calculations (Figure: a ray from the projection reference point through a pixel point into the scene.)

  16. Ray tracing is computationally intensive, requiring root finding to obtain the value of the parameter s at the ray-surface incidence point. Efficient techniques exist for spherical, planar, and spline surfaces. Ray-Sphere Intersections

  17. If the discriminant is negative, the ray does not intersect the sphere; if both roots are negative, the sphere is behind P0. In either case the sphere is eliminated from further consideration.
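
  A sketch of the ray-sphere test, writing the ray as P = P0 + s·u with u a unit direction vector and the sphere given by its center Pc and radius r. A negative discriminant, or no positive root, removes the sphere from consideration, as stated above. Names are illustrative.

    import math

    def ray_sphere(P0, u, Pc, r):
        # Smallest positive parameter s at which P0 + s*u hits the sphere,
        # or None if there is no such hit.
        dP = tuple(c - p for c, p in zip(Pc, P0))        # Pc - P0
        b = sum(ux * dx for ux, dx in zip(u, dP))        # u . (Pc - P0)
        disc = b * b - sum(dx * dx for dx in dP) + r * r
        if disc < 0.0:
            return None                                  # ray misses the sphere
        root = math.sqrt(disc)
        for s in (b - root, b + root):                   # nearer hit first
            if s > 0.0:
                return s
        return None                                      # sphere behind P0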

  18. Ray-Polyhedron Intersections Computing the intersection with a polyhedron is expensive. It is useful to enclose the polyhedron in its smallest enclosing sphere and to skip all face-intersection calculations if the ray does not intersect that sphere.
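
  A sketch of the bounding-sphere rejection described here, reusing the ray_sphere() helper from the previous sketch. The polyhedron attributes, the plane-intersection helper, and point_in_face() (the odd-even test of the next slide) are illustrative assumptions.

    def ray_polyhedron(P0, u, poly):
        # Test the cheap enclosing sphere first; only on a hit, intersect the
        # ray with the planes of the polyhedron's faces and keep the nearest.
        if ray_sphere(P0, u, poly.bounding_center, poly.bounding_radius) is None:
            return None                       # rejected without any face test
        best = None
        for face in poly.faces:
            s = intersect_face_plane(P0, u, face)     # assumed helper
            if s is not None and point_in_face(P0, u, s, face):
                if best is None or s < best:
                    best = s
        return best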

  19. To determine whether the intersection point with a face's plane lies within the face itself, an odd-even test is used. Among all the faces intersected by the ray, the one with the smallest s is chosen. If there is no intersection with any face, the polyhedron is excluded from further consideration.
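
  A sketch of the odd-even test, assuming the intersection point and the face's vertices have been projected to 2D coordinates in the face's plane: a ray cast from the point crosses the polygon boundary an odd number of times exactly when the point is inside. Names are illustrative.

    def point_in_polygon(pt, vertices):
        # Odd-even rule: cast a horizontal ray from pt and count edge crossings.
        x, y = pt
        inside = False
        n = len(vertices)
        for i in range(n):
            x1, y1 = vertices[i]
            x2, y2 = vertices[(i + 1) % n]
            if (y1 > y) != (y2 > y):                       # edge spans the ray's y
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x_cross > x:                            # crossing to the right
                    inside = not inside
        return inside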

  20. Reducing Intersection Calculations A ray tracer spends roughly 95% of its time on intersection calculations. One approach is to enclose groups of adjacent objects in spheres, calculate the intersection with the enclosing sphere first, and proceed to the real objects only if an intersection exists. Another approach is space subdivision, where the entire scene is enclosed in a cube. The cube is successively divided into smaller cubes until the number of surfaces contained in a cube does not exceed a predefined limit or the cube size reaches some threshold.
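
  A sketch of the recursive space subdivision just described. The cube's split_into_octants() and the surfaces' overlaps() methods, the dictionary node layout, and both stopping parameters are illustrative assumptions.

    def subdivide(cube, surfaces, max_per_cube, min_size):
        # Stop when the cube holds few enough surfaces or becomes too small.
        if len(surfaces) <= max_per_cube or cube.size <= min_size:
            return {"cube": cube, "surfaces": surfaces, "children": []}
        children = []
        for octant in cube.split_into_octants():      # eight half-size cubes
            inside = [s for s in surfaces if s.overlaps(octant)]
            children.append(subdivide(octant, inside, max_per_cube, min_size))
        return {"cube": cube, "surfaces": [], "children": children}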

  21. Rays are traced through the individual cells; intersection tests are performed only for cells containing surfaces. The first intersected surface is the visible one for that ray.

  22. Consider the plane of the cube face on which the ray's exit point is calculated. The face divides this plane into nine sectors (laid out as 6 1 5 / 2 0 4 / 7 3 8, with the face itself as sector 0). If the exit point falls in sector 0, we are done. If it falls in sector 1, 2, 3, or 4, we know the face containing the exit point and can find it. For sectors 5, 6, 7, and 8, a further test is required.
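
  A sketch of the sector classification, assuming the exit point has been projected onto the plane of a candidate cube face with 2D coordinates (u, v) and that the face occupies [umin, umax] x [vmin, vmax]. The sector numbering follows the slide's 3x3 layout (6 1 5 / 2 0 4 / 7 3 8, top row first); this reading of the figure and all names are assumptions.

    def exit_sector(u, v, umin, umax, vmin, vmax):
        # 0: exit face found; 1-4: the neighboring face is known directly;
        # 5-8 (corner sectors): a further test against adjacent faces is needed.
        col = 0 if u < umin else (2 if u > umax else 1)   # left, inside, right
        row = 0 if v > vmax else (2 if v < vmin else 1)   # above, inside, below
        sectors = [[6, 1, 5],
                   [2, 0, 4],
                   [7, 3, 8]]
        return sectors[row][col]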
