Multiresolution in Terrain Modeling

Presentation Transcript

  1. Multiresolution in Terrain Modeling Leila De Floriani, Enrico Puppo University of Genova Genova (Italy)

  2. Outline • Generalities on Terrain Models • Data Structures • Compression Techniques • Algorithms for Terrain Generalization • Multiresolution Representations: • A general framework for multiresolution • Classification and review of multiresolution models • Applications

  3. Terrain Data • Elevation data are given at a finite set of points in a geo-referenced domain • 2D locations + elevation define a sample data set • Points can be: • regularly spaced in the x and y coordinates • irregularly distributed • Terrain data acquisition • Ground survey methods using electronic tacheometers • Photogrammetric methods working on aerial and satellite images • SAR interferometry • Digitization of existing contour maps (through contour following)

  4. Digital Terrain Model (DTM) • A terrain can be modeled in the continuum as a surface described by a function z=f(x,y) defined over a two-dimensional domain D • DTM: discrete model of a terrain based on a given set of elevation data • DTM characterized by • a subdivision of the domain into two-dimensional cells (usually with vertices at the sample points) • a collection of interpolating functions defined at the cells of the subdivision

  5. Classification of DTMs • Classification based on the topology of the underlying subdivision: • Regular Square Grids (RSGs) • Triangulated Irregular Networks (TINs)

  6. Regular Square Grids (RSGs) • The domain subdivision is a rectangular grid • For each rectangle r in the grid: • elevations of the vertices of r are interpolated by linear functions along each edge of r • a bilinear function is obtained as the tensor product of these four linear functions • Straightforward data structures and manipulation techniques • Only regularly spaced data ⇒ resampling required • Not adaptive to the characteristics of the surface ⇒ massive data sets at high resolution are necessary to achieve accuracy
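To make the interpolation scheme above concrete, here is a minimal sketch of bilinear interpolation within a single grid cell; the corner-elevation and local-coordinate names are illustrative, not taken from the slides.

def bilinear(z00, z10, z01, z11, u, v):
    """Bilinear interpolation inside one RSG cell.

    z00, z10, z01, z11: elevations at the four cell corners
    (lower-left, lower-right, upper-left, upper-right).
    u, v: local coordinates in [0, 1] along x and y.
    """
    # Linear interpolation along the two horizontal edges...
    z_bottom = (1 - u) * z00 + u * z10
    z_top    = (1 - u) * z01 + u * z11
    # ...then along y: the tensor product of the edge-wise linear functions.
    return (1 - v) * z_bottom + v * z_top

# Example: elevation at the cell center.
print(bilinear(100.0, 110.0, 105.0, 120.0, 0.5, 0.5))  # 108.75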

  7. Triangulated Irregular Networks (TINs) • The domain subdivision is a triangulation • For each triangle t, elevations of its three vertices are interpolated by a linear function • Capability of adapting to the roughness of terrain (adaptive models) • Possibility of explicitly including relevant points (peaks and pits) and lines (ridges and valleys) • Ease of update • More complex data structures and manipulation techniques

  8. Data Structures

  9. Data Structures for RSGs • The basic data structure is a rectangular matrix (raster) of elevation values: • Geographic coordinates of each entry are computed directly from its position in the matrix • Elevation is given by the value stored at the entry • More sophisticated data structures are inherited from the literature on digital images for: • Compression • Block decomposition • Hierarchical decomposition
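A small sketch of the raster access described above, with hypothetical origin (x0, y0) and spacing (dx, dy) parameters; it shows how geographic coordinates follow directly from the matrix indices (sign conventions for the row direction vary between datasets).

import numpy as np

# Hypothetical RSG: elevations stored row by row; (x0, y0) is the
# geographic position of entry (0, 0) and dx, dy the grid spacing.
dem = np.array([[100.0, 102.0, 101.0],
                [103.0, 105.0, 104.0],
                [106.0, 108.0, 107.0]])
x0, y0, dx, dy = 500000.0, 4500000.0, 30.0, 30.0

def sample(i, j):
    # Geographic coordinates are computed from the matrix indices;
    # the elevation is just the value stored at the entry.
    return x0 + j * dx, y0 + i * dy, float(dem[i, j])

print(sample(1, 2))  # (500060.0, 4500030.0, 104.0)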

  10. Data Structures for TINs Two types of information encoded: • Geometrical • position in space of the vertices • surface normals at the vertices (optional) • Topological • mesh connectivity • adjacency relations among triangles of the mesh (optional)

  11. ...Data Structures for TINs... List of triangles: • It maintains an explicit list of the triangles composing the mesh • For each triangle, it maintains its three vertices by explicitly encoding the geometrical information associated with the vertices • Connectivity described through a relation between a triangle and all its vertices • Drawback: • Each vertex is repeated for all the triangles incident on it • Storage cost: • In a triangle mesh with n vertices, there are about 2n triangles • Total cost of the data structure: about 18n floats, if the geometric information associated with a vertex is just its position in space

  12. ...Data Structures for TINs... Indexed data structure: • List of vertices + list of triangles + relation between triangles and vertices • For each vertex: its geometrical information • For each triangle: references to its three vertices • Drawback: • No adjacency information • Storage cost: 6n log n bits + 3n floats (cost of storing geometrical information) since a vertex reference for a triangle requires log n bits

  13. ...Data Structures for TINs... Indexed data structure with adjacencies: • list of vertices (with their geometrical information) + list of triangles • for each triangle: references to its three vertices + references to its three adjacent triangles • Storage cost: (12n log n + 6n) bits + 3n floats (cost of storing geometrical information), since each triangle reference requires (log n + 1) bits • Example (from the figure): triangle t has vertices (P1, P2, P3) and adjacent triangles (t1, t2, t3)
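A minimal sketch of the indexed data structure with adjacencies (dropping the adjacency field gives the plain indexed structure of the previous slide); the field names and the -1 sentinel for missing neighbours are illustrative choices, not prescribed by the slides.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TIN:
    # Geometrical information: one (x, y, z) position per vertex.
    vertices: List[Tuple[float, float, float]] = field(default_factory=list)
    # Connectivity: the three vertex indices of each triangle.
    triangles: List[Tuple[int, int, int]] = field(default_factory=list)
    # Optional adjacency: for each triangle, the indices of its three
    # neighbouring triangles (-1 where there is no neighbour).
    adjacents: List[Tuple[int, int, int]] = field(default_factory=list)

# The single triangle t of the slide: vertices P1, P2, P3 and neighbours
# t1, t2, t3 (here -1, since no other triangles are stored in this toy example).
tin = TIN(vertices=[(0.0, 0.0, 10.0), (1.0, 0.0, 12.0), (0.5, 1.0, 15.0)],
          triangles=[(0, 1, 2)],
          adjacents=[(-1, -1, -1)])
print(tin.triangles[0], tin.adjacents[0])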

  14. ...Data Structures for TINs... Comparison of the three data structures: n = number of vertices, with one float = 32 bits • list of triangles: 18n floats • if n = 2^16 ==> 18*2^16 floats = 576*2^16 bits • indexed data structure: 6n log n bits + 3n floats • if n = 2^16 ==> 96*2^16 bits + 3*2^16 floats = 192*2^16 bits • indexed data structure with adjacencies: (12n log n + 6n) bits + 3n floats • if n = 2^16 ==> 198*2^16 bits + 3*2^16 floats = 294*2^16 bits
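The comparison can be reproduced with a few lines of arithmetic; this sketch assumes 32-bit floats and ceil(log2 n)-bit vertex references, as in the slide, and is only a convenience for checking the numbers.

import math

def costs_in_bits(n, float_bits=32):
    """Connectivity + geometry storage costs for a TIN with n vertices
    (about 2n triangles), for the three data structures of the slides."""
    log_n = math.ceil(math.log2(n))            # bits of one vertex reference
    # 2n triangles * 3 vertices/triangle * 3 coordinates, each stored as a float.
    list_of_triangles = 18 * n * float_bits
    # 2n triangles * 3 vertex references + 3n floats of geometry.
    indexed = 6 * n * log_n + 3 * n * float_bits
    # Same, plus 2n triangles * 3 adjacent-triangle references of (log n + 1) bits.
    indexed_adj = (12 * n * log_n + 6 * n) + 3 * n * float_bits
    return list_of_triangles, indexed, indexed_adj

n = 2 ** 16
names = ("list of triangles", "indexed", "indexed with adjacencies")
for name, bits in zip(names, costs_in_bits(n)):
    print(f"{name:26s} {bits // n:4d} bits/vertex = {bits // n} * 2^16 bits total")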

  15. Compression Techniques

  16. Why Terrain Compression? • Availability of large terrain datasets in Geographic Information Systems • Need for: • Faster transmission of terrain models • Faster I/O of terrain models from/to disk • Enhanced rendering performance: limitations on on-board memory and on data transfer speed • Lower costs of memory and of auxiliary storage • Objective: • design compact structures for encoding a terrain model as a sequence of bits (bitstream)

  17. ...Why Terrain Compression?... • Compression methods for TINs aimed at two complementary tasks: • Compression of geometry: efficient encoding of numerical information attached to the vertices, i.e., position, surface normal, color, texture parameters • Compression of mesh connectivity: efficient encoding of the mesh topology

  18. Compression of Geometry [Deering, 1995; Chow, 1997] • Positions, normals and scalar attributes quantized to significantly fewer than 32 bits (single-precision IEEE floating point) with little loss in accuracy • Example: quantization of position information: • Geometry normalized to a unit cube • Positions quantized by truncating the least significant bits of the position components • Optimizations: • Positions are "delta-encoded": just the difference between a vertex position and that of its predecessor in the bitstream is encoded • Quantization assignment can be done by partitioning the triangle mesh into parts of similar detail, based on triangle size and curvature
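A hedged sketch of the quantization and delta-encoding steps described above (normalization to the unit cube, truncation to a fixed number of bits, differences between consecutive vertices in the stream); it illustrates the idea only and is not Deering's or Chow's actual encoder.

import numpy as np

def quantize_positions(points, bits=12):
    """Normalize positions to the unit cube, quantize each coordinate to
    `bits` bits, and delta-encode every vertex against its predecessor."""
    pts = np.asarray(points, dtype=np.float64)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    scale = (2 ** bits - 1) / np.maximum(hi - lo, 1e-12)
    q = np.round((pts - lo) * scale).astype(np.int64)   # integer grid positions
    deltas = np.vstack([q[:1], np.diff(q, axis=0)])     # small values, cheap to entropy-code
    return deltas, lo, 1.0 / scale

def dequantize(deltas, lo, step):
    q = np.cumsum(deltas, axis=0)      # undo the delta encoding
    return q * step + lo               # back to model coordinates

pts = [(10.0, 20.0, 1.5), (10.1, 20.2, 1.6), (10.3, 20.1, 1.4)]
deltas, lo, step = quantize_positions(pts)
print(np.abs(dequantize(deltas, lo, step) - np.asarray(pts)).max())  # quantization error only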

  19. Compression of Connectivity Two kinds of compression techniques: • Direct techniques: Goal: minimize the number of bits needed to encode connectivity • Progressive techniques: Goal: even a truncated (interrupted) bitstream provides a description of the whole terrain, at a lower level of detail

  20. Direct Methods Compression methods for rendering: • Triangle strips (and triangle fans) used in graphics APIs (e.g., OpenGL) • Generalized triangle meshes [Deering, 1995; Evans et al., 1996; Chow, 1997; Bar-Yehuda and Gotsman, 1996] • Topological surgery [Taubin and Rossignac, 1996] A compression method for transmission: • Sequence of triangles in a shelling order [De Floriani, Magillo and Puppo, 1998]

  21. ...Direct Methods... Triangle Strips • Each strip is a sequence of vertices • Each triangle in a strip has its vertices at three consecutive positions • A TIN is encoded as a collection of strips • Drawbacks: • Each vertex is encoded twice on average • It is difficult to obtain a small number of long strips [Evans et al., 1996]
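A small sketch of how a strip of vertex indices is expanded back into triangles; the alternating-orientation rule used here is the usual OpenGL-style convention and is an assumption of this sketch, not something the slide specifies.

def strip_to_triangles(strip):
    """Expand a triangle strip (sequence of vertex indices) into triangles,
    swapping two vertices in every second triangle to keep a consistent winding."""
    triangles = []
    for i in range(len(strip) - 2):
        a, b, c = strip[i], strip[i + 1], strip[i + 2]
        triangles.append((a, b, c) if i % 2 == 0 else (b, a, c))
    return triangles

# A 7-vertex strip encodes 5 triangles.
print(strip_to_triangles([1, 2, 3, 4, 5, 6, 7]))
# [(1, 2, 3), (3, 2, 4), (3, 4, 5), (5, 4, 6), (5, 6, 7)]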

  22. ...Direct Methods... Generalized Triangle Meshes [Deering, 1995] • Sequence of vertices with alternating strip-like (zig-zag) and fan-like (turning) behavior • Behavior at each vertex specified by a bit code • A small buffer allows some past vertices to be reused (indexing) • A TIN is encoded as a collection of generalized strips • Cost: ~11 bits per vertex for connectivity

  23. ...Direct Methods... Topological Surgery [Taubin and Rossignac, 1996] • It cuts the mesh open into a connected set of triangles shaped as a tree (triangle spanning tree) • The edges along which the mesh is cut form another tree (vertex spanning tree) • The bitstream produced by the method contains the two trees • Compression/decompression algorithms are rather complicated

  24. ...Direct Methods... A Compression Method based on Shelling [De Floriani, Magillo and Puppo, 1998] • Based on a shelling order: a sequence of all the triangles in the mesh with the property that the boundary of the set of triangles corresponding to any proper subsequence forms a simple polygon • Encoding: four 2-bit codes (SKIP, VERTEX, LEFT, RIGHT), one code emitted per edge examined

  25. ...Direct Methods... ...A Compression Method based on Shelling... • Initialize the current polygon at an arbitrary triangle of the TIN • Loop on the edges of the current polygon and, for each edge e, try to add the triangle t externally adjacent to e: • if t contains a new vertex, send a VERTEX code and the vertex information • if t is bounded by the edge either preceding or following e on the current polygon, then send either a LEFT or a RIGHT code • if t either cannot be added or does not exist, then send a SKIP code
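A simplified sketch of the per-edge decision rule above. It only classifies one candidate triangle; the full method must also check that adding the triangle keeps the boundary a simple polygon, and which of the two neighbouring edges is called LEFT or RIGHT is an assumption of this sketch.

def classify(polygon, i, third_vertex):
    """Decision rule for one boundary edge e = (polygon[i], polygon[i+1])
    of the current polygon.  third_vertex is the vertex of the externally
    adjacent triangle t that is not on e, or None if t does not exist
    (e lies on the TIN boundary)."""
    n = len(polygon)
    if third_vertex is None:
        return "SKIP"                        # no external triangle to add
    if third_vertex not in polygon:
        return "VERTEX"                      # t brings in a new vertex
    if third_vertex == polygon[(i - 1) % n]:
        return "LEFT"                        # t is bounded by the edge preceding e
    if third_vertex == polygon[(i + 2) % n]:
        return "RIGHT"                       # t is bounded by the edge following e
    return "SKIP"                            # t cannot be added at this point

polygon = [4, 7, 9, 2]            # current polygon as a cyclic list of vertex indices
print(classify(polygon, 0, 11))   # VERTEX: 11 is not yet on the polygon
print(classify(polygon, 0, 9))    # RIGHT: 9 closes t with the edge following (4, 7)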

  26. ...Direct Methods...

  27. ...Direct Methods... • Properties of the Shelling Method: • Every vertex is encoded only once • Each edge is examined at most once • Compression and decompression algorithms: • fast • no numerical computations • conceptually simple and easy to implement • Adjacencies between triangles are reconstructed directly from the sequence at no additional cost • Cost: • in theory: at most 6 bits of connectivity per vertex • in practice: less than 4.5 bits of connectivity per vertex

  28. ...Direct Methods... Experimental Results:

  Exp   #vert   #tri    #code bits   compression (bits/vert)   time in s (tri/s)
  U1    42943   85290   182674       4.2538                    1.644 (51879)
  U2    28510   56540   123086       4.3173                    1.077 (52483)
  U3    13057   25818    57316       4.3897                    0.479 (53899)
  U4     6221   12240    27180       4.3690                    0.215 (56930)

  29. Progressive Compression • Efficient encoding of the mesh produced by a simplification algorithm • A sequence of progressive LODs generated by iteratively applying a destructive operator which removes details from a mesh • An inverse constructive operator recovers details • Encoding: • coarsest mesh produced in the simplification process + sequence of construction operations

  30. ...Progressive Compression... • Each LOD can be seen as a form of lossy compression of the original mesh • There is a trade-off between loading/transmission times and loss of detail • Compression rates are usually lower than those achieved by direct techniques

  31. ...Progressive Compression... Progressive Compression Methods • Progressive meshes [Hoppe, 1996] • destructive operator = edge collapse • Sequence of edge swaps [De Floriani, Magillo and Puppo, 1998] • destructive operator = vertex removal • Sequence of ordered vertex sequences [Snoeyink and van Kreveld, 1997] • destructive operator = removal of a set of vertices

  32. ...Progressive Compression... Progressive Meshes [Hoppe, 1996] • Edge collapse: • replace an edge e with a vertex v1, and the two triangles sharing e with two edges incident at v1 • Vertex split (inverse operation): • expand a vertex v1 into an edge e = v1v2, and expand two of the edges incident at v1 (e1 and e2) into two triangles
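A minimal sketch of the edge-collapse operator on an indexed triangle list. It is a half-edge collapse that moves v2 onto v1; Hoppe's progressive meshes may also reposition the surviving vertex, which is omitted here.

def edge_collapse(triangles, v1, v2):
    """Collapse the edge (v1, v2): every occurrence of v2 is replaced by v1,
    and the (typically two) triangles sharing the edge become degenerate
    and are dropped."""
    new_triangles = []
    for tri in triangles:
        tri = tuple(v1 if v == v2 else v for v in tri)
        if len(set(tri)) == 3:          # drop triangles that collapsed onto an edge
            new_triangles.append(tri)
    return new_triangles

# The two triangles sharing edge (1, 2) disappear; the others are re-wired to vertex 1.
tris = [(0, 1, 2), (1, 3, 2), (0, 2, 4), (1, 5, 3)]
print(edge_collapse(tris, v1=1, v2=2))  # [(0, 1, 4), (1, 5, 3)]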

  33. ...Progressive Compression... Progressive Meshes [Hoppe, 1996] • Encoding: • new vertex v2 • reference to v1 • code specifying the position of e1 and e2 in the set of edges incident at v1 • Properties: • suitable to support geomorphing • Cost: • n(log n + log(b(b-1))) bits of connectivity, where b = maximum degree of a vertex at any step • for instance, for n=2^16 and b=2^3 ==> about 21.8*2^16 bits of connectivity

  34. ...Progressive Compression... Sequence of Edge Swaps [De Floriani, Magillo, Puppo, 1998] • Based on the iterative removal of a vertex of bounded degree (less than a constant b) selected according to an error-based criterion: • The vertex which causes the smallest loss of accuracy (least relevant detail) is always selected • The polygonal hole P left by removing vertex v is re-triangulated • The inverse constructive operator inserts vertex v and recovers the previous triangulation of P

  35. ...Progressive Compression... Sequence of Edge Swaps • The old triangulation T is recovered from the new one T' by first splitting the triangle t of T' that contains vertex v, and then applying a sequence of edge swaps
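A sketch of a single edge swap on two triangles that share an edge, the basic operation replayed during reconstruction; orientation bookkeeping is omitted, and the tuple-based representation is an illustrative choice.

def swap_edge(t1, t2):
    """Swap the edge shared by triangles t1 and t2 (each a 3-tuple of vertex
    indices): the common edge is replaced by the edge joining the two
    opposite vertices."""
    shared = set(t1) & set(t2)
    assert len(shared) == 2, "triangles must share exactly one edge"
    a, b = sorted(shared)
    c = next(v for v in t1 if v not in shared)   # vertex of t1 opposite the edge
    d = next(v for v in t2 if v not in shared)   # vertex of t2 opposite the edge
    return (a, c, d), (b, c, d)                  # the two new triangles on edge (c, d)

print(swap_edge((0, 1, 2), (1, 0, 3)))   # ((0, 2, 3), (1, 2, 3))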

  36. ...Progressive Compression... Sequence of Edge Swaps • Information encoded for each removed vertex v: • a vertex w and an index indicating a triangle around w (together they identify the triangle t of T' containing v) • the packed sequence of edge swaps which generates T from T' • Example (from the figure): Vertex: w, Triangle index: 0, Swap sequence: 2 0 2

  37. ...Progressive Compression... Sequence of Edge Swaps • Cost: • n(log n + log b + log((b-1)!) - 1) bits of connectivity information • for instance, for n=2^16 and b=2^3 ==> about 31.4*2^16 bits of connectivity • Properties: • The criterion used in the re-triangulation is encoded in the sequence of swaps: more general than other progressive methods • Suitable to encode TINs based on Delaunay triangulations, data dependent triangulations, constrained triangulations, etc.

  38. ...Progressive Compression... Snoeyink and van Kreveld's Method • It applies to TINs based on Delaunay triangulation • LOD generation process: at each step, a maximal set of independent vertices (i.e., vertices which are not connected by an edge) of bounded degree is removed • The process of removing a set of vertices terminates in a logarithmic number of steps

  39. ...Progressive Compression... Snoeyink and van Kreveld's Method • Encoding: • the vertices removed at the same step form a sorted sequence • each sequence terminates with an end-of-phase code • Cost: at most log 2n bits to encode connectivity • Compression and decompression methods are quite involved and require heavy numerical computations (a Delaunay triangulation must be computed when decompressing) • Only suitable for Delaunay triangulations

  40. Algorithms for Terrain Generalization

  41. Why Terrain Generalization? • Terrain models at high resolution are often too large to be processed: • Models must fit into computer memory • Processing time must be reasonable • Performance may be critical: • Terrain visualization needs real time rendering • Some tasks in terrain analysis are computationally expensive (e.g., watershed, viewshed) • Not all tasks need a model at the same accuracy

  42. ...Why Terrain Generalization?... • More accurate representation ⇒ more cells • More cells ⇒ higher storage requirements and processing time • Objective of generalization: find an optimal trade-off between the size of a model and its accuracy

  43. Terrain generalization • Problem Statement: given a terrain model [either an RSG or a TIN], find a TIN representing the same terrain, having smaller size and a small approximation error • Major issues: • size / accuracy ratio • shape of the cells in the generalized model (no slivers!)

  44. Approximation error • Approximation of a terrain with a TIN: each triangle approximates a portion of the terrain within a given accuracy • Given a sample data set, each triangle of the TIN approximates the portion of terrain spanned by the points whose projections lie within the projection of the triangle • The error at a point is measured by the distance between the point and its vertical projection onto the triangle

  45. ...Approximation error... • Errors at all points in the sample data set are combined to define the accuracy of approximation • Different combinations may be used for different needs: • Maximum error over all points • Average error • Root Mean Square error • Other criteria may be used: difference in volume, curvature, surface normal, etc...
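A sketch of how per-point vertical errors can be computed against one TIN triangle and then combined into maximum and root mean square error; it assumes the sample points project inside the triangle, and the helper names are illustrative.

import numpy as np

def vertical_errors(triangle, points):
    """Vertical distances between sample points and the linear interpolant
    of one TIN triangle.  triangle: three (x, y, z) vertices;
    points: sequence of (x, y, z) samples projecting inside the triangle."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = triangle
    pts = np.asarray(points, dtype=float)
    # Solve z = a*x + b*y + c from the three vertices (the linear interpolant).
    A = np.array([[x1, y1, 1.0], [x2, y2, 1.0], [x3, y3, 1.0]])
    a, b, c = np.linalg.solve(A, np.array([z1, z2, z3]))
    predicted = a * pts[:, 0] + b * pts[:, 1] + c
    return np.abs(pts[:, 2] - predicted)

tri = [(0, 0, 0.0), (1, 0, 1.0), (0, 1, 2.0)]
samples = [(0.25, 0.25, 0.9), (0.5, 0.25, 1.1)]
err = vertical_errors(tri, samples)
print(err.max(), np.sqrt((err ** 2).mean()))   # maximum and root mean square error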

  46. Triangle quality measures • A TIN should have triangles that are as compact as possible • Elongated triangles (slivers) cause numerical errors and unpleasant visual effects • Compactness of a triangle [Gueziec, 1995]: c = 4√3·A / (l0² + l1² + l2²), where A is the area and l0, l1, l2 are the edge lengths • Compactness is 1 for an equilateral triangle and decreases as the triangle becomes elongated
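A small sketch of this compactness measure, using the 4√3·A / (l0² + l1² + l2²) form reconstructed above (the original slide gives only its properties; this form does evaluate to 1 for an equilateral triangle, as stated).

import math

def compactness(p0, p1, p2):
    """Compactness 4*sqrt(3)*A / (l0^2 + l1^2 + l2^2) of a triangle with
    2D vertices p0, p1, p2: 1 for an equilateral triangle, approaching 0
    as the triangle degenerates into a sliver."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    area = abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2.0
    sq_edges = ((x1 - x0) ** 2 + (y1 - y0) ** 2 +
                (x2 - x1) ** 2 + (y2 - y1) ** 2 +
                (x0 - x2) ** 2 + (y0 - y2) ** 2)
    return 4.0 * math.sqrt(3.0) * area / sq_edges

print(compactness((0, 0), (1, 0), (0.5, math.sqrt(3) / 2)))  # ~1.0 (equilateral)
print(compactness((0, 0), (1, 0), (0.5, 0.01)))              # close to 0 (sliver)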

  47. Generalization goals Two alternative optimization problems can be defined: • Min-#: build the approximation of minimum size (e.g., minimum number of vertices) which satisfies a given error bound • accuracy is a constraint • size is the goal • Min-e: build an approximation of minimum error having a given size • size is a constraint • accuracy is the goal

  48. Theoretical results • Optimization problems Min-# and Min-e are NP-hard [Agarwal and Suri, 1995] • There exist only a few approximation algorithms for problem Min-# that guarantee a bound on the size of the solution with respect to the optimal one [Agarwal and Desikan, 1997]: • Algorithms are difficult to code and computationally expensive • Sub-optimality bounds are coarse • Empirical results are not substantially better than those obtained with heuristics

  49. Practical methods All practical methods are based on heuristics: • Selection of point features [Fowler and Little, 1979; Chen and Guevara, 1989; Scarlatos, 1990] • Iterative refinement of a TIN [Franklin, 1973; Fowler and Little, 1979; De Floriani et al., 1984; De Floriani et al., 1985; Rippa, 1992] • Iterative simplification of a TIN [Lee, 1989; de Berg and Dobrindt, 1995; Hoppe, 1996] • Methods based on greedy construction of a TIN [Silva et al., 1995]

  50. Methods based on selection of points • General approach: • Extract a subset of the sample data set formed by points that are likely to be relevant to describe the morphology of the terrain • Compute a TIN having its vertices at the selected points • Topographic features [Fowler and Little, 1979]: • Surface-specific points at peaks, pits, ridges, valleys and passes are extracted from an input RSG using a local method [Peucker and Douglas, 1975] • Points along linear features (ridges and valleys) are organized into chains, which are simplified by a line-thinning method [Douglas and Peucker, 1973] • A Delaunay triangulation of the selected points is computed
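A sketch of the last step only: computing a Delaunay triangulation of points that have already been selected. scipy.spatial.Delaunay is used purely as a convenience here (the slides do not prescribe an implementation), and the feature-extraction and line-thinning steps are not shown; the point set below is hypothetical.

import numpy as np
from scipy.spatial import Delaunay

# Hypothetical selected points: (x, y) positions of surface-specific features
# (peaks, pits, points kept along thinned ridge/valley lines) and their elevations.
xy = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0], [4.0, 6.0]])
z = np.array([5.0, 7.0, 9.0, 6.0, 12.0])

tin = Delaunay(xy)                    # Delaunay triangulation of the 2D projections
for tri in tin.simplices:             # each row holds three vertex indices
    print(tri, z[tri])                # a TIN triangle and its vertex elevations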
