
Tracking Turbulent 3D Features

This paper discusses the application of visualization techniques in tracking turbulent 3D features, such as storms, hurricanes, ocean waves, and clouds. It covers segmentation, region growing, feature extraction, classification, and feature tracking methods. The goal is to identify and analyze observed phenomena in scientific simulations or practical circumstances.


Presentation Transcript


  1. Tracking Turbulent 3D Features Lu Zhang Nov. 10, 2005

  2. Motivations • Introduction Visualization techniques help scientists identify observed phenomena in both scientific simulations and real-world settings. • Applications Storms, hurricanes, ocean waves, clouds… Common features: • multiple evolutions • time-varying • huge datasets • non-rigid

  3. Outline • Segmentation and Region growing • Thresholding • Region growing • Feature extraction • Different features • Classification and Feature tracking • Tracking methods • Classes and structures

  4. Overview • The original dataset • Flowchart and modules: Input images → Segmentation → Feature extraction (basic features) → Classification (classes) → Graph building (directed acyclic graph)

  5. Segmentation and Region growing • Thresholding Global thresholding vs. optimal thresholding • Region growing method Iterative region growing method [1]
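The slide names the iterative method of [1] but gives no detail; the sketch below shows plain seeded region growing with an intensity tolerance, which is a simpler stand-in for that method. The seed point, the 4-connectivity, and the `tol` cutoff are all illustrative assumptions, not values from the slides.

```python
from collections import deque

import numpy as np


def region_grow(image, seed, tol=10):
    """Grow a region from `seed`, adding 4-connected pixels whose
    intensity lies within `tol` of the seed intensity.

    A minimal illustration of region growing; the iterative
    approach of [1] is more elaborate.
    """
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(image[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```

Each connected region found this way becomes one record carrying the basic features (timeID, viewID, x, y, R, G, B) listed on the next slide.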

  6. Segmentation and Region growing • Region growing • Basic features: timeID, viewID, x, y, R, G, B

  7. Feature extraction • Feature structure After gaining region information from the segmentation stage, we can browse each region to find basic features: • Area – the count of all pixels in the region. • Center of gravity – the centroid of all points in the region. • Diameter – the maximum distance between any two points on the boundary of the region. • Perimeter – the number of pixels under each edge label. • Fourier descriptors – the Fourier transform of the boundary points.
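The first four features above can be sketched directly from a boolean region mask. This is an illustrative implementation, not the project's code: it estimates the perimeter by counting boundary pixels (the slide's edge-label counting may differ) and omits the Fourier descriptors.

```python
import numpy as np


def basic_features(mask):
    """Compute area, center of gravity, diameter, and perimeter
    for a boolean region mask (Fourier descriptors omitted)."""
    ys, xs = np.nonzero(mask)
    area = len(ys)
    center = (ys.mean(), xs.mean())
    # Boundary pixels: region pixels with at least one background
    # 4-neighbor; counting them gives a simple perimeter estimate.
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = mask & ~interior
    perimeter = int(boundary.sum())
    # Diameter: maximum pairwise distance between boundary points.
    by, bx = np.nonzero(boundary)
    pts = np.stack([by, bx], axis=1).astype(float)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    diameter = float(np.sqrt(d2.max()))
    return area, center, diameter, perimeter
```

The all-pairs diameter computation is O(n²) in the number of boundary points, which is acceptable for the per-region sizes implied here.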

  8. Feature extraction • Output from the Feature extraction module: viewID, mx, my, areas, labeling, timeID, …

  9. Classification / Feature tracking • Classification After the feature extraction module, we obtain a list of feature information for each region in the different views. • One assumption Because the views are in strict time order, we can assume that a pair of consecutive views does not differ too much.

  10. Classification /Feature tracking • Evolution in time-varying images There are five possible changes of regions between a pair of views. • Continuation: a feature continues from the dataset at t1 to the next dataset at t2. • Creation: a new feature appears in t2. • Dissipation: a feature weakens and becomes part of the background. • Bifurcation: a feature in t1 separates into two or more features in t2. • Amalgamation: two or more features merge from one time step to the next.
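Once correspondences between t1 and t2 features are known, the five event types fall out of counting each feature's successors and predecessors. The sketch below assumes a hypothetical link list of `(id_t1, id_t2)` pairs, with `None` marking a feature that has no counterpart; the slides do not specify this representation.

```python
from collections import defaultdict


def classify_events(links):
    """Classify region evolution between two time steps from
    correspondence links (id_t1, id_t2); None marks no match."""
    fwd, bwd = defaultdict(set), defaultdict(set)
    for a, b in links:
        if a is not None and b is not None:
            fwd[a].add(b)
            bwd[b].add(a)
        elif a is not None:
            fwd[a]   # touched but empty: candidate dissipation
        elif b is not None:
            bwd[b]   # touched but empty: candidate creation
    events = {}
    for a, succ in fwd.items():
        if not succ:
            events[('t1', a)] = 'dissipation'
        elif len(succ) > 1:
            events[('t1', a)] = 'bifurcation'
        else:
            b = next(iter(succ))
            events[('t1', a)] = ('amalgamation' if len(bwd[b]) > 1
                                 else 'continuation')
    for b, pred in bwd.items():
        if not pred:
            events[('t2', b)] = 'creation'
    return events
```

A t1 feature with exactly one successor is a continuation unless that successor also absorbs other t1 features, in which case the pair belongs to an amalgamation.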

  11. Classification /Feature tracking • Classification Several pattern recognition methods can be used here, e.g. • Euclidean distance classifier: match features by the minimum Euclidean distance between their feature vectors. • KNN classifier: find the K-nearest-neighbor feature clusters in dataset t1 and dataset t2.
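A minimal sketch of the Euclidean-distance matching, under the assumption that each feature is summarized by a numeric vector such as (mx, my, areas) from the feature extraction output; the `max_dist` rejection cutoff is an invented parameter, not a value from the slides.

```python
import numpy as np


def match_features(feats_t1, feats_t2, max_dist=50.0):
    """Match each t1 feature to its Euclidean nearest neighbor in
    t2; pairs farther apart than `max_dist` stay unmatched."""
    a = np.asarray(feats_t1, dtype=float)
    b = np.asarray(feats_t2, dtype=float)
    # Pairwise Euclidean distances between all feature vectors.
    dists = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    matches = {}
    for i in range(len(a)):
        j = int(dists[i].argmin())
        if dists[i, j] <= max_dist:
            matches[i] = j
    return matches
```

This relies on the slide's assumption that consecutive views differ little, so a feature's nearest neighbor in feature space is usually its true continuation; a KNN variant would keep the K closest candidates instead of only the single nearest.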

  12. Classification /Feature tracking • Output from the Classification module I created a new class, LabelTrack, to preserve the output dataset from the Classification module. It preserves the following information: • viewID: the camera position; we move the camera around the object in order to reconstruct the 3D object. • timeID: the time order; for each camera position we take several time-varying images. • classID: the class number after correspondence computation between a pair of images in time order. • Label: the original region number before correspondence computation. • R, G, B: the color information for each pixel. • Coordinates x, y: the 2D coordinates of the projection of the 3D object. • Forward pointer: preserves the labeling information of the previous dataset. • Backward pointer: preserves the labeling information of the next dataset.
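The LabelTrack record could be sketched as a dataclass. The field names follow the slide; the types and the pointer representation (direct object references) are assumptions, since the slides do not give the actual declaration.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class LabelTrack:
    """Sketch of the LabelTrack record from the slide; types and
    defaults are assumptions, field names follow the slide."""
    viewID: int    # camera position around the object
    timeID: int    # time order within one camera position
    classID: int   # class number after correspondence computation
    label: int     # original region number before correspondence
    rgb: tuple     # (R, G, B) color information
    xy: tuple      # 2D coordinate of the 3D object's projection
    forward: Optional["LabelTrack"] = None   # labeling of previous dataset
    backward: Optional["LabelTrack"] = None  # labeling of next dataset
```

Following the forward/backward pointers across time steps yields exactly the directed acyclic graph that the graph-building module of the overview slide constructs.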

  13. Computation Time • The importance of computation time Size of the dataset: 512×512×24×40 (time orders) × N (camera positions). In [5], the resolution is 128^3 with a computation time of 40 minutes. My implementation takes 3 minutes for 512×512×24×40. Because this is the framework of the whole project, there are many I/O operations used to inspect temporary results. My goal is 1 minute per camera position in the final version.

  14. REFERENCES • [1] W. E. Snyder and J. Cowart, "An Iterative Approach to Region Growing," IEEE Transactions on PAMI, 1983. • [2] Wesley E. Snyder and Hairong Qi, Machine Vision, Cambridge University Press. • [3] Richard O. Duda, Peter E. Hart, and David G. Stork, Pattern Classification, Prentice Hall. • [4] Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd ed., Prentice Hall. • [5] D. Silver and Xin Wang, "Volume Tracking," Proceedings of Visualization '96, Oct. 27–Nov. 1, 1996.

  15. Thanks Any questions?
