Semantic feature analysis in raster maps - PowerPoint PPT Presentation



Semantic feature analysis in raster maps

Trevor Linton, University of Utah

Acknowledgements
  • Thomas Henderson
  • Ross Whitaker
  • Tolga Tasdizen
  • The support of IAVO Research, Inc. through contract FA9550-08-C-005.
Field of Study
  • Geographical Information Systems
    • Part of Document Recognition and Registration.
  • What are USGS Maps?
    • A set of roughly 55,000 images at 1:24,000 scale covering the U.S., containing a wealth of data.
  • Why study it?
    • To extract new information (features) from USGS maps and register information with existing G.I.S and satellite/aerial imagery.
Problems
  • Degradation and scanning produces noise.
  • Overlapping features cause gaps.
  • Metadata has the same texture as features.
  • Closely grouped features make discerning between features difficult.
Problems – Noisy Data

Scanning artifact which introduces noise

Problems – Overlapping Features

Metadata and Features overlap with similar textures. Gaps in data.

Problems – Closely Grouped Features

Closely grouped features make discerning features difficult.

Thesis & Goals
  • Using Gestalt principles to extract features and overcome some of the problems described.
  • Quantitatively extract 95% recall and 95% precision for roads.
  • Quantitatively extract 99% recall and 90% precision for intersections.
  • Current best method produces 75% recall and 84% precision for intersections.
Approach
  • Gestalt Principles
    • Organizes perception, useful for extracting features.
    • Law of Similarity
    • Law of Proximity
    • Law of Continuity
Approach – Gestalt Principles
  • Law of Similarity
    • Grouping of similar elements into whole features.
    • Reinforced with histogram models.
Approach – Gestalt Principles
  • Law of Proximity
    • Spatial proximity of elements groups them together.
    • Reinforced through the Tensor Voting System.
Approach – Gestalt Principles
  • Law of Continuity
    • Features with small gaps should be viewed as continuous.
    • Idea of multiple layers of features that overlap.
    • Reinforced by the Tensor Voting System.
Pre-Processing
  • Class Conditional Density Classifier
    • Uses statistical means and histogram models.
    • μk = histogram model vector (mean) for class k.
    • Compute δk = ‖x − μk‖ for each class; the class k with the smallest δk is the class of x.
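A minimal sketch of this nearest-mean rule; the class names and 3-bin histogram vectors below are illustrative assumptions, and plain Euclidean distance stands in for the classifier's distance δ:

```python
import numpy as np

def classify_nearest_mean(x, class_means):
    """Assign x to the class whose histogram-model mean is closest.

    x           : histogram (feature) vector for one pixel/window
    class_means : dict mapping class name -> mean histogram vector mu_k
    """
    # delta_k = Euclidean distance between x and each class mean mu_k
    deltas = {k: np.linalg.norm(np.asarray(x) - np.asarray(mu))
              for k, mu in class_means.items()}
    # The class with the smallest delta wins.
    return min(deltas, key=deltas.get)

# Toy 3-bin histogram models for two hypothetical classes.
means = {"road": [0.8, 0.1, 0.1], "background": [0.1, 0.1, 0.8]}
label = classify_nearest_mean([0.7, 0.2, 0.1], means)  # a road-like histogram
```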
Pre-Processing
  • k-Nearest Neighbors
    • Uses the class that occurs most often among the k closest neighbors in the histogram model.
    • Closeness is defined by the Euclidean distance between histogram models.
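The k-Nearest Neighbors rule can be sketched the same way; the toy 2-bin samples and labels below are invented for illustration:

```python
import numpy as np

def knn_classify(x, samples, labels, k=10):
    """k-Nearest Neighbors over histogram models.

    Closeness is the Euclidean distance between histogram vectors;
    the most frequent label among the k closest samples wins.
    """
    x = np.asarray(x, dtype=float)
    dists = [np.linalg.norm(x - np.asarray(s, dtype=float)) for s in samples]
    nearest = np.argsort(dists)[:k]          # indices of the k closest samples
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)  # majority vote

samples = [[1, 0], [0.9, 0.1], [0, 1], [0.1, 0.9], [0.2, 0.8]]
labels = ["road", "road", "text", "text", "text"]
result = knn_classify([0.8, 0.2], samples, labels, k=3)
```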
Pre-Processing
  • Knowledge Based Classifier
    • Uses logic that is based on our knowledge of the problem to determine classes.
    • Based on information on the textures each class has.
Pre-Processing
  • Original Image with Features Estimated
Pre-Processing
  • Original Image with Roads Extracted

Class conditional classifier k-Nearest Neighbors Knowledge Based

Tensor Voting System
  • Uses the idea of “voting”
    • Each point in the image is a tensor.
    • Each point votes on how nearby points should be oriented.
  • Uses tensors as mathematical representations of points.
    • Tensors describe the direction of the curve.
    • Tensors represent confidence that the point lies on a curve or at a junction.
    • Tensors encode a saliency: the confidence that the feature (curve or junction) actually exists.
Tensor Voting System
  • What is a tensor?
    • A symmetric 2x2 matrix built from two orthogonal unit vectors: T = λ1 e1 e1ᵀ + λ2 e2 e2ᵀ, with λ1 ≥ λ2 ≥ 0.
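A tensor of this kind can be assembled from outer products. The function name and the eigenvalue convention (λ1 ≥ λ2, with e1 the dominant direction) follow the standard tensor-voting formulation, not code from the thesis:

```python
import numpy as np

def make_tensor(e1, lam1, lam2):
    """Build a 2x2 second-order tensor from two orthogonal unit vectors.

    e1   : dominant eigenvector
    lam1 : eigenvalue along e1 (lam1 >= lam2 >= 0)
    lam2 : eigenvalue along the orthogonal direction e2
    """
    e1 = np.asarray(e1, dtype=float)
    e1 = e1 / np.linalg.norm(e1)
    e2 = np.array([-e1[1], e1[0]])     # orthogonal complement in 2-D
    return lam1 * np.outer(e1, e1) + lam2 * np.outer(e2, e2)

# A pure "stick" tensor (lam2 = 0) oriented along the x-axis:
T = make_tensor([1.0, 0.0], 1.0, 0.0)
```

With lam1 = lam2 the result is a "ball" tensor (the identity scaled), which carries orientation-free confidence.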
Tensor Voting System
  • Creating estimates of tensors from input tokens.
    • Principal Component Analysis
    • Canny edge detection
    • Ball Voting
Tensor Voting System
  • Voting
    • For each tensor in the sparse field
      • Create a voting field based on the sigma parameter.
      • Align the voting field to the direction of the tensor.
      • Add the voting field to the sparse field.
    • Produces a dense voting field.
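The voting steps above can be sketched as follows. This toy version casts an isotropic, distance-attenuated vote from each token; the real system additionally aligns an oriented stick voting field with each tensor's direction before accumulating it:

```python
import numpy as np

def vote(sparse_tokens, shape, sigma=10.0):
    """Accumulate a dense field of 2x2 tensors from sparse tensor tokens.

    sparse_tokens : list of ((y, x), 2x2 tensor) pairs
    shape         : (height, width) of the image
    """
    dense = np.zeros(shape + (2, 2))
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    for (py, px), tensor in sparse_tokens:
        d2 = (ys - py) ** 2 + (xs - px) ** 2
        w = np.exp(-d2 / sigma**2)            # distance attenuation
        dense += w[..., None, None] * tensor  # add the weighted vote everywhere
    return dense

tokens = [((2, 2), np.eye(2))]               # one ball tensor at the center
field = vote(tokens, (5, 5), sigma=2.0)
```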
Tensor Voting System
  • Voting Fields
    • The window size is calculated from the scale parameter σ (a larger σ yields a larger voting neighborhood).
    • The direction of each tensor in the field is calculated from the osculating arc joining the voter to the receiver.
    • Attenuation is derived from the decay function DF(s, κ) = exp(−(s² + cκ²) / σ²), where s is arc length, κ is curvature, and c weights curvature decay against distance decay.
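Assuming the standard tensor-voting decay function DF(s, κ) = exp(−(s² + cκ²)/σ²), the attenuation is one line:

```python
import numpy as np

def decay(s, kappa, sigma, c):
    """Tensor-voting attenuation DF(s, kappa) = exp(-(s^2 + c*kappa^2) / sigma^2).

    s     : arc length from voter to receiver
    kappa : curvature of the connecting arc
    sigma : scale parameter (controls the neighborhood/window size)
    c     : weight trading curvature decay against distance decay
    """
    return np.exp(-(s**2 + c * kappa**2) / sigma**2)

# Votes fall off with distance and curvature: near, straight paths vote strongly.
v0 = decay(0.0, 0.0, sigma=10, c=1)    # 1.0 at the voter itself
v1 = decay(10.0, 0.0, sigma=10, c=1)   # ~0.37 one sigma away
```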
Tensor Voting System
  • Voting Fields (Attenuation)
    • Red and yellow are higher votes, blue and turquoise lower.
    • Shape related to continuation vs. proximity.
Tensor Voting System
  • Extracting features from the dense voting field.
    • λ1 − λ2 determines the likelihood of lying on a curve.
    • λ2 determines the likelihood of being a junction.
    • If both λ1 and λ2 are small, there is little confidence that a curve or junction exists at that point.
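Curve and junction saliency fall out of the eigendecomposition of each tensor in the dense field; a sketch using the standard convention λ1 ≥ λ2:

```python
import numpy as np

def saliencies(T):
    """Decompose a 2x2 tensor into curve and junction saliency.

    lam1 - lam2 : curve (stick) saliency
    lam2        : junction (ball) saliency
    If both eigenvalues are small, confidence in any feature is low.
    """
    lam = np.linalg.eigvalsh(T)   # eigenvalues in ascending order: [lam2, lam1]
    lam2, lam1 = lam[0], lam[1]
    return lam1 - lam2, lam2

curve, junction = saliencies(np.array([[1.0, 0.0], [0.0, 0.2]]))
```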
Tensor Voting System
  • Extracting features from dense voting field.

Original Image Curve Map Junction Map

Post-processing
  • Extracting features from curve map and junction map.
    • Global Threshold and Thinning
    • Local Threshold and Thinning
    • Local Normal Maximum
    • Knowledge Based Approach
Post-processing
  • Global threshold on curve map.

Applied Threshold Thinned Image
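Global thresholding of the curve map is a single comparison per pixel; the threshold value and toy map below are illustrative (thinning, the second step, would typically use a skeletonization routine):

```python
import numpy as np

def global_threshold(curve_map, t):
    """Binarize the dense curve map with one global threshold t."""
    return curve_map >= t

cm = np.array([[0.1, 0.8],
               [0.6, 0.2]])
mask = global_threshold(cm, 0.5)   # True where curve saliency is high
```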

Post-processing
  • Local threshold on curve map.

Applied Threshold Thinned Image

Post-processing
  • Local Normal Maximum
    • Looks for maximum over the normal of the tensor at each point.

Applied Threshold Thinned Image
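The local normal maximum can be sketched as non-maximum suppression along each tensor's normal. The nearest-neighbor sampling and the toy ridge below are simplifying assumptions, not the thesis's exact procedure:

```python
import numpy as np

def local_normal_maximum(curve_map, normals):
    """Keep a pixel only if its saliency is maximal along the tensor normal.

    curve_map : 2-D array of curve saliencies
    normals   : (H, W, 2) array of unit normal vectors (dy, dx) per pixel
    Compares each pixel with its two neighbors one step along +/- the normal.
    """
    h, w = curve_map.shape
    out = np.zeros_like(curve_map, dtype=bool)
    for y in range(h):
        for x in range(w):
            ny, nx = normals[y, x]
            v = curve_map[y, x]
            ok = True
            for yy, xx in ((int(round(y + ny)), int(round(x + nx))),
                           (int(round(y - ny)), int(round(x - nx)))):
                if 0 <= yy < h and 0 <= xx < w and curve_map[yy, xx] > v:
                    ok = False   # a neighbor along the normal is stronger
            out[y, x] = ok
    return out

# A horizontal ridge with vertical normals: only the ridge row survives.
cm = np.array([[0.0, 0.0, 0.0],
               [1.0, 1.0, 1.0],
               [0.0, 0.0, 0.0]])
normals = np.zeros((3, 3, 2))
normals[..., 0] = 1.0            # every normal points in the +y direction
ridge = local_normal_maximum(cm, normals)
```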

Post-processing
  • Knowledge Based Approach
    • Uses knowledge of types of artifacts of the local threshold to clean and prep the image.

Original Image Knowledge Based Approach

Experiments
  • Determine adequate parameters.
  • Identify weaknesses and strengths of each method.
  • Determine best performing methods.
  • Quantify the contributions of tensor voting.
  • Characterize distortion of methods on perfect inputs.
  • Determine the impact of misclassification of text on roads.
Experiments
  • Quantitative analysis done with recall and precision measurements.
    • Relevant is the set of all features that are in the ground truth.
    • Retrieved is the set of all features found by the system.
    • tp = True Positive, fn = False Negative, fp = False Positive
    • Recall measures the system's ability to find features.
    • Precision characterizes whether it found only relevant features.
    • For both recall and precision, 100% is best, 0% is worst.
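The two measures follow directly from the tp/fn/fp counts. Plugging in counts consistent with the "current best" intersection figures quoted earlier reproduces 75% recall and 84% precision (the fn and fp counts here are back-derived, illustrative numbers):

```python
def recall_precision(tp, fn, fp):
    """Recall = tp / (tp + fn); precision = tp / (tp + fp)."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    return recall, precision

# 75 true intersections found, 25 missed, 14 spurious detections:
r, p = recall_precision(tp=75, fn=25, fp=14)
print(round(r, 2), round(p, 2))  # 0.75 0.84
```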
Experiments
  • Data Selection
    • Data set must be large enough to adequately represent features (at least 100 samples).
    • One sub-image of the data must not be biased by the selector.
    • One sub-image may not overlap another.
    • A sub-image may not be a portion of the map which contains borders, margins or the legend.
Experiments
  • Ground Truth
    • Manually generated from samples.
    • Roads and intersections manually identified.
    • Ground truth is generated twice; samples with more than a 5% difference are re-examined for accuracy.

Ground truth Original Image

Experiments
  • Best Pre-Processing Method
    • All pre-processing methods were evaluated for effectiveness without tensor voting or post-processing.
    • Best window size parameter for k-Nearest Neighbors was qualitatively found to be 3x3.
    • The best k parameter for k-Nearest Neighbors was quantitatively found to be 10.
    • The best pre-processing method found was the Knowledge Based Classifier
Experiments
  • Tensor Voting System
    • Results from tests show the best value for σ is between 10 and 16, with little difference in performance across that range.
Experiments
  • Tensor Voting System
    • Contributions from tensor voting were mixed.
      • Thresholding methods performed worse.
      • The knowledge-based method improved road recall by 10%; road precision dropped by 2%; intersection recall increased by 22%; and intersection precision increased by 20%.
Experiments
  • Best Post-Processing
    • Finding the best window size for local thresholding.
    • The best window size was found to be between 10 and 14.
Experiments
  • Best Post-Processing
    • The best post-processing method was found by using a naïve pre-processing technique and tensor voting.
    • Knowledge Based Approach performed the best.
Experiments
  • Running the system on perfect data (ground truth as inputs) produced higher results than any other method (as expected).
  • Thresholding had a considerably low intersection precision due to artifacts produced in the process.
Experiments
  • Best combination found was k-Nearest Neighbors with a Knowledge Based Approach.
    • Note that the Knowledge Based Classifier, though the best stand-alone pre-processing method, was not the best choice in combination, due to the type of noise it produces.
    • With Text:
      • 92% Road Recall, 95% Road Precision
      • 82% Intersection Recall, 80% Intersection Precision
    • Without Text:
      • 94% Road Recall, 95% Road Precision
      • 83% Intersection Recall, 80% Intersection Precision
Experiments
  • Confidence Intervals (95% CI, 100 samples)
    • Road Recall:
      • Mean: 93.61% CI [ 92.47% , 94.75% ] ± 1.14%
    • Road Precision:
      • Mean: 95.23% CI [ 94.13% , 96.33% ] ± 1.10%
    • Intersection Recall:
      • Mean: 82.22% CI [ 78.91% , 85.51% ] ± 3.29%
    • Intersection Precision:
      • Mean: 80.1% CI [ 76.31% , 82.99% ] ± 2.89%
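These intervals are consistent with a normal-approximation 95% CI for a mean over 100 samples. For example, a sample standard deviation of about 5.8 (an assumed value, back-derived from the reported interval) reproduces the road-recall CI:

```python
import math

def ci95(mean, std, n):
    """95% normal-approximation confidence interval for a sample mean."""
    half = 1.96 * std / math.sqrt(n)   # half-width of the interval
    return mean - half, mean + half

# Road recall: mean 93.61% over n = 100 samples, assumed std ~5.8
lo, hi = ci95(93.61, 5.8, 100)
```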
Experiments
  • Adjusting parameters dynamically
    • Dynamically adjusting σ between 4 and 10 based on the number of features in a window did not produce much difference in recall and precision (less than 1%).
    • Dynamically adjusting the c parameter in tensor voting actually produced worse results because of exaggerations in the curve map due to slight variations in the tangents for each tensor.
Future Work & Issues
  • Tensor voting and thinning tend to merge intersections prematurely when the intersection angle is too low or the roads are too thick.
  • The Hough transform may overcome this issue.
Future Work & Issues
  • Scanning noise will need to be removed in order to produce high intersection recall and precision results.
Future Work & Issues
  • Closely grouped and overlapping features.
Future Work & Issues
  • Developing other pre-processing and post-processing techniques.
    • Learning algorithms
    • Various local threshold algorithms
    • Road following algorithms