On the Interpolation Algorithm Ranking

10th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, 10th–13th July 2012, Florianópolis, SC, Brazil

Carlos López-Vázquez

LatinGEO Lab, SGM + Universidad ORT del Uruguay


What is algorithm ranking?

  • There exist many interpolation algorithms

  • Which is the best?

    • Is there a general answer?

    • Is there an answer for my particular dataset?

    • How to define the better-than relation between two given methods?

    • How confident should I be regarding such an answer?


What has been done?


  • Many papers so far

  • Ongoing interest

  • What does a typical paper look like?

    • Takes a dataset as an example: N points sampled somewhere

    • Subdivide the N points into two sets: Training Set {A} and Test Set {B}

      • A∩B = Ø; N = #{A} + #{B}

    • Repeat for all available algorithms:

      • Define the interpolant using {A}; blindly interpolate at the locations of {B}

    • Compare the known values at {B} with the interpolated ones

    • Compare how? Typically through RMSE/MAD

    • Better-than is then equivalent to lower RMSE (see the sketch below)
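
A minimal sketch of this protocol in Python, assuming scattered 2-D sample coordinates xy with values z; SciPy's griddata interpolators stand in for whatever algorithms are under comparison, and the helper names are illustrative, not from the paper:

    import numpy as np
    from scipy.interpolate import griddata

    def rmse(z_hat, z_true):
        """Root mean square error between interpolated and known values."""
        return float(np.sqrt(np.mean((z_hat - z_true) ** 2)))

    def rank_algorithms(xy, z, train_idx, test_idx,
                        methods=("nearest", "linear", "cubic")):
        """Define an interpolant on {A}, blindly interpolate at {B},
        and rank the methods by RMSE (lower is better)."""
        scores = {}
        for m in methods:
            # griddata returns NaN outside the convex hull of {A};
            # a real study would handle or avoid those test points.
            z_hat = griddata(xy[train_idx], z[train_idx], xy[test_idx], method=m)
            scores[m] = rmse(z_hat, z[test_idx])
        return sorted(scores.items(), key=lambda kv: kv[1])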


Is RMSE/MAD/etc. suitable as a metric?

  • Different interpolation algorithms lead to different-looking fields

  • RMSE might not be representative. Why?

  • Let’s consider spectral properties

(Images from www.spatialanalysisonline.com)


Some spectral metric of agreement

  • For example, the ESAM metric:

    • U = fft2d(measured error field), U(i,j) ≥ 0

    • V = fft2d(interpolated error field), V(i,j) ≥ 0

    • Ideally, U = V

    • 0 ≤ ESAM(U,V) ≤ 1

    • ESAM(W,W) = 1

Hint: there might be better options than ESAM.
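
The slides do not spell out the ESAM formula, so the sketch below assumes one plausible realization: a cosine similarity between the 2-D amplitude spectra of the two fields. This choice satisfies the stated properties, since amplitude spectra are non-negative (giving 0 ≤ ESAM(U,V) ≤ 1) and identical spectra give ESAM(W,W) = 1:

    import numpy as np

    def esam(measured, interpolated):
        """Assumed spectral-agreement metric in the spirit of ESAM:
        cosine similarity between 2-D amplitude spectra.
        Equals 1.0 when the two fields have identical spectra."""
        U = np.abs(np.fft.fft2(measured))      # U(i,j) >= 0
        V = np.abs(np.fft.fft2(interpolated))  # V(i,j) >= 0
        return float(np.sum(U * V) / (np.linalg.norm(U) * np.linalg.norm(V)))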


How confident should I be regarding such an answer?

  • Given {A} and {B}a deterministic answer

  • How to attach a confidence level? Or just some uncertainty?

    • Perform Cross Validation (Falivene et al., 2010)

      • Set #{B} = 1 and leave the rest in {A}

      • N possible choices (events) for selecting {B}

      • Evaluate RMSE for each method and event

    • Average for each method over N cases

    • Better-than is now Average-run-better-than

  • Simulate

    • Sample {A} from N, #{A}=m, m<N

    • Evaluate RMSE for each method and event, and create rank(i)

    • Select a confidence level, and apply Friedman's Test to all rank(i)

(Friedman's Test analogy: n wine judges each rank k different wines; see the sketch below.)
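
A sketch of that last step, assuming the simulations produced a matrix rmse_by_event of shape (events, methods); scipy.stats.friedmanchisquare implements Friedman's Test, with each event playing the role of a judge and each method the role of a wine:

    import numpy as np
    from scipy.stats import rankdata, friedmanchisquare

    def friedman_ranking(rmse_by_event, alpha=0.05):
        """rmse_by_event[i, j]: RMSE of method j in event i.
        Ranks the methods within each event (1 = best), then tests
        whether the methods' rank distributions differ significantly."""
        ranks = np.apply_along_axis(rankdata, 1, rmse_by_event)  # rank(i)
        stat, p_value = friedmanchisquare(*rmse_by_event.T)      # one column per method
        return ranks.mean(axis=0), stat, p_value, p_value < alpha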


The experiment

  • DEM of Montagne Sainte Victoire (France)

  • Sample {B}, 20 points, held fixed

  • Do 250 times:

    • Sample {A} points

    • Apply the six algorithms

    • Evaluate RMSE, MAD, ESAM, etc.

    • Evaluate ranking(i)

  • Evaluate the ranking of means over i

  • Apply Friedman's Test and compare (see the sketch below)
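
Putting the pieces together, a sketch of the 250-repetition loop, reusing the hypothetical rank_algorithms and friedman_ranking helpers sketched earlier (loading the DEM samples and the training-set size are assumptions here):

    import numpy as np

    rng = np.random.default_rng(seed=42)

    def run_experiment(xy, z, n_reps=250, n_test=20, n_train=200,
                       methods=("nearest", "linear", "cubic")):
        """Hold {B} fixed (20 points); resample {A} in each of the
        n_reps events and collect every method's RMSE."""
        test_idx = rng.choice(len(z), size=n_test, replace=False)
        pool = np.setdiff1d(np.arange(len(z)), test_idx)
        rmse_by_event = np.empty((n_reps, len(methods)))
        for i in range(n_reps):
            train_idx = rng.choice(pool, size=n_train, replace=False)
            for name, score in rank_algorithms(xy, z, train_idx, test_idx, methods):
                rmse_by_event[i, methods.index(name)] = score
        return rmse_by_event

Feeding rmse_by_event to friedman_ranking then yields the mean ranks and the significance verdict at the chosen confidence level.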


Results

  • The ranking based on the mean of simulated values might differ from the Friedman's Test ranking

  • Ranking using spectral properties might disagree with that of RMSE/MAD

  • Friedman’s Test has a sound statistical basis

  • Spectral properties of the interpolated field might be important for some applications


Thank you!

Questions?


Results (continued)

  • Other results, valid for this particular dataset:

    • Ranking using ESAM varies with #{A}

    • According to the ESAM criterion, Inverse Distance Weighting (IDW) quality degrades as #{A} increases

    • According to the RMSE criterion, IDW is the best

      • With a significant difference with respect to the second-best

      • At the 95% confidence level

      • Irrespective of #{A}

    • According to the ESAM criterion, IDW is NOT the best


Other possible spectral metrics (to be developed)

