
10th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences from 10th to 13th July 2012, Florianópolis, SC, Brazil.

On the Interpolation Algorithm Ranking

Carlos López-Vázquez

LatinGEO Lab

SGM + Universidad ORT del Uruguay



What is algorithm ranking?

  • There exist many interpolation algorithms

  • Which is the best?

    • Is there a general answer?

    • Is there an answer for my particular dataset?

    • How to define the better-than relation between two given methods?

    • How confident should I be in such an answer?



What has been done?

  • Many papers so far

  • Ongoing interest in the topic

  • What does a typical paper look like?

    • Takes a dataset as an example

  • N points sampled somewhere

  • Subdivide the N points into two sets: a Training Set {A} and a Test Set {B}

    • A∩B=Ø; N=#{A}+#{B}

  • Repeat for all available algorithms:

    • Define the interpolant using {A}; blindly interpolate at the locations of {B}

  • Compare the known values at {B} with the interpolated ones

  • Compare? Typically through RMSE/MAD

  • Better-than is equivalent to lower RMSE (a minimal sketch of this procedure follows below)
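
A minimal sketch of this train/test protocol, assuming Python with NumPy/SciPy; the synthetic dataset and the generic griddata interpolator stand in for the real data and algorithms compared in such papers, and are purely illustrative:

```python
# Sketch of the typical ranking experiment: split the N points into {A} and {B},
# fit each interpolator on {A}, blindly predict at the locations of {B},
# and score the predictions with RMSE and MAD.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
N = 500
xy = rng.uniform(0.0, 1.0, size=(N, 2))              # sample locations
z = np.sin(3 * xy[:, 0]) * np.cos(2 * xy[:, 1])      # synthetic field values

idx = rng.permutation(N)
A, B = idx[:400], idx[400:]                           # A ∩ B = Ø, N = #{A} + #{B}

for method in ("nearest", "linear", "cubic"):         # stand-ins for the algorithms compared
    z_hat = griddata(xy[A], z[A], xy[B], method=method)
    err = z_hat - z[B]
    rmse = np.sqrt(np.nanmean(err ** 2))              # root mean square error
    mad = np.nanmean(np.abs(err))                     # mean absolute deviation
    print(f"{method:8s}  RMSE={rmse:.4f}  MAD={mad:.4f}")
```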



Is RMSE/MAD/etc. suitable as a metric?

  • Different interpolation algorithms produce fields with a visibly different look

  • RMSE might not be representative. Why?

  • Let’s consider spectral properties

Images from www.spatialanalysisonline.com



Some spectral metric of agreement

  • For example, ESAM metric

  • U=fft2d(measured error field), U(i,j)≥0

  • V=fft2d(interpolated error field), V(i,j)≥0

  • ideally, U=V

  • 0≤ESAM(U,V)≤1

  • ESAM(W,W)=1

Hint: there might be better options than ESAM; a rough sketch of a metric in this spirit follows below
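
A rough sketch of a spectral agreement score of this kind. The exact ESAM formula is not given on the slide, so this is an assumption: the cosine of the angle between the non-negative 2-D FFT amplitude spectra, which is bounded by 0 and 1 and equals 1 when the two fields coincide (the name spectral_agreement is illustrative):

```python
# Not the exact ESAM formula (not given on the slide): a cosine-of-angle score
# between the non-negative FFT amplitude spectra U and V, so 0 <= score <= 1
# and score(W, W) == 1.
import numpy as np

def spectral_agreement(measured: np.ndarray, interpolated: np.ndarray) -> float:
    U = np.abs(np.fft.fft2(measured))        # amplitude spectrum of the measured field, U(i,j) >= 0
    V = np.abs(np.fft.fft2(interpolated))    # amplitude spectrum of the interpolated field, V(i,j) >= 0
    den = np.linalg.norm(U) * np.linalg.norm(V)
    return float(np.sum(U * V) / den) if den > 0 else 1.0

field = np.random.default_rng(1).normal(size=(64, 64))
smooth = (field + np.roll(field, 1, axis=0) + np.roll(field, 1, axis=1)) / 3.0
print(spectral_agreement(field, field))      # 1.0: identical fields agree perfectly
print(spectral_agreement(field, smooth))     # < 1: smoothing removes high frequencies
```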



How confident should I be in such an answer?

  • Given {A} and {B}a deterministic answer

  • How to attach a confidence level? Or just some uncertainty?

    • Perform Cross Validation (Falivene et al., 2010)

      • Set #{B}=1, and leave the rest in {A}

      • N possible choices (events) for selecting {B}

      • Evaluate RMSE for each method and event

    • Average for each method over N cases

    • Better-than now means average-run-better-than, i.e. better on average over the N runs

  • Simulate

    • Sample {A} from the N points, #{A}=m, m<N

    • Evaluate RMSE for each method and event, and create rank(i)

    • Select a confidence level, and apply Friedman's Test to all rank(i) (see the sketch below)

Analogy: n wine judges each rank k different wines
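
A sketch of this simulate-and-rank idea and of Friedman's Test over the per-event rankings, using SciPy's friedmanchisquare. The method names and the RMSE values are placeholders standing in for the per-event scores that the resampling would actually produce:

```python
# Rank k methods over n events (resampled training sets {A}) and test with
# Friedman's Test whether the methods really differ in rank; like n wine
# judges each ranking the same k wines.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(2)
methods = ["IDW", "kriging", "spline", "TIN", "nearest", "bicubic"]   # illustrative list
n_events = 250
# rmse[i, j] = RMSE of method j on event i (placeholder values here)
rmse = rng.normal(loc=[1.00, 1.05, 1.10, 1.15, 1.20, 1.25], scale=0.15,
                  size=(n_events, len(methods)))

ranks = np.argsort(np.argsort(rmse, axis=1), axis=1) + 1   # rank(i): 1 = best in each event
for m, r in zip(methods, ranks.mean(axis=0)):
    print(f"{m:8s} mean rank {r:.2f}")

stat, p = friedmanchisquare(*rmse.T)     # one column of scores per method
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3g}")
# Reject the null (all methods rank equivalently) when p is below the chosen level.
```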



The experiment

  • DEM of Montagne Sainte Victoire (France)

  • Sample {B}, 20 points, held fixed

  • Do 250 times:

    • Sample {A} points

    • Apply six algorithms

    • Evaluate RMSE, MAD, ESAM, etc.

    • Evaluate ranking(i)

  • Evaluate ranking of means over i

  • Apply Friedman's Test and compare (see the sketch below)
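
A compact sketch of this experiment's structure. A synthetic surface stands in for the Montagne Sainte Victoire DEM (not reproduced here) and three griddata variants stand in for the six algorithms; it contrasts the ranking of the mean RMSE over the 250 events with the mean of the per-event rankings:

```python
# Resampling experiment: {B} of 20 points held fixed, {A} redrawn 250 times,
# each method scored by RMSE on {B}; compare ranking-of-means with mean-of-ranks.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)
pts = rng.uniform(0.0, 1.0, size=(2000, 2))
val = np.sin(4 * pts[:, 0]) + np.cos(3 * pts[:, 1])         # stand-in for the DEM samples

B = rng.choice(len(pts), size=20, replace=False)             # test set {B}, held fixed
rest = np.setdiff1d(np.arange(len(pts)), B)
methods = ["nearest", "linear", "cubic"]                     # stand-ins for the six algorithms

rmse = np.empty((250, len(methods)))
for i in range(250):                                         # do 250 times
    A = rng.choice(rest, size=200, replace=False)            # sample {A} points
    for j, m in enumerate(methods):                          # apply the algorithms
        z_hat = griddata(pts[A], val[A], pts[B], method=m)
        rmse[i, j] = np.sqrt(np.nanmean((z_hat - val[B]) ** 2))

ranks = np.argsort(np.argsort(rmse, axis=1), axis=1) + 1     # ranking(i) per event
rank_of_means = np.argsort(np.argsort(rmse.mean(axis=0))) + 1
print("rank of mean RMSE      :", dict(zip(methods, rank_of_means)))
print("mean of per-event ranks:", dict(zip(methods, ranks.mean(axis=0).round(2))))
```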



Results

  • Ranking using mean of simulated values might be different from Friedman’s test

  • Ranking using spectral properties might disagree with that of RMSE/MAD

  • Friedman’s Test has a sound statistical basis

  • Spectral properties of the interpolated field might be important for some applications



Thank you!

Questions?



Results

  • Other results, valid for this particular dataset

    • Ranking using ESAM varies with #{A}

    • According to ESAM criteria, Inverse Distance Weighting (IDW) quality degrades as #{A} increases

    • According to RMSE criteria, IDW is the best

      • With a significant difference with respect to the second-ranked method

      • With 95% confidence level

      • Irrespective of #{A}

    • According to ESAM criteria, IDW is NOT the best



Other possible spectral metrics (to be developed)

