Presentation Transcript
Analysis of scores, datasets, and models in visual saliency modeling
  • Ali Borji, Hamed R. Tavakoli, Dicky N. Sihite, and Laurent Itti
Visual Saliency
  • Why important?
  • Current status
  • Methods: numerous / 8 categories (Borji and Itti, PAMI, 2012)
  • Databases:
  • Measures (a sketch of two of these follows this list):
    • scan-path analysis
    • correlation-based measures
    • ROC analysis
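
As a rough illustration of the last two measure families, here is a minimal sketch (not the paper's exact formulation) that computes a correlation coefficient (CC) and a plain ROC/AUC score for a saliency map against human fixations. The function names, the scikit-learn dependency, and the choice of all non-fixated pixels as negatives are assumptions made for this example; a scan-path matching sketch appears with the scanpath-prediction slide later in the deck.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def cc_score(saliency_map, fixation_map):
    """Correlation-based measure: Pearson correlation between a saliency map
    and a (typically Gaussian-smoothed) human fixation map."""
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-12)
    f = (fixation_map - fixation_map.mean()) / (fixation_map.std() + 1e-12)
    return float(np.mean(s * f))

def auc_score(saliency_map, fixation_points):
    """ROC analysis: saliency values at fixated pixels are positives,
    values at all remaining pixels are negatives."""
    labels = np.zeros(saliency_map.shape, dtype=bool)
    rows, cols = zip(*fixation_points)   # fixation_points: iterable of (row, col)
    labels[rows, cols] = True
    return roc_auc_score(labels.ravel(), saliency_map.ravel())
```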

How well does my method work?

Benchmarks
  • Judd et al. http://people.csail.mit.edu/tjudd/SaliencyBenchmark/
  • Borji and Itti https://sites.google.com/site/saliencyevaluation/
  • Yet another benchmark!!!?
Dataset Challenge

[Example images from the Toronto, MIT, and Le Meur eye-tracking datasets]
  • Dataset bias:
    • center bias (CB)
    • border effect
  • Metrics are affected by these phenomena; see the center-bias sketch after this list.
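
To make the center-bias point concrete, here is a minimal, illustrative sketch of a trivial, image-independent "saliency" map: a single Gaussian centered on the frame (sigma_frac is an arbitrary choice). On center-biased datasets such a map scores surprisingly well under the plain AUC measure without looking at the image at all.

```python
import numpy as np

def center_gaussian(height, width, sigma_frac=0.25):
    """Image-independent baseline 'saliency' map: a Gaussian centered on the image."""
    ys, xs = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    sigma = sigma_frac * min(height, width)
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

# Because observers tend to look near the center, this map obtains a high
# plain-AUC score on center-biased datasets despite ignoring the image.
center_baseline = center_gaussian(480, 640)
```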
Tricking the metric

Solution?

  • sAUC (shuffled AUC); a sketch follows this list
  • Best smoothing factor
  • More than one metric
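
A minimal sketch of the shuffled-AUC (sAUC) idea, assuming fixations are given as (row, col) pixel coordinates: positives are saliency values at fixations on the current image, while negatives are sampled from fixations recorded on other images of the dataset, so a purely center-biased map no longer gains an advantage. The helper name, the number of negatives, and the scikit-learn call are assumptions for this illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def shuffled_auc(saliency_map, fixations, other_fixations, n_neg=100, seed=None):
    """sAUC sketch: negatives are fixation locations taken from *other* images,
    which share the dataset's center bias and therefore cancel it out."""
    rng = np.random.default_rng(seed)
    other = np.asarray(other_fixations)
    idx = rng.choice(len(other), size=min(n_neg, len(other)), replace=False)
    pos = np.array([saliency_map[r, c] for r, c in fixations])
    neg = np.array([saliency_map[r, c] for r, c in other[idx]])
    scores = np.concatenate([pos, neg])
    labels = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    return roc_auc_score(labels, scores)
```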
The Feature Crisis

[Feature diagram: low-level channels (intensity, color, orientation, symmetry, depth, size) vs. high-level channels (people, cars, signs, text)]

Does it capture any semantic scene property or affective stimuli?

Challenge of performance on stimulus categories & affective stimuli
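
As a hedged illustration of the feature diagram above, the sketch below fuses low-level and high-level channel maps into a single saliency map with a normalized weighted sum. The channel names and weights are placeholders, and the high-level maps (people, text, signs) would come from separate detectors that are not shown here.

```python
import numpy as np

def combine_channels(channels, weights=None):
    """Fuse per-pixel feature maps (all HxW) into one saliency map by a
    weighted sum, normalizing each channel to [0, 1] first."""
    if weights is None:
        weights = {name: 1.0 for name in channels}
    out = np.zeros_like(next(iter(channels.values())), dtype=float)
    for name, ch in channels.items():
        ch = ch.astype(float)
        ch = (ch - ch.min()) / (ch.max() - ch.min() + 1e-12)
        out += weights.get(name, 1.0) * ch
    return out / (sum(weights.values()) + 1e-12)

# 'intensity', 'color', 'orientation' would be low-level maps; 'people' or
# 'text' would be outputs of object/text detectors, resized to HxW.
```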

The Benchmark: predicting scanpaths

[Diagram: fixation sequences encoded as region strings (aAdDbBcCaA, aAcCaA, aAcCbBcCaAaA, aAbBcCaA, bBbBcC, ...) compared to compute a matching score]
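
A minimal sketch of string-based scanpath matching in the spirit of the diagram above: each fixation sequence is encoded as a string over screen regions, and the matching score is derived from the Levenshtein edit distance between two such strings. Treating every character as one symbol and normalizing by the longer string's length are simplifying assumptions for this example.

```python
def levenshtein(a, b):
    """Edit distance between two scanpath strings (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def matching_score(predicted, observed):
    """Similarity in [0, 1]: 1 means identical fixation sequences."""
    if not predicted and not observed:
        return 1.0
    return 1.0 - levenshtein(predicted, observed) / max(len(predicted), len(observed))

# Example with region-coded scanpaths like those on the slide:
print(matching_score("aAbBcCaA", "aAcCaA"))  # 0.75
```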

Lessons learned
  • We recommend using the shuffled AUC (sAUC) score for model evaluation.
  • The type of stimulus affects model performance.
  • A combination of saliency and eye-movement statistics can be used for category recognition.
  • The gap between models and the inter-observer (IO) model seems small (though statistically significant), which signals the need for new datasets.
  • The challenge of decoding the observer's task from eye-movement statistics remains open.
  • New saliency evaluation scores can still be introduced.