  1. evaluation of aerosol CCI retrievals – Frascati 2013

  2. the choice (?)
  • AATSR F v142
  • AATSR O v202
  • AATSR S v040
  • MERIS A v21
  • MERIS B v11
  • MERIS E 802
  • PARASOL v30
  • MODIS_aqua c5.1
  • MODIS_terra c5.1
  • SEAWIFS
  • MISR v31
  • the ATSR Swansea algorithm was chosen for the multi-year processing (1997-2010)
  • how much has this retrieval improved compared to the start of the project?
  • are there still issues with this retrieval (and where)?

  3. the scoring concept
  • establish 1x1° gridded AERONET daily data and compare them to daily level-3 satellite data
  • compare (by sub-region)
  • local time series (bias, temporal correlation)
  • daily pattern (bias, spatial correlation)
  • combine the average of the two bias scores with the scores for temporal and spatial correlation
  • combine the regional scores into one global score (a rough code sketch follows below)
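
  The slide only loosely specifies how the pieces are combined. A minimal sketch of one way such a score could be computed – assuming daily 1x1° gridded AOD as NumPy arrays with NaN for missing data and equal weighting of the components, none of which is stated on the slide – might look like this:

      import numpy as np

      def nan_corr(a, b):
          """Pearson correlation over the pairs where both values are finite."""
          ok = np.isfinite(a) & np.isfinite(b)
          if ok.sum() < 3:
              return np.nan
          return np.corrcoef(a[ok], b[ok])[0, 1]

      def region_score(sat, aer):
          """Score one sub-region.

          sat, aer: 2-D arrays (n_days, n_cells) of daily 1x1-gridded AOD
          (level-3 satellite and gridded AERONET), NaN where missing.
          Returns (bias_score, temporal_corr, spatial_corr).
          """
          # local time series per grid cell -> temporal bias and correlation
          temporal_bias = np.nanmean(np.abs(np.nanmean(sat - aer, axis=0)))
          temporal_corr = np.nanmean([nan_corr(sat[:, c], aer[:, c])
                                      for c in range(sat.shape[1])])
          # daily spatial pattern -> spatial bias and correlation
          spatial_bias = np.nanmean(np.abs(np.nanmean(sat - aer, axis=1)))
          spatial_corr = np.nanmean([nan_corr(sat[d, :], aer[d, :])
                                     for d in range(sat.shape[0])])
          # "combine average of the two bias scores" with the correlations
          bias_score = 0.5 * (temporal_bias + spatial_bias)
          return bias_score, temporal_corr, spatial_corr

      def global_score(regions):
          """Average the regional scores into one global number (equal weights assumed)."""
          scores = np.array([region_score(sat, aer) for sat, aer in regions])
          bias, t_corr, s_corr = np.nanmean(scores, axis=0)
          # higher is better in this illustrative convention
          return 0.5 * (t_corr + s_corr) - bias

  The names and the final weighting are illustrative only; the actual Aerosol CCI scoring tool may combine the terms differently.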

  4. progress – swansea 4.0
  • ATSR Swansea 4.0 errors are smaller than at the start of the project, compared to AERONET
  [figure: maps of the spatial, overall, temporal and bias error differences vs. AERONET; legend: better scores / worse scores]

  5. progress – swansea 4.0
  • ATSR Swansea 4.0 errors are NOT smaller than at the start of the project over oceans, compared to MODIS
  [figure: maps of the spatial, overall, bias and temporal error differences over ocean; legend: better scores / worse scores]

  6. progress – s-4.0 / f-1.42 / o-202
  [figure: score maps for Swansea 4.0, Finland 1.42 and Oxford 202; legend: better scores / worse scores]

  7. (more) questions …
  • why is the performance over oceans so poor (or is the score simply lost because there are no, or not sufficient, reference data over oceans)?
  • the MODIS reference does not seem to be the cause, since results with POLDER as reference are similar
  • looking at the sub-scores … the new Swansea bias is actually improved, but the spatial and temporal correlations are poorer … why?
  • is there a processing problem (day offset)? (a quick check is sketched below)
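
  On the last point, one simple way to test for a day offset – an illustrative check, not part of the evaluation code – is to correlate a satellite time series with the collocated AERONET series at lags of -1, 0 and +1 days; if the correlation peaks at a non-zero lag for many stations, a date-handling problem in the level-3 aggregation is likely:

      import numpy as np

      def lagged_correlation(sat, aer, lags=(-1, 0, 1)):
          """Correlation of two daily AOD series (NaN = missing) at small day lags."""
          out = {}
          for lag in lags:
              shifted = np.roll(sat, lag)
              if lag > 0:
                  shifted[:lag] = np.nan      # discard values wrapped around by roll
              elif lag < 0:
                  shifted[lag:] = np.nan
              ok = np.isfinite(shifted) & np.isfinite(aer)
              out[lag] = (np.corrcoef(shifted[ok], aer[ok])[0, 1]
                          if ok.sum() > 2 else np.nan)
          return out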

  8. further thoughts
  • how quickly can the new Swansea scores over oceans be improved?
  • cloud screening?
  • wind-speed dependence of surface reflectance? (illustrated below)
  • in case of failure, are there alternative fall-back options for the 13-year reprocessing?
  • using the “old” Swansea retrieval over ocean?
  • using the Finland retrieval over ocean?
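
  As background to the wind-speed question above, two standard parameterisations illustrate how strongly the ocean surface signal depends on wind speed. This is generic textbook material, not the surface model actually used in the Swansea retrieval: the Cox & Munk (1954) slope variance that governs sun glint, and the Monahan & O'Muircheartaigh (1980) whitecap fraction.

      def cox_munk_slope_variance(u10):
          """Mean-square sea-surface slope vs. 10-m wind speed (m/s), Cox & Munk (1954)."""
          return 0.003 + 5.12e-3 * u10

      def whitecap_fraction(u10):
          """Fractional whitecap coverage, Monahan & O'Muircheartaigh (1980)."""
          return 3.84e-6 * u10 ** 3.41

      for u in (2.0, 5.0, 10.0, 15.0):
          print(f"U10 = {u:4.1f} m/s   slope variance = {cox_munk_slope_variance(u):.4f}"
                f"   whitecap cover = {whitecap_fraction(u) * 100:.2f} %")

  A few m/s difference in assumed wind speed changes the whitecap contribution by a factor of several, so an error in the surface model is a plausible candidate for the over-ocean problems seen above.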
