
Evaluation of year 2004 monthly GlobAER aerosol products



Presentation Transcript


  1. Evaluation of year 2004 monthly GlobAER aerosol products. Stefan Kinne

  2. the task • an evaluation of • GlobAER 2004 global maps • for aerosol optical depth (info on amount) • for Angstrom parameter (info on size) • by ATSR (dual view, global, at best 10 per month) • by MERIS (nadir view, global, at best 1 per day) • by SEVIRI (nadir view, regional, at best 30 per day) • by a merged (ATSR / MERIS / SEVIRI) composite

  3. the questions • how well do the data compare to trusted data references (e.g. AERONET)? • how well do the data compare to existing data sets – even for data from the same sensor? • can the performance be quantified? • more specifically … • what are the scores of a new (outlier-resistant) method … examining • bias, spatial and temporal variability?

  4. the investigated properties • aerosol optical depth (AOD) • extinction along a (vertical) direction due to scattering and absorption by aerosol • here for the entire atmosphere • here for the mid-visible (0.55 µm wavelength) • Angstrom parameter (Ang) • spectral dependence of AOD in the visible spectrum • small dependence (Ang ~ 0) → aerosol > 1 µm in size • strong decrease (Ang > 1.2) → aerosol < 0.5 µm in size
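The Angstrom parameter is obtained from the spectral slope of AOD between two wavelengths. A minimal sketch of that standard relation (the 0.44 / 0.87 µm wavelength pair and the function name are illustrative assumptions, not taken from the slides):

```python
import math

def angstrom_exponent(aod_1, wavelength_1_um, aod_2, wavelength_2_um):
    """Angstrom exponent: Ang = -ln(AOD1 / AOD2) / ln(wl1 / wl2)."""
    return -math.log(aod_1 / aod_2) / math.log(wavelength_1_um / wavelength_2_um)

# AOD dropping quickly with wavelength -> Ang > 1.2, i.e. mostly sub-0.5 µm aerosol
print(angstrom_exponent(0.30, 0.44, 0.12, 0.87))  # ~1.34
```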

  5. GlobAER 2004 maps: GAa – ATSR, GAs – SEVIRI, GAm – MERIS, GAx – merged • other multi-annual maps: med – model median, clim – med & aer(sun), sky – med & aer(sky), TO – TOMS • other 2004 maps: ATs – ATSR Swansea, SEb – SEVIRI Bruxelles, Mdb – MODIS deep blue, MO – MODIS standard collection 5, MI – MISR version 22, Ag – AVHRR GACP, Ap – AVHRR Patmos, Aer – AERONET (2004 monthly data-sets)

  6. AOD map comparisons • AOD annual maps • all available data • AOD seasonal maps • ATSR GlobAER vs Swansea • SEVIRI GlobAER vs RUIB-Bruxelles • difference maps … to a remote-sensing ‘best’ composite • all available data-sets • focus on the four GlobAER products

  7. AOD – 2004 annual maps

  8. ATSR / SEVIRI – seasonal AOD

  9. AOD difference to the ‘composite’ (map legend: underestimates ↔ overestimates)

  10. quick (annual) AOD check • ATSR • underestimates in dust regions • overestimates in biomass regions • SEVIRI • severe biomass overestimates • useful over-land estimates? • MERIS • apparent land snow-cover issue • merged • not the envisioned improvement

  11. Angstrom map comparisons • Angstrom annual maps • all available data • Angstrom seasonal maps • ATSR GlobAER vs Swansea • SEVIRI GlobAER vs RUIB-Bruxelles • difference maps … to a climatology (model & AERONET) • all available data-sets • focus on the four GlobAER products

  12. Angstrom – 2004 annual maps

  13. ATSR / SEVIRI – seasonal Angstr.

  14. Angstrom difference to the ‘climatology’ (map legend: underestimates ↔ overestimates)

  15. quick annual Angstrom check • ATSR • underestimates in the tropics • overestimates in the southern oceans • SEVIRI • underestimates over oceans • strong overestimates over land • MERIS • overestimates over land • merged • not the envisioned improvement

  16. the SCORING challenge • quantify data performance by one number • develop a score such that contributing errors can be traced back to • bias • spatial correlation • temporal correlation • spatial sub-scale (e.g. region) • temporal sub-scale (e.g. month, day) • make this score outlier resistant

  17. one number! -0.504

  18. info on the overall bias: the sign of -0.504 is the sign of the bias

  19. |1| is perfect … 0 is poor: for -0.504 the sign gives the sign of the bias, and the closer the score is to an absolute value of 1.0, the better

  20. product of sub-scores: -0.504 = 0.9 * (-0.7) * 0.8, i.e. temporal correlation sub-score * bias sub-score * spatial correlation sub-score; the bias sub-score carries the sign of the bias, and the closer each factor is to an absolute value of 1.0, the better
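A minimal sketch of the multiplicative structure described on this slide; the function name is illustrative, and only the 0.9 / -0.7 / 0.8 example values come from the slide:

```python
def overall_score(time_score, bias_score, spatial_score):
    """Overall score as the product of the three sub-scores: the bias sub-score
    carries the sign, and each factor is at most 1 in absolute value."""
    return time_score * bias_score * spatial_score

print(overall_score(0.9, -0.7, 0.8))  # ≈ -0.504, the example used on the slides
```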

  21. spatial stratification: the spatial score (0.8) in -0.504 = 0.9 * (-0.7) * 0.8 (overall score = time score * bias score * spatial score) is built from spatial sub-scale scores over TRANSCOM regions, combined with regional surface-area weights

  22. temporal stratification: the time score (0.9) in -0.504 = 0.9 * (-0.7) * 0.8 (overall score = time score * bias score * spatial score) is built from temporal sub-scale scores (e.g. months or days), with the instantaneous median data averaged in time

  23. sub-score definition • each sub-score S is defined • by an error e and • by an error weight w • S = 1 - w * e • this applies to the time score, the bias score and the spatial score, at each spatial and temporal sub-scale (e.g. month or days)

  24. definition of errors e • S = 1 - w * e • all values for the errors e are rank-based • for the “time score” and the “spatial score”, rank correlation coefficients for the data pairs are determined • e_correlation = (1 - rank correlation coefficient) / 2 (correlated: e = 0, anti-correlated: e = 1)
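A sketch of the correlation-based sub-score, assuming the rank correlation is Spearman's coefficient and using the S = 1 - w * e definition from the previous slide (the scipy call and the function name are assumptions, not the actual evaluation code):

```python
from scipy.stats import spearmanr

def correlation_sub_score(retrieved, reference, weight):
    """S = 1 - w * e with a rank-based correlation error:
    e = (1 - rank correlation) / 2, i.e. e = 0 for perfectly correlated
    data pairs and e = 1 for perfectly anti-correlated ones."""
    rho, _ = spearmanr(retrieved, reference)
    error = (1.0 - rho) / 2.0
    return 1.0 - weight * error

# Perfectly rank-correlated data pairs give e = 0 and hence S = 1
print(correlation_sub_score([0.1, 0.2, 0.4], [0.15, 0.25, 0.5], weight=1.0))  # 1.0
```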

  25. definition of errors e • S = 1 - w * e • for the “bias score”, all data-pairs are placed in a single array and ranked by value • the ranks are then separated according to data origin, summed, and the rank-sums are compared • e_bias = (rank-sum1 - rank-sum2) / (rank-sum1 + rank-sum2) (strong negative bias: e = -1, strong positive bias: e = +1) • an example (“how does the rank bias error work?”): set 1 = {1, 7, 8}, set 2 = {3, 4, 9}; the pooled values 9 8 7 4 3 1 get the ranks 1 2 3 4 5 6, so rank-sum 1 = 11 and rank-sum 2 = 10; e = (11 - 10) / (11 + 10) ≈ 0 → no clear bias
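A sketch reproducing the worked rank-bias example from this slide (the function name is illustrative and ties are ignored for simplicity):

```python
def rank_bias_error(set1, set2):
    """Pool both data sets, rank all values (rank 1 = largest), split the ranks
    by data origin and compare the rank sums:
    e = (rank_sum1 - rank_sum2) / (rank_sum1 + rank_sum2)."""
    pooled = sorted(set1 + set2, reverse=True)
    rank = {value: i + 1 for i, value in enumerate(pooled)}  # assumes distinct values
    rank_sum1 = sum(rank[v] for v in set1)
    rank_sum2 = sum(rank[v] for v in set2)
    return (rank_sum1 - rank_sum2) / (rank_sum1 + rank_sum2)

# Slide example: set 1 = {1, 7, 8}, set 2 = {3, 4, 9}
print(rank_bias_error([1, 7, 8], [3, 4, 9]))  # (11 - 10) / 21 ≈ 0.048 -> no clear bias
```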

  26. definition of error weight w • S = 1 - w * e • w is a weight factor based on the inter-quartile range / median ratio • w = (75% pdf - 25% pdf) / 50% pdf … but not larger than 1.0 (w ≤ 1.0) • simply put … if there is no variability, an error does not matter
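A sketch of the error weight, reading the 25% / 50% / 75% values as the quartiles and the median of the data (numpy usage and the function name are assumptions):

```python
import numpy as np

def error_weight(values):
    """w = inter-quartile range / median, capped at 1.0: when the data show
    little or no variability, w shrinks and the associated error matters less."""
    q25, q50, q75 = np.percentile(values, [25, 50, 75])
    return min((q75 - q25) / q50, 1.0)

print(error_weight([0.10, 0.12, 0.15, 0.20, 0.30]))  # IQR 0.08 / median 0.15 ≈ 0.53
```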

  27. scoring summary • one single score … • … without sacrificing spatial and temporal detail ! • stratification into error contribution from • bias • spatial correlation • temporal correlation • robustness against outliers • still … just one of many possible approaches • now to some applications …

  28. questions • how did the GlobAER products score? • overall? • seasonality? • spatial correlation? • bias? • in what regions? • in what months? • how do the scores compare to other retrievals … • with the same sensor (for the same year 2004) • with other sensors (for the same year 2004)

  29. GlobAER 2004 maps: GAa – ATSR, GAs – SEVIRI, GAm – MERIS, GAx – merged • other multi-annual maps: med – model median, clim – med & aer(sun), sky – med & aer(sky), TO – TOMS • other 2004 maps: ATs – ATSR Swansea, SEb – SEVIRI Bruxelles, Mdb – MODIS deep blue, MO – MODIS standard collection 5, MI – MISR version 22, Ag – AVHRR GACP, Ap – AVHRR Patmos, Aer – AERONET (2004 monthly data-sets)

  30. vs sun-photometry: annual global scores – AOD / Angstrom

  year 2004 - AOD       TOTAL  seas  bias  corr
  GAa                    .47   .81   .81   .72
  GAm                    .43   .73   .82   .72
  GAx                    .46   .77   .81   .75
  GAs                    --    --    --    --
  ATs                    .57   .88   .86   .75
  SEb                    --    --    --    --
  clim                   .72   .94   .88   .87
  MISR                   .59   .90   .87   .76
  MOD                    .65   .92   .88   .80
  best                   .69   .91   .88   .87

  year 2004 - Angstrom  TOTAL  seas  bias  corr
  GAa                   -.57   .83  -.88   .79
  GAm                    .41   .73   .83   .66
  GAx                    .55   .84   .87   .75
  GAs                    --    --    --    --
  ATs                   -.49   .77  -.87   .73
  SEb                    --    --    --    --
  clim                   .79   .94   .93   .90
  MISR                   .67   .92   .90   .81
  med                   -.59   .81  -.87   .83
  MISR                   .62   .90   .90   .77

  31. annual AOD scores – different references

  year 2004 – AOD (AERONET)      TOTAL  seas  bias  corr
  GAa                             .47   .81   .81   .72
  GAm                             .43   .73   .82   .72
  GAx                             .46   .77   .81   .75
  GAs                             --    --    --    --
  ATs                             .57   .88   .86   .75
  SEb                             --    --    --    --
  clim                            .72   .94   .88   .87
  MISR                            .59   .90   .87   .76
  MOD                             .65   .92   .88   .80
  best                            .69   .91   .88   .87

  year 2004 – AOD (climatology)  TOTAL  seas  bias  corr
  GAa                             .47   .86   .80   .69
  GAm                             .37   .77   .83   .58
  GAx                             .41   .82   .77   .66
  GAs                             --    --    --    --
  ATs                             .47   .85   .80   .69
  SEb                             --    --    --    --
  aer                            -.72   .94  -.88   .87
  MISR                            .51   .89   .80   .71
  MOD                             .56   .87   .85   .75
  best                            .65   .89   .87   .84

  32. vs sun-photometry ATSR AOD - regional errors / data

  33. vs sun-photometry merged AOD - regional errors / data

  34. vs sun-photometry ATSR-s AOD - regional errors / data

  35. annual AOD scores – land / ocean

  year 2004 – land AOD   TOTAL  seas  bias  corr
  GAa                    -.38   .72   .82   .64
  GAm                     .42   .73   .80   .72
  GAx                     .36   .64   .82   .70
  GAs                     --    --    --    --
  ATs                     .55   .88   .88   .72
  SEb                     --    --    --    --
  clim                    .76   .95   .91   .89
  MISR                   -.63   .92  -.89   .77
  MOD                    -.59   .92  -.85   .75
  best                    .68   .92   .87   .85

  year 2004 – ocean AOD  TOTAL  seas  bias  corr
  GAa                     .52   .85   .80   .76
  GAm                     .43   .73   .83   .71
  GAx                     .53   .86   .80   .77
  GAs                     --    --    --    --
  ATs                     .58   .89   .85   .77
  SEb                     --    --    --    --
  clim                    .69   .92   .87   .86
  MISR                    .57   .89   .86   .75
  MOD                     .70   .93   .90   .84
  best                    .70   .90   .88   .88

  36. vs sun-photometry ATSR AOD – temporal total errors

  37. vs sun-photometry: monthly / regional ‘error’ change • ATSR (GlobAER) minus ATSR (Swansea): Δ total AOD error (= 1 - |S|) (map legend: improvement ↔ deterioration)

  38. vs sun-photometry: monthly / regional ‘error’ change • SEVIRI (GlobAER) vs SEVIRI (Bruxelles): Δ total AOD error (= 1 - |S|) (map legend: improvement ↔ deterioration)

  39. AOD summary • ATSR by GlobAER • poorer than MODIS, MISR and even ATSR-s • stronger score deductions over land than over oceans • ocean scores are usually better than land scores • low bias over land, high bias over oceans • errors are larger for the northern hemisphere • MERIS by GlobAER • poorer than ATSR … and also the ‘merged’ product

  40. vs sun-photometry ATSR Angstrom - regional errors

  41. annual Angstrom scores – land / ocean

  year 2004 – land Angstrom   TOTAL  seas  bias  corr
  GAa                          .54   .79   .88   .78
  GAm                          .39   .73   .80   .65
  GAx                          .50   .79   .83   .76
  GAs                          --    --    --    --
  ATs                         -.44   .72  -.85   .73
  SEb                          --    --    --    --
  clim                         .82   .97   .93   .91
  MISR                        -.66   .91  -.90   .81
  med                         -.66   .91  -.90   .81

  year 2004 – ocean Angstrom  TOTAL  seas  bias  corr
  GAa                         -.58   .85  -.87   .78
  GAm                          --    --    --    --
  GAx                         -.58   .87  -.90   .75
  GAs                          --    --    --    --
  ATs                          .53   .81   .89   .74
  SEb                          --    --    --    --
  clim                         .76   .93   .93   .88
  MISR                         .68   .93   .91   .81
  med                         -.54   .75  -.85   .84

  42. vs sun-photometry ATSR Angstr. – temporal total errors

  43. vs sun-photometry: monthly / regional ‘error’ change • ATSR (GlobAER) vs ATSR (Swansea): Δ total Angstrom error (= 1 - |S|) (map legend: improvement ↔ deterioration)

  44. Angstrom summary • ATSR by GlobAER (model-based!) • poorer than MODIS and MISR, better than ATSR-s • ocean scores are slightly above land scores • (ocean scores are usually better than land scores) • high bias over land, low bias over oceans • higher errors during (continental) summers • no benefits from merged products • lack of MERIS Angstrom data over oceans

  45. outlook • focus should be on (long-term) ATSR • improvement is still needed • an Angstrom constraint should help • collaborate with Swansea • merged data are conceptually interesting • … but limited by the poorest link (MERIS) • if using different sensors … use the same model!

  46. extras

  47. AOD ATSR – GlobAER 2004

  48. AOD SEVIRI – GlobAER 2004

  49. AOD MERIS – GlobAER 2004

  50. AOD merged – GlobAER 2004
