
GEWEX Aerosol Panel: A critical review of the efficacy of commonly used aerosol optical depth retrievals

GEWEX Panel: Sundar Christopher, Richard Ferrare, Paul Ginoux, Stefan Kinne, Gregory G. Leptoukh, Jeffrey Reid, Paul Stackhouse

Programmatic support: Hal Maring, Charles Ichoku, Bill Rossow

Comments on this presentation: jeffrey.reid@nrlmry.navy.mil


Bottom line up front

Since our last briefing (Aug 2010), the GEWEX Aerosol Assessment Panel (GAAP) has nearly completed its first report.

By late October we expect to send sections to the instrument science teams for “fact checking.”

The report will lay out the nature of the aerosol problem, with a synopsis of the literature and commentary on verification methods and findings.

Phase 2, a detailed independent evaluation, will not start until MODIS Collection 6 and MISR v23 are officially released.



The aerosol problem

The aerosol field has grown rapidly in recent years, with dozens of products and dozens of applications.

But most products sit in the twilight zone between “research,” “development,” and “production.”

This is reinforced by the funding situation: money for product development, maintenance, and verification is limited, so developers spend more time “using” than “supporting” their products.

By the time the wider community figures out how a product is doing, a new version is released.

Situation: Confusion and some rancor in the community as to the actual efficacy and appropriate application of these data sets

Response: Reform the GEWEX Aerosol Panel (GAP)



The GAP mandate

NASA HQ requested the development of a comprehensive evaluation of the current state of the science, performed within the GEWEX framework.

The team is to be small and well rounded, and no larger than necessary (i.e., no large committees).

Members are to be ‘jurors drawn from accomplished peers’ to examine the 7 most common global products: AVHRR (GACP and NOAA), MISR, MODIS (Standard & Deep Blue), OMI, and POLDER.



The GAP mandate (continued)

Phase 1: Perform a comprehensive literature review and evaluation. The deliverable will be a report on the state of the science, the application of satellite aerosol data, the identification of shortcomings, and broad recommendations to the field on future development and verification needs.

Phase 2: Based on Phase 1, examine in detail specific issues in the generation of retrieval and gridded products.

Peer review of the peer review: findings are given to the instrument teams for comment before release. After an iteration, team rebuttals can be made part of the public record.

Where are we now? Nearing the end of Phase 1. The report is 100+ pages and growing…



Customers and issues

Often aerosol products are thought of as climate products.

However, all of the world’s major Numerical Weather Prediction (NWP) centers have aerosol assimilation programs, and aerosol data have worked their way into numerous applications.

There are aerosol observability issues:

  • reliable and timely delivery
  • bias (contextual, sampling) understanding / removal
  • error characterization (essential for assimilation)



Opinion: data product evaluation, validation, and verification

Reliable and timely delivery of satellite aerosol and fire products is only half the challenge. If products are to be integrated, then biases need to be removed through careful product evaluation and verification. Contextual & sampling biases need to be understood. Because of possible degradation in model performance through data assimilation, aerosol product error characterization has been emphasized more in the operations than climate communities. Indeed, despite popular misconceptions, operational data characterization requirements are often more strict than what is commonly used in the climate research community.

Reid, J. S., Benedetti, A., Colarco, P. R., and Hansen, J. A., 2011: International operational aerosol observability workshop. Bulletin of the American Meteorological Society, 92(6), ES21–ES24, doi:10.1175/2010BAMS3183.1


Panel members (HQ: Hal Maring and Charles Ichoku)

Sundar Christopher (UAH): chair, algorithm development, multi-sensor products

Richard Ferrare (NASA LaRC): lidar, field work, multi-sensor products

Paul Ginoux (NOAA GFDL): Global modeling and aerosol sources

Stefan Kinne (Max Planck): GEWEX Cloud, AEROCOM

Gregory Leptoukh (NASA GSFC): Level 3 product development and distribution

Jeffrey Reid (NRL): co-chair, aerosol observability, field work, verification, operational development

Paul Stackhouse: GEWEX radiation, atmospheric radiation and energetics.



Report outline

Introduction

Nature of the Problem

Overview of Assessed Satellite Products

Evaluation of Verification and Intercomparison Studies

Phase 1 Synopsis and Recommendations



Relative levels of efficacy required (approximate and not meant to offend…)

Studies / applications and what they require:

  • Imagery / contextual: “advantage of the human eye.”
  • Seasonal climatology: basically want to know where stuff is; can do one-up corrections.
  • Model applications, V&V, inventory: have stronger time constraints and need spatial bias elimination.
  • Data assimilation: quantify bias & uncertainty everywhere and correct where you can.
  • Parametric modeling and lower-order process studies: correlations de-emphasize bias.
  • Trend climatology: need to de-trend biases in retrieval and in sampling.
  • Higher-order process studies: push multi-product and satellite data.

V&V statistics must speak to these applications!

Hence, there is no “one size fits all” error parameter. Sorry….


Bias examples (1): global average, time series over water

Differences are a mix of radiometric bias, cloud bias, microphysical bias, sampling differences, and contextual bias.

Satellite retrievals tend to overestimate AOD (at low AOD) over oceans, especially MISR (Zhang and Reid, 2010).

Figure: global AOD difference between sensors (Mishchenko et al., 2007).


Bias examples (2): more

ASO clear sky bias, Zhang and Reid 2009

  • consideration of “what the satellite actually sees” is often overlooked

  • basic matchup between sensors is not trivial (see the sketch after this list).

  • core retrieval biases related to clouds, lower boundary condition and microphysics are non-random, and spatially / temporally correlated
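
A contrived illustration of the sampling and matchup point above (entirely synthetic numbers; the coverage model, in which one sensor loses coverage as AOD rises, is invented for this sketch): averaging each sensor over its own valid pixels versus over collocated pixels only can shift the aggregate AOD even when the retrievals themselves are unbiased.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic "true" 1x1-degree AOD field; a gamma distribution gives a realistic skew.
    true_aod = rng.gamma(shape=1.5, scale=0.1, size=(180, 360))

    # Sensor A retrieves ~60% of cells at random; sensor B screens bright scenes
    # aggressively, so its coverage drops as AOD rises (invented for illustration).
    mask_a = rng.random(true_aod.shape) < 0.6
    mask_b = rng.random(true_aod.shape) < np.clip(0.8 - 2.0 * true_aod, 0.05, 0.8)
    aod_a = np.where(mask_a, true_aod + rng.normal(0.0, 0.03, true_aod.shape), np.nan)
    aod_b = np.where(mask_b, true_aod + rng.normal(0.0, 0.03, true_aod.shape), np.nan)

    # "Face value" means over each sensor's own sampling.
    print("own sampling:  A=%.3f  B=%.3f" % (np.nanmean(aod_a), np.nanmean(aod_b)))

    # Means restricted to collocated cells only (a proper matchup).
    both = ~np.isnan(aod_a) & ~np.isnan(aod_b)
    print("collocated:    A=%.3f  B=%.3f" % (aod_a[both].mean(), aod_b[both].mean()))

The spread among the printed numbers comes entirely from sampling, not from the retrievals, which is why an apparent inter-sensor difference cannot be interpreted without a careful matchup.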

Figure: MODIS vs. AERONET regression slope.


Diagnostic versus prognostic error models: MODIS over-ocean example

Figure: RMSE(MODIS, AERONET) plotted as a function of AERONET AOD and as a function of MODIS AOD.

If we knew AOD then we would not need MODIS. All we have is MODIS’s own estimate of AOD….

From Shi et al., 2010, ACP

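
A minimal sketch of the distinction, using synthetic matchups in place of real collocated MODIS/AERONET data (the error magnitudes are invented): a diagnostic error model stratifies the error by the truth, which is available only at validation sites, while a prognostic model stratifies it by the retrieval's own AOD, which is available everywhere and therefore usable by assimilation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic collocated 550 nm AOD matchups (illustrative only).
    aeronet_aod = rng.gamma(shape=1.5, scale=0.1, size=5000)
    modis_aod = np.clip(aeronet_aod * rng.normal(1.0, 0.15, 5000)
                        + rng.normal(0.02, 0.03, 5000), 0.0, None)
    error = modis_aod - aeronet_aod
    edges = np.linspace(0.0, 0.6, 7)

    def binned_rmse(binning_var, error, edges):
        """RMSE of the retrieval error within bins of binning_var."""
        out = np.full(len(edges) - 1, np.nan)
        for i in range(len(edges) - 1):
            sel = (binning_var >= edges[i]) & (binning_var < edges[i + 1])
            if sel.any():
                out[i] = np.sqrt(np.mean(error[sel] ** 2))
        return out

    # Diagnostic: bin by the truth (AERONET). Prognostic: bin by the retrieval (MODIS).
    rmse_diag = binned_rmse(aeronet_aod, error, edges)
    rmse_prog = binned_rmse(modis_aod, error, edges)

    print("bin centers:     ", np.round(0.5 * (edges[:-1] + edges[1:]), 2))
    print("diagnostic RMSE: ", np.round(rmse_diag, 3))
    print("prognostic RMSE: ", np.round(rmse_prog, 3))

The two curves differ because binning by the retrieval mixes low-AOD truths with overestimates (and vice versa); only the prognostic version exists where there is no AERONET site, which is the sense of the remark that all we have is MODIS’s own estimate of AOD.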


Verification (1)

Since AOD can be measured, aerosol science projects are driven to validate against AOD, whether it is appropriate or not.

There is no shortage of validation studies. But they tend to be based on direct regression, to omit important details, and to be conducted over limited periods and/or locations. Hence, they tend to be of limited utility.
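
A sketch of what such a regression-based validation typically reports (synthetic matchups; the ±(0.03 + 0.05·AOD) envelope is used only as an example of a commonly quoted over-ocean expected-error form, not as any team's official specification):

    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic collocated 550 nm AOD matchups standing in for AERONET vs. satellite.
    aeronet = rng.gamma(shape=1.5, scale=0.1, size=2000)
    satellite = np.clip(0.9 * aeronet + 0.02 + rng.normal(0.0, 0.04, 2000), 0.0, None)

    # The usual "face value" statistics reported in validation studies.
    slope, intercept = np.polyfit(aeronet, satellite, 1)
    r = np.corrcoef(aeronet, satellite)[0, 1]
    bias = np.mean(satellite - aeronet)
    rmse = np.sqrt(np.mean((satellite - aeronet) ** 2))

    # Fraction of retrievals falling inside an example expected-error envelope.
    envelope = 0.03 + 0.05 * aeronet
    frac_in = np.mean(np.abs(satellite - aeronet) <= envelope)

    print("slope=%.2f intercept=%.3f r=%.2f bias=%.3f RMSE=%.3f in-envelope=%.2f"
          % (slope, intercept, r, bias, rmse, frac_in))

Aggregate numbers like these say little about the error attached to any individual retrieval, which is the gap between “face value statistics” and a per-retrieval error bar noted later in this presentation.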

While there are many cases of satellite cal-val components from field missions, analyses are usually not repeated for new product versions.

Even well designed third party studies are generally not utilized or cited by the production teams.



Verification (2)

Over ocean, there tends to be remarkable consistency both in AOD and in correlated bias across sensors. Cloud masking is still a problem.

Over land, there are strong regionally and temporally correlated biases across both algorithms and sensors, largely due to the lower boundary condition.

Radiance calibration is a significant problem and pops up in indirect ways. NASA is working on it.

Demonstrating diversity among aerosol products has little bearing on relative product efficacy.

Bottom line: the difference between “face value statistics” and an error bar for an individual retrieval is vast. How does this affect you? It depends on what you do with the data.



Key recommendations (1)

Algorithms need better documentation. The ATBDs are a good start, but they need to be kept current and perhaps even expanded.

Better strategies for “level 3 products” need to be devised and supported. One size fits none…..
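
One concrete example of why level 3 strategy matters (synthetic numbers; the coverage model is invented for this sketch): even the choice of weighting when aggregating level 2 retrievals into a monthly gridded value changes the answer once sampling covaries with AOD.

    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic month of level 2 retrievals in one grid cell: cloudier, higher-AOD
    # days yield fewer successful retrievals (a common sampling pattern).
    daily_aod = rng.gamma(shape=2.0, scale=0.1, size=30)
    n_obs = np.maximum(1, (40 * np.exp(-5.0 * daily_aod)).astype(int))
    daily_obs = [daily_aod[d] + rng.normal(0.0, 0.02, n_obs[d]) for d in range(30)]

    # Strategy 1: weight every level 2 retrieval equally (pixel-weighted).
    pixel_weighted = np.concatenate(daily_obs).mean()
    # Strategy 2: form daily means first, then average the days (day-weighted).
    day_weighted = np.mean([obs.mean() for obs in daily_obs])

    print("pixel-weighted monthly mean: %.3f" % pixel_weighted)
    print("day-weighted monthly mean:   %.3f" % day_weighted)

Because high-AOD days contribute fewer retrievals, the pixel-weighted mean comes out lower than the day-weighted one; which choice is right depends on the application, which is the sense in which one size fits none.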

One-size-fits-all verification does not work either. But there is a total lack of agreement on key verification metrics. The USER community needs to agree on what it thinks is important.

It should be a programmatic requirement of the science teams to develop prognostic error models as part of any mass produced and distributed product. Program offices need to fund this.



Key recommendations (2)

AERONET and MPL-net are clearly backbone networks for verification and we strongly endorse their financial support as a critical community resource. Similarly, targeted aircraft observations should be encouraged.

Developers and outside entities should work together more closely in verification studies.

Field work needs to be better utilized. After a first round of verification studies, next generation algorithms do not typically make use of older studies. Field work should focus more on verifying higher level products.


