
Design and Validation of Observing System Simulation Experiments (OSSEs)


Presentation Transcript


  1. Design and Validation of Observing System Simulation Experiments (OSSEs) Ronald Errico Goddard Earth Sciences Technology and Research Center at Morgan State University and Global Modeling and Assimilation Office at NASA Goddard Space Flight Center

  2. Outline:
     • General methodology
     • Simulation of observations and their errors
     • Validation of an OSSE: assimilation metrics
     • Validation of an OSSE: forecast metrics
     • Summary

  3. Data Assimilation of Real Data vs. an OSSE
     [Schematic: a sequence of analyses along a time axis in each setting]
     • Data assimilation of real data: the real, evolving atmosphere, with imperfect observations. Truth unknown.
     • Observing System Simulation Experiment: a climate simulation, with simulated imperfect “observations.” Truth known.

  4. Applications of OSSEs
     1. Estimate the effects of proposed instruments (and their competing designs) on analysis skill by exploiting the simulated environment.
     2. Evaluate present and proposed techniques for data assimilation by exploiting the known truth.

  5. Requirements for an OSSE
     1. A simulation of “truth,” generally a free-running forecast produced by an NWP model (termed the “Nature Run”).
     2. A simulation of a complete set of observations, drawn from that truth.
     3. A simulation of instrument plus representativeness errors, added to the results of step 2.
     4. A data assimilation system to ingest the results of step 3.
     5. All these steps must be done well enough to produce believable and useful metrics from step 4.
     A minimal sketch of this pipeline is given below.
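The following sketch summarizes steps 1 through 3 as code. Everything in it (the stand-in nature-run state x_nr, the trivial observation operator H, the function simulate_observations, and the uncorrelated Gaussian error) is an illustrative assumption, not part of any real OSSE system:

```python
# Minimal sketch of the OSSE pipeline (steps 1-3 above). All names here
# (x_nr, H, simulate_observations) are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

def simulate_observations(x_nr, H, obs_error_std):
    """Steps 2-3: draw observations from the nature-run truth, then add a
    simulated instrument-plus-representativeness error (uncorrelated
    Gaussian here, purely for illustration)."""
    y_perfect = H(x_nr)                    # "perfect" obs drawn from truth
    return y_perfect + obs_error_std * rng.standard_normal(y_perfect.shape)

x_nr = rng.standard_normal(100)   # step 1: a stand-in nature-run state
H = lambda x: x[::10]             # a trivial stand-in observation operator
y = simulate_observations(x_nr, H, obs_error_std=0.5)

# Step 4 would ingest y into a data assimilation system; step 5 judges the
# resulting analyses and forecasts against the known truth x_nr.
print(y.round(2))
```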

  6. Validation of OSSEs As with any simulation, OSSE results apply to the assimilation of real data only to the degree that the OSSE validates well with regard to metrics relevant to the intended application. OSSE validity is first determined by carefully comparing a variety of statistics that can be computed in both the real-assimilation and OSSE contexts.

  7. Data assimilation is a fundamentally statistical problem Averaging over long periods is generally required to obtain statistical significance; only limited information about DAS behavior can be obtained from case studies. As assimilation systems improve and the number of observations used increases, the effect of any small subset of observations decreases, so even longer averaging periods are required to achieve statistical significance (see the sketch below).
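To make the averaging-period point concrete: daily forecast-score differences are serially correlated, so the effective sample size is smaller than the number of days. The AR(1)-based adjustment below is a common convention assumed here for illustration; the talk does not specify a particular test, and the data are synthetic:

```python
# Illustrative significance check for a month of daily score differences.
# The AR(1) effective-sample-size adjustment is a common convention, not
# something prescribed in the talk.
import numpy as np

def effective_sample_size(d):
    """n_eff = n (1 - r1) / (1 + r1), with r1 the lag-1 autocorrelation."""
    d = np.asarray(d, dtype=float) - np.mean(d)
    r1 = np.dot(d[:-1], d[1:]) / np.dot(d, d)
    return d.size * (1.0 - r1) / (1.0 + r1)

rng = np.random.default_rng(1)
d = np.zeros(31)                      # 31 daily paired score differences
for t in range(1, 31):                # impose AR(1) serial correlation
    d[t] = 0.6 * d[t - 1] + rng.standard_normal()
d += 0.1                              # a small true mean impact

n_eff = effective_sample_size(d)
stderr = d.std(ddof=1) / np.sqrt(n_eff)
print(f"mean impact {d.mean():+.3f} +/- {2 * stderr:.3f} (n_eff ~ {n_eff:.1f} of 31)")
```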

  8. Choice of a Nature Run
     1. A good simulation of nature in all important aspects.
     2. Ideally, individual realizations of the NR should be indistinguishable from corresponding realizations of nature (e.g., analyses) at the same time of year.
     3. Since a state-of-the-art OSSE will require a cycling DAS, the NR should have temporal consistency.
     4. For either 4DVAR or FGAT 3DVAR, NR datasets should have high frequency (i.e., < 6 hours).
     5. Since dynamic balance is an important aspect of the atmosphere affecting a DAS, the NR datasets should have realistic balances.
     6. For these and other reasons, using a state-of-the-art NWP model with a demonstrated good climatology to produce NR datasets is arguably the best choice.

  9. ECMWF Nature Run
     • Free-running “forecast”
     • Operational model from 2006
     • T511L91 reduced linear Gaussian grid (approx. 35 km)
     • 10 May 2005 through 31 May 2006
     • 3-hourly output
     • SST and sea-ice cover are the real analyses for that period
     GMAO Nature Run
     • A 2012 model
     • A 10 km global grid
     • Hourly output
     • Aerosols and trace gases included
     • Available in 2013

  10. Simulation of Observations
     1. Any observation that can be assimilated can be simulated!
     2. Differences between the H used to assimilate and the H used to simulate will be interpreted by the DAS as a representativeness error.
     3. Therefore, as more realism is modeled in the observation simulation compared to the assimilation, more representativeness error is introduced, including gross errors that must be identified by the DAS quality control.
     4. It may be necessary to compensate for deficiencies in the nature run (e.g., unrealistic cloud cover) when simulating observations.

  11. Dependency of results on errors Consider a Kalman filter applied to a linear model (Daley and Ménard, 1993, Mon. Wea. Rev.). A scalar sketch follows.
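A scalar version of this dependency can be written down directly. The sketch below iterates the standard scalar Riccati recursion for a linear model x_{k+1} = a x_k + noise and shows how the steady-state analysis error variance moves with the observation-error variance R. The parameter values are invented for illustration and are not from Daley and Ménard:

```python
# Scalar Kalman filter illustration: steady-state analysis error variance
# as a function of the observation-error variance R. Numbers are
# illustrative only.

def steady_state_analysis_var(a, q, r, n_iter=500):
    """Iterate the scalar Riccati recursion to convergence.
    Forecast:  p_f = a^2 p_a + q
    Analysis:  p_a = p_f r / (p_f + r)   (i.e., (1 - k) p_f, k = p_f / (p_f + r))
    """
    p_a = 1.0
    for _ in range(n_iter):
        p_f = a * a * p_a + q
        p_a = p_f * r / (p_f + r)
    return p_a

a, q = 0.9, 0.1                 # model dynamics and model-error variance
for r in (0.05, 0.2, 0.8):      # observation-error variance
    print(f"R = {r:4.2f} -> steady-state analysis variance = "
          f"{steady_state_analysis_var(a, q, r):.4f}")
```

The point for OSSE design: if the simulated error statistics differ from the real ones, the assimilation settles to a different error level, so results will not transfer to the real system.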

  12. Characteristics of Real Observation Errors
     1. Generally unknown
     2. Even their statistics are not well known
     3. Often biased
     4. Correlated (maybe even with background error)
     5. Include gross errors
     6. Generally non-Gaussian (a result of item 5 or of some basic physics, e.g., nonlinearity)

  13. Simulation of Observation Errors
     1. When simulating observations, there is no instrument and thus no instrument error. It therefore needs to be explicitly simulated and added, unless it is negligible.
     2. An error of representativeness is generally implicitly created by differences between the simulation and assimilation versions of H, but the real representativeness error is likely underestimated by this implicit component. The remainder also needs to be simulated and added.
     3. If real observation errors are correlated, the simulated ones must also be correlated if the OSSE is to verify (see the sketch below).
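One standard way to draw spatially correlated errors is through a Cholesky factor of a prescribed correlation matrix. In this sketch the Gaussian correlation shape, the length scale, and the locations are all illustrative assumptions, not the tuning used in the actual OSSE:

```python
# Sketch of drawing horizontally correlated observation errors via a
# Cholesky factor. Correlation shape and length scale are illustrative.
import numpy as np

def correlated_errors(locs, sigma, length_scale, rng):
    """Zero-mean Gaussian errors with std dev sigma and a Gaussian-shaped
    spatial correlation between observation locations."""
    d = locs[:, None] - locs[None, :]                # pairwise separations
    corr = np.exp(-0.5 * (d / length_scale) ** 2)    # correlation matrix
    L = np.linalg.cholesky(corr + 1e-8 * np.eye(locs.size))  # jitter for stability
    return sigma * (L @ rng.standard_normal(locs.size))

rng = np.random.default_rng(2)
locs = np.linspace(0.0, 1000.0, 50)   # observation locations (km, illustrative)
eps = correlated_errors(locs, sigma=1.0, length_scale=200.0, rng=rng)
print(eps[:5].round(3))               # nearby errors are similar by construction
```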

  14. Assimilation System
     • NCEP/GMAO GSI (3DVAR) scheme
     • Resolution: approx. ½ degree, 72 levels
     • Evaluation for 1-31 July 2005, with a two-week, accelerated spin-up in June
     • Observations include all “conventional” observations available in 2005 (except precipitation rates) and all radiances available from the MSU, AMSU-A/B, HIRS-2/3, and AIRS instruments
     • Tuning parameters for simulated observation errors are the error standard deviations, the fractions of error variance for correlated components, and the vertical and horizontal correlation length scales
     • Further details in Errico et al. and Privé et al., Q. J. R. Meteorol. Soc., 2012

  15. Locations of QC-accepted observations for AIRS channel 295 at 18 UTC 12 July
     [Figure panels: simulated vs. real]

  16. Standard deviations of QC-accepted y - H(xb) values (real vs. OSSE)
     [Figure panels: AIRS AQUA, AMSU-A NOAA-15, RAOB u-wind, RAOB q]
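The statistic behind this and the following comparison can be computed identically in the real and OSSE contexts. A hedged sketch, with synthetic stand-ins for the observation and background-equivalent arrays:

```python
# Innovation (observed-minus-background) statistics, the quantity compared
# between the real system and the OSSE above. Arrays here are synthetic
# stand-ins; in practice this is computed per instrument/channel/level
# from QC-accepted data only.
import numpy as np

def innovation_stats(y, hxb):
    """Mean and standard deviation of innovations y - H(x_b)."""
    d = np.asarray(y) - np.asarray(hxb)
    return d.mean(), d.std(ddof=1)

rng = np.random.default_rng(3)
truth = rng.standard_normal(10_000)
obs = truth + 0.8 * rng.standard_normal(10_000)   # stand-in observations
hxb = truth + 0.5 * rng.standard_normal(10_000)   # stand-in H(x_b)

mean_d, std_d = innovation_stats(obs, hxb)
print(f"innovation mean {mean_d:+.3f}, std {std_d:.3f}")
# Validation compares these numbers, real vs. OSSE, for each data type.
```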

  17. Horizontal correlations of y - H(xb)
     [Figure panels: AMSU-A NOAA-15 channel 6 (global); RAOB T, 700 hPa (NH extratropics); GOES-IR satellite winds, 300 hPa and 850 hPa (NH extratropics)]

  18. Square roots of zonal means of temporal variances of analysis increments
     [Figure panels: T OSSE, T real, u OSSE, u real]

  19. Time-mean analysis increments
     [Figure panels: T at 850 hPa and u at 500 hPa, OSSE vs. real]

  20. OSSE vs. Real Data: Forecasts
     [Figure: mean anomaly correlations]
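For reference, the anomaly correlation plotted here is the correlation between forecast and verifying-analysis departures from climatology. A minimal sketch; the weighting and the climatology source are common conventions assumed for illustration, not details given in the talk:

```python
# Anomaly correlation between a forecast and its verifying analysis, both
# expressed as departures from climatology.
import numpy as np

def anomaly_correlation(fcst, verif, clim, weights=None):
    fa, va = fcst - clim, verif - clim          # anomalies vs. climatology
    w = np.ones_like(fa) if weights is None else weights
    num = np.sum(w * fa * va)
    den = np.sqrt(np.sum(w * fa * fa) * np.sum(w * va * va))
    return num / den

rng = np.random.default_rng(4)
clim = np.zeros(1000)                           # stand-in climatology
verif = rng.standard_normal(1000)               # stand-in verifying analysis
fcst = verif + 0.5 * rng.standard_normal(1000)  # imperfect forecast
print(f"AC = {anomaly_correlation(fcst, verif, clim):.3f}")
```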

  21. U-wind RMS error: July
     [Figure. Solid lines: 24-hour RMS error vs. analysis; dashed lines: 120-hour forecast RMS error vs. analysis]

  22. July Adjoint: dry error energy norm
     [Figure]

  23. July Adjoint: dry error energy norm (continued)
     [Figure]

  24. Fractional reduction of zonal means of temporal variances of analysis errors compared with background errors
     [Figure panels: T and u]
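This diagnostic is only computable in the OSSE setting, because it requires the truth. A minimal sketch of the scalar version (the slide shows zonal-mean cross-sections of the same quantity); all arrays are synthetic stand-ins:

```python
# Fractional reduction of analysis-error variance relative to background-
# error variance, using the known nature-run truth x_t.
import numpy as np

def fractional_variance_reduction(x_a, x_b, x_t):
    """1 - var(x_a - x_t) / var(x_b - x_t); positive means the analysis
    is closer to truth than the background."""
    return 1.0 - np.var(x_a - x_t) / np.var(x_b - x_t)

rng = np.random.default_rng(5)
x_t = rng.standard_normal(5000)               # nature-run "truth"
x_b = x_t + 0.8 * rng.standard_normal(5000)   # stand-in background
x_a = x_t + 0.5 * rng.standard_normal(5000)   # stand-in analysis
print(f"fractional error-variance reduction = "
      f"{fractional_variance_reduction(x_a, x_b, x_t):.3f}")
```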

  25. Warnings: past problems with some OSSEs
     • Some OSSEs use a very reduced set of observations as a control
     • Some OSSEs have very limited validation
     • Some OSSEs are based on very limited “case studies”
     • Some OSSEs use unrealistic observation errors (e.g., no representativeness error)
     • Some OSSEs use a deficient NR

  26. Warnings: general criticisms of OSSEs
     1. In OSSEs, the NR and DAS models are generally too alike, thereby underestimating model error and yielding overly optimistic results.
     2. By the time specific proposed components of the observing system are deployed, the observing system in general will be different, as will the DAS techniques, so the specific OSSE results will not apply.
     3. OSSEs are just bad science!

  27. Response to Warnings
     • Design OSSEs thoughtfully
     • Consider the implications of all assumptions
     • Validate OSSEs carefully
     • Specify reasonable observation error statistics
     • Avoid conflicts of interest
     • Avoid over-selling results
     • Only attempt to answer appropriate questions
     • Be skeptical of others’ work
     • Be critical of your own work (this is science!)
