
Continuation of Lecture 1





  1. Continuation of Lecture 1

  2. Corrections/Clarifications from Day 1
  1. Lin-Rood scheme stability comes about because it is equivalent to a Semi-Lagrangian scheme not using implicit differencing. (Thanks to Ravi for pointing this out.)
  2. Definition: An anomaly is the deviation from some mean (frequently also called a climatology in meteorology/oceanography).
  3. What do we mean by seasonal forecasts? A forecast for a three-month average of the field of interest (e.g. precipitation). For example, a forecast for June-July-August (JJA) starting from May 1.
  4. The limit of predictive skill for seasonal forecasts (tropical Pacific SST) that I gave (order of six months) is based on the skill level that is realized currently. Here a usual metric for "skillful" is an anomaly correlation of 0.6. The Lorenz limit of two weeks for weather predictability comes from perturbation experiments, i.e. how small differences in the initial state grow in time. Currently realized forecast skill is (significantly) less than two weeks.
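The anomaly correlation mentioned in point 4 can be sketched as follows. This is a minimal NumPy illustration; the function name and arrays are mine, not from the lecture.

```python
import numpy as np

def anomaly_correlation(forecast, observed, climatology):
    """Anomaly correlation coefficient (ACC): the correlation of
    forecast and observed anomalies about a common climatology."""
    fa = np.asarray(forecast, dtype=float) - np.asarray(climatology, dtype=float)
    oa = np.asarray(observed, dtype=float) - np.asarray(climatology, dtype=float)
    return np.sum(fa * oa) / np.sqrt(np.sum(fa**2) * np.sum(oa**2))
```

A forecast is conventionally called "skillful" when the ACC exceeds about 0.6.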

  3. Corrections/Clarifications from Day 1 (continued)
  4. (Continued) You can do the same error-growth type of calculation for monthly means of tropical Pacific SST indices (Nino 3). This has been done with "intermediate" coupled models such as the Cane-Zebiak model. The "potential" predictability limit from those sorts of calculations is 36 months. Currently realized forecast skill is less than this.
  5. Based on my comment that the atmospheric initial-condition memory is gone in two weeks, what use is assimilation to the longer-term forecasting problem? Memory of the initial state in the ocean (large thermal inertia) is on the order of at least 6 months for fields of relevance to the seasonal forecasting problem (near-equatorial mass anomalies associated with equatorial Rossby and Kelvin waves). Therefore initial-state specification in the ocean is very important. The assimilation also serves another role in correcting the model bias, especially the structure of the thermocline.

  4. Corrections/Clarifications from Day 1 (continued)
  5b. For the decadal forecasting problem, different SST anomalies (modes of variability) are important. Initializing the ocean is important to capture these modes of variability. This is an area of active research. The next round of IPCC/CMIP will include an extensive set of decadal forecasts by many international groups.
  6. What do skill maps look like for current CGCM SST forecasts? Maps from CFS will be shown.
  7. If you have questions in the next few weeks or in the future, please send me e-mail: daved@iri.columbia.edu. Please put in the e-mail Subject: TIFR Summer School.

  5. Seasonal Forecast Skill of SST
  1. Forecast skill is dependent on the initial-condition (IC) month. In the central and eastern Pacific, SST forecast skill undergoes a quick decline in Northern Hemisphere spring, which is consequently known as the spring predictability barrier.
  2. Examine SST forecast skill from NCEP CFS for 2 IC months: January and August. Deterministic skill score (ACC).
  3. Define a signal-to-noise ratio: the signal is the standard deviation of the ensemble mean, and the noise is the deviation of the ensemble members around the ensemble mean.
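The signal-to-noise definition above can be written out as a short sketch. The array layout (members along the first axis) and the function name are my assumptions, not from the lecture.

```python
import numpy as np

def signal_to_noise(ensemble):
    """ensemble: array of shape (n_members, n_times).
    Signal = standard deviation (over time) of the ensemble mean;
    noise  = standard deviation of member deviations about that mean."""
    ens = np.asarray(ensemble, dtype=float)
    ens_mean = ens.mean(axis=0)          # ensemble mean at each time
    signal = ens_mean.std()              # variability of the ensemble mean
    noise = (ens - ens_mean).std()       # spread of members around the mean
    return signal / noise
```

A ratio well above 1 means the predictable (forced) part of the variability dominates the ensemble spread.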

  6. CFS SST January IC (Anomaly Correlation) (W. Wang)

  7. CFS SST January IC (Anomaly Correlation) (W. Wang)

  8. CFS Signal to Noise Ratio for Jul IC (W. Wang)

  9. Decadal Forecasting
  1. Two modes of variability that people hope to predict: the Pacific Decadal Oscillation (PDO) and the Atlantic Meridional Overturning Circulation (AMOC).
  2. What do we mean by decadal variability? Interannual variability around the decadal variability?
  3. What are the challenges? Prescribing the ocean initial state. Difficult due to lack of data.
  4. Decadal forecast data will be available for anyone to examine and come to your own conclusions (IPCC AR5).

  10. North Atlantic Temperature: Atlantic Meridional Overturning Circulation (AMOC)
  A warm North Atlantic is linked to:
  • Drought
  • More rain over the Sahel and western India
  • More strong hurricanes
  Two important aspects:
  • Decadal-multidecadal fluctuations
  • Long-term trend
  (Courtesy: Joe Tribbia, NCAR)

  11. PDV (PDO): Pacific Decadal Variability, the principal "mode" in the Pacific
  • PDO refers mainly to N. Pacific sea surface temperatures (SSTs)
  • Climate teleconnections similar to those of ENSO
  • Substantial interannual variability
  • Several processes at work
  [Figure: positive and negative phases; precipitation correlation, Meehl & Hu, J. Climate, 2006]
  (Source: http://jisao.washington.edu/pdo)

  12. Decadal Prediction: But there are challenges …
  • Initialization: many different global reanalysis products, but significant differences exist
  • Large inherent uncertainty in driving of AMO
  [Figure: Atlantic salinity anomalies (upper 300 m), tropics and mid-latitudes]
  (Courtesy: Joe Tribbia, NCAR)

  13. Decadal Prediction: But there are challenges …
  • Initialization: many different global reanalysis products, but significant differences exist
  • Ocean observing not yet global or comprehensive
  [Figure: tropical upper-ocean temperature anomalies (upper 300 m), Pacific and Indian]
  (Courtesy: Joe Tribbia, NCAR)

  14. Progress with imperatives (CLIVAR JSC31): Decadal variability and predictability
  [Figure: global number of temperature observations per month as a function of depth, 1980-2006]
  Some key questions:
  • To what extent is decadal variability in the oceans and atmosphere predictable?
  • What are the mechanisms of variability?
  • Does oceanic variability have atmospheric consequences?
  • Do we have the proper tools to realize the predictability? Need for (coupled) data assimilation systems to initialize models.
  • Are models "good enough" to make skillful predictions?
  • Adequacy of the climate observing system?

  15. Timescales of Variability in Observations
  [Figure: fraction of variance by timescale for temperature and precipitation; e.g. Climate Variability & Change in CO]

  16. What value is there in the interannual signal of a long-term forecast? Suppose we have these decadal forecasts, which will have interannual variability. The value of the interannual-variability part of the forecast would not be in forecasting for a specific year several years in the future. The value would be in characterizing the statistics of the variability over some multi-decadal period.
  1. What is the probability of JFM rain that exceeds some threshold?
  2. What is the probability of increases in extreme events of heat, cold, rain?
  This type of information would be useful for infrastructure planning. Will we be able to do this? No one knows… but this is the type of information governments are now asking scientists to produce.

  17. Coordinated Decadal Prediction for AR5
  Basic model runs:
  1.1) 10-year integrations with initial dates towards the end of 1960, 1965, 1970, 1975, 1980, 1985, 1990, 1995, 2000 and 2005 (see below).
  - Ensemble size of 3, optionally increased to O(10).
  - Ocean initial conditions should be in some way representative of the observed anomalies or full fields for the start date.
  - Land, sea-ice and atmosphere initial conditions are left to the discretion of each group.
  1.2) Extend integrations with initial dates near the end of 1960, 1980 and 2005 to 30 yrs.
  - Each start date to use a 3-member ensemble, optionally increased to O(10).
  - Ocean initial conditions represent the observed anomalies or full fields.

  18. Current and Historical State of the Ocean Observing System
  1. SST: First satellite-based products (global coverage) starting in 1982, so only about 30 years of data. Clever people have constructed EOF (SVD) reconstruction techniques to go back earlier in time, when there were only ship data. How well do the two types of products compare for the common era?
  2. Sub-surface:
  - TAO/TRITON array in the Pacific (1990-present (sometimes)*)
  - RAMA: Indian Ocean
  - PIRATA: Atlantic
  - ARGO: everywhere (starting around 2000 and filling in till now)
  * Why sometimes? (Wear and tear.) Buoys in the "cold tongue" are good surfaces for phytoplankton. Small fish eat the phytoplankton. Big fish eat the small fish. Fishermen come to catch the big fish and do all sorts of interesting things to the buoys, none of which are good.

  19. Consistency of Observed Data Sets

  20. O. Alves

  21. Why is the TAO/TRITON Array Arranged That Way?
  1. Buoys are expensive, so we want a minimal set that can do the job.
  2. The delayed-oscillator theory and the importance of equatorial waves were used to establish the need for the array.
  3. Ocean currents are meridionally confined.

  22. Equatorial Pacific Temperature Anomaly (TAO vs. GODAS; TAO climatology used)
  • Note the differences between GODAS and TAO temperature are as large as 2-3 °C in the eastern equatorial Pacific near the thermocline since mid-January 2010.
  • The large departures from observations might be related to the failure of the three easternmost equatorial buoys (http://tao.noaa.gov).

  23. TAO/TRITON Observing Status in July
  • TAO moorings had massive failures at 95W and 110W near the equator. (http://tao.noaa.gov/tao/status/)

  24. Equatorial Pacific Temperature (GODAS-TAO, GODAS-Coriolis, TAO temperature anomalies)
  • Equatorial temperature decreased at the surface and near the thermocline, probably forced by easterly wind anomalies in the central-eastern Pacific (slide 14).
  • Temperature differences between GODAS and TAO, and between GODAS and Coriolis, were above 1 °C near the thermocline east of 135W.
  • Those positive biases were consistent with warm biases in the control simulation, in which observations were not included.
  • Therefore, TAO mooring data at 95W and 110W played critical roles in constraining model biases.
  (http://tao.noaa.gov/tao/status/, http://www.coriolis.eu.org/cdc)

  25. The Future is Much Brighter for Ocean Observations
  ARGO array: automatic buoys that dive to 1000 meters and then return to the surface. Periodically at the surface they transmit their data to satellite. Nearly global coverage of the world's oceans. They measure temperature and salinity. Multi-national: many countries bought buoys.

  26. A. Weigel

  27. Indian Ocean Temperature Salinity Pre-Argo Argo O. Alves The Centre for Australian Weather and Climate Research A partnership between CSIRO and the Bureau of Meteorology

  28. Example of ODA Estimates of the Observed State
  In data-sparse regions (everywhere outside the tropical Pacific) and for fields that are not observed, solutions from different ODA products can show substantial spread. This makes their use as a verification product for models more ambiguous.

  29. Different Realities Simulated by ODA Systems
  Comparison of the 20-year climatology of January surface zonal current from 3 state-of-the-art ODA systems: rather large differences, especially near the equator.

  30. Coupled Model Bias (Systematic Error)
  1. SST errors are frequently the same order of magnitude as the seasonal variability (standard deviation).
  2. SST errors are similar across models.
  3. Equatorial SST variability has the wrong spatial structure and amplitude.
  4. Precipitation structure becomes distorted and too symmetric about the equator: the "double ITCZ syndrome".

  31. Use of Multi-Model Ensembles (MME) in Seasonal Forecasting
  • An MME consists of different forecast systems, each with its own ensemble members.
  • It has been found that combining forecasts from more than one system produces more skillful forecasts.
  • It has also been shown (in fewer comparisons) that even when the best single model uses the same number of ensemble members as the whole MME, the MME still has higher skill scores.
  • Most MME systems use equal weights, i.e. no difference in weights for good versus bad models. The reason is that, with 30 years of data and cross-validated weights, it is hard to beat equal weights.
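The equal-weight combination described above amounts to a simple average over models. A minimal sketch follows; the array layout (models along the first axis, tercile categories along the second) is my assumption.

```python
import numpy as np

def equal_weight_mme(prob_forecasts):
    """Combine tercile probability forecasts from several systems
    with equal weights: a plain average over the model axis.
    prob_forecasts: array of shape (n_models, n_categories)."""
    return np.asarray(prob_forecasts, dtype=float).mean(axis=0)

# Two models forecasting (below, near, above normal) terciles:
# equal_weight_mme([[0.5, 0.3, 0.2], [0.3, 0.3, 0.4]]) -> [0.4, 0.3, 0.3]
```

Skill-based (unequal) weights would replace `mean` with a weighted average, but as noted above, with only ~30 years of verification data such weights rarely beat equal weighting once cross-validated.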

  32. RPSS: temp 1-tier 2-tier

  33. RPSS: pcp 1-tier 2-tier

  34. MME SST is Generally Better Than Any Model

  35. Intercomparison of GCM precipitation seasonal forecast skill: anomaly correlation, June-Sept seasonal total from May 1 (1982-08).
  Models: ECHAM4.5-GMLcfsSST, ECHAM4.5-EC3_SST, ECHAM4.5-MOM3 (DC2), ECHAM4.5-MOM3 (AC1), NCEP CFS, ECHAM4.5-caSST, JAMSTEC SINTEX-F

  36. Seasonal Forecasting: the Marginal Skill Problem
  Show skill scores for actual forecasts over the last 11 years from IRI: a 2-tiered (uncoupled) system, on the order of 7 models, most with about 10 ensemble members. Forecasts are terciles. Skill metrics used:
  • Ranked Probability Skill Score (RPSS)
  • Likelihood score
  • Generalized Relative Operating Characteristics (ROC)
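The RPSS listed above compares the ranked probability score of the forecasts to that of a climatological forecast (1/3 probability per tercile). A minimal sketch, with function names of my choosing:

```python
import numpy as np

def rps(prob, obs_category, n_cat=3):
    """Ranked probability score of one forecast: sum of squared
    differences between cumulative forecast and observed probabilities.
    obs_category is the index (0..n_cat-1) of the verifying tercile."""
    p_cum = np.cumsum(prob)
    o_cum = np.cumsum(np.eye(n_cat)[obs_category])
    return float(np.sum((p_cum - o_cum) ** 2))

def rpss(prob_forecasts, obs_categories, n_cat=3):
    """RPSS = 1 - RPS_forecast / RPS_climatology, where climatology
    assigns 1/n_cat to each category. Positive values beat climatology."""
    clim = np.full(n_cat, 1.0 / n_cat)
    rps_f = np.mean([rps(p, o, n_cat) for p, o in zip(prob_forecasts, obs_categories)])
    rps_c = np.mean([rps(clim, o, n_cat) for o in obs_categories])
    return 1.0 - rps_f / rps_c
```

A perfect forecast gives RPSS = 1, a climatological forecast gives 0, and "marginal skill" shows up as values only slightly above 0.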

  37. Verification measure comparison: All-season temperature, 0.5-month lead time

  38. Verification measure comparison: All-season precipitation, 0.5-month lead time

  39. Use of a Forecast System Including ODA to Evaluate the Importance of Observation Networks
  Observing System Simulation Experiments (OSSE): run the ODA with the full data stream and then in cases removing select data types. Run forecasts from each of these cases and examine the differences in the forecasts, especially skill.

  40. Impact of Ocean Observations O. Alves
