
WFM 6311: Climate Change Risk Management

Akm Saiful Islam. WFM 6311: Climate Change Risk Management. Lecture 6: Approaches to Select GCM Data. Institute of Water and Flood Management (IWFM), Bangladesh University of Engineering and Technology (BUET). December 2009. Approaches for selecting a global climate model for an impact study.


Presentation Transcript


  1. Akm Saiful Islam. WFM 6311: Climate Change Risk Management. Lecture 6: Approaches to Select GCM Data. Institute of Water and Flood Management (IWFM), Bangladesh University of Engineering and Technology (BUET). December 2009

  2. Approaches for selecting a Global Climate Model for an Impact Study

  3. The IPCC has a guidance document of interest: IPCC-TGICA, 2007, “General Guidelines on the Use of Scenario Data for Climate Impact and Adaptation Assessment”, Version 2, June 2007. Prepared by T.R. Carter with contributions from other authors, for the Task Group on Data and Scenario Support for Impact and Climate Assessment (TGICA) of the IPCC. This PDF is provided on the CCCSN Training DVD

  4. From the Range of Projections… • The IPCC recommends* the use of more than simply ONE model or scenario projection (one should use an ‘ensemble’ approach) – we saw why earlier • The use of a limited number of models or scenarios provides no information about the uncertainty involved in climate modelling • Alternatives to an ‘ensemble approach’ might involve selecting model/scenario combinations which ‘bound’ the max/min of reasonable model projections (used in the IJC Lake Ontario–St. Lawrence Regulatory Study) * (IPCC-TGICA, 2007)

  5. Two Tests for the selection of a Model: TEST 1: How well does a model reproduce the historical climate? Commonly called ‘Model Validation’ TEST 2: How does the model compare with all other models for future projections?

  6. First test: Baseline (historical) climate. We can test how well a model has reproduced the historical baseline climate (model VALIDATION). A model should be able to accurately reproduce the past (baseline) climate as a criterion for further consideration. This requires reliable, long-term observed climate data from the location of interest, OR we could use GRIDDED global datasets at the same scale as the models. IMPORTANT: remember we are comparing a site-specific record to a grid-cell average, so an exact match is not to be expected.
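The validation idea above can be sketched in a few lines: compare a model's grid-cell baseline climatology against a station observation and record the bias. This is a hedged illustration only; the model names and all numeric values below are invented, not real CCCSN or archive output.

```python
# Sketch of Test 1 (model validation): model-minus-observed bias for a
# baseline annual mean. MODEL_A/B/C and the values are hypothetical.

def annual_bias(modelled, observed):
    """Model-minus-observed bias for an annual mean value."""
    return modelled - observed

observed_t = 7.2  # illustrative 1961-1990 station normal (deg C)
model_t = {"MODEL_A": 6.0, "MODEL_B": 7.0, "MODEL_C": 6.5}  # made-up values

for name, t in model_t.items():
    print(f"{name}: bias = {annual_bias(t, observed_t):+.1f} C")
```

A negative bias here means the model runs cold relative to the station, keeping in mind the grid-cell-versus-site caveat above.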

  7. Second test: Future projection. We can check how a model performs in comparison with many others in a future projection. Five criteria are outlined by the IPCC: (1) consistency with other model projections; (2) physical plausibility (is it realistic?); (3) applicability for use (correct variables? timescale?); (4) representativeness; (5) accessibility of data. A model should not be an outlier in the community of model results.

  8. Check maps – CGCM3 – temperature. [Maps compare OBS stations, NCEP GRIDDED data, and CGCM3T47 for the 1961-1990 mean ANNUAL TEMPERATURE.] Reasonable pattern, with the models slightly cold.

  9. Example: CGCM3 – Timeseries in the Historical Period The model is too cold, but the TREND is good

  10. Check maps – CGCM3 – precipitation. [Maps compare OBS stations, NCEP GRIDDED data, and CGCM3T47 for the 1961-1990 mean ANNUAL PRECIPITATION.] Pattern not quite right – units here are mm/day.

  11. Example: CGCM3 – Timeseries in the Historical Period. The model is too wet, but the TREND is reasonable.

  12. Test 1: Baseline Methodology: • Comparison of annual, seasonal, and monthly means over the same historical period • Use the variables of interest – most commonly, precipitation and temperature from the Archive • Keep in mind that we are comparing a single site location (meteorological station) against a gridded value • An improved method would be to also include other nearby stations with long records in the analysis • We then obtain from CCCSN the model baseline values for the same location using the SCATTERPLOT

  13. Test 1: (continued) • Compare the annual values and the distribution of temperature over the year • Models which best match the annual mean and the monthly distribution pattern can be identified • NOTE: it doesn’t matter which emission scenario we select since for the historical period, the models use the same baseline

  14. Test 1: Baseline Methodology… [Scatterplots of annual temperature and annual precipitation place each model relative to the observed means; the quadrants mark models that are too warm/too cold and too wet/too dry.]

  15. Test 1: Baseline Methodology… Looking at temperature and precipitation together • Again, use the SCATTERPLOT on CCCSN – simply select BOTH variables at the same time and all models, or combine the two initial results in a single spreadsheet [the plot marks where a ‘perfect’ model would sit] • Almost all models are too wet • Most models are too cold • Outliers can be identified

  16. Test 1: Baseline Methodology… Rank the models for the baseline period (ANNUAL): each model (A, B, C, D, E, F, …) receives a Temperature rank and a Precipitation rank, and the Total Score is the sum of that model’s two ranks. The lowest-scoring model is closest to the baseline.
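The rank-sum scoring described above can be sketched as follows. This is an illustrative implementation under stated assumptions: models are ranked per variable by the absolute size of their baseline error, and the two ranks are summed; the model letters and error values are invented.

```python
# Sketch of the Test 1 rank-sum score: rank models by |baseline error|
# for each variable, then sum the ranks. All inputs are hypothetical.

def rank_sum_scores(temp_error, precip_error):
    """Sum per-variable ranks; the lowest total is closest to baseline."""
    def ranks(errors):
        ordered = sorted(errors, key=lambda m: abs(errors[m]))
        return {m: i + 1 for i, m in enumerate(ordered)}
    rt, rp = ranks(temp_error), ranks(precip_error)
    return {m: rt[m] + rp[m] for m in temp_error}

# Illustrative model-minus-observed errors (deg C, mm/day)
t_err = {"A": -1.1, "B": 0.3, "C": -2.0}
p_err = {"A": 0.5, "B": 0.2, "C": 0.9}
print(rank_sum_scores(t_err, p_err))  # {'A': 4, 'B': 2, 'C': 6}
```

Here Model B has the lowest total score, so it would be retained as closest to the baseline; Model C, with the highest score, would be the first candidate for rejection.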

  17. Test 1: Baseline Methodology • The same analysis can be done on a monthly and seasonal basis – this can be very important • This method is best used to reject models (those with the largest scores) • We effectively remove from consideration those models with the lowest agreement (largest scores) • The moderating effect of lakes, local elevation effects, and lake-induced precipitation are all complicating factors

  18. Test 2: Future Projections • No complications like observed data! • We look at the range of model projections for the same location and see how they vary • Models with outlier projections (excessive anomalies – which are too large or too small) are best rejected • Finding the anomalies is a simple process using SCATTERPLOT on CCCSN

  19. Test 2: Future Projections The 1961-1990 or 1971-2000 period as baseline? Which projection period are we interested in? (2050s is a common period for planning purposes) Is an annual, seasonal or monthly projection needed? - depends on the study

  20. Annual temperature/precipitation change scatterplot for the Toronto grid cell: 2050s (SRES scenarios only). [The plot marks the median T and P for all models/scenarios and a 1 std. dev. range.]

  21. What do all the models and emission scenarios tell us for this grid cell (Toronto Pearson A)? Median annual temperature change in the 2050s: lower +1.8 °C, median +2.6 °C, upper +3.3 °C (observed 1961-1990 normal: 7.2 °C). Median annual precipitation change in the 2050s: lower +0.4%, median +5.0%, upper +9.7% (observed normal: 780.8 mm).

  22. TEST 2: Which models are closest to the median projection? Rank the models for the 2050s projections (ANNUAL): each model (A, B, C, D, E, F, …) receives a Temperature rank and a Precipitation rank, and the Total Score is the sum of that model’s two ranks. The lowest-scoring model is closest to the ALL-MODEL MEDIAN.
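The Test 2 ranking can be sketched the same way, except that models are now ranked by how far their projected anomaly sits from the all-model median. Again, the model letters and anomaly values below are invented for illustration.

```python
# Sketch of Test 2: rank models by distance from the all-model median
# anomaly, per variable, then sum the ranks. All values are hypothetical.
import statistics

def median_distance_ranks(anomalies):
    """Rank models by |anomaly - all-model median anomaly|."""
    med = statistics.median(anomalies.values())
    ordered = sorted(anomalies, key=lambda m: abs(anomalies[m] - med))
    return {m: i + 1 for i, m in enumerate(ordered)}

dT = {"A": 2.6, "B": 1.8, "C": 3.9}   # projected 2050s temperature change (C)
dP = {"A": 5.0, "B": 9.7, "C": -1.0}  # projected precipitation change (%)
rt, rp = median_distance_ranks(dT), median_distance_ranks(dP)
total = {m: rt[m] + rp[m] for m in dT}
print(total)  # lowest total = closest to the all-model median
```

A model like C above, far from the median in both variables, is the kind of outlier the slide suggests rejecting.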

  23. Is there a ‘best’ model for both tests? [Diagram: the resulting models from TEST 1 (baseline) and TEST 2 (projections) overlap; the best models from both tests are HADCM3, GISSAOM, and CGCM3T63.]
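Combining the two tests amounts to a set intersection: keep only models that scored well in both. A minimal sketch, where MODEL_X and MODEL_Y are hypothetical fillers and the three shared names echo the slide's example result:

```python
# Minimal sketch: the 'best' models pass BOTH tests.
# MODEL_X / MODEL_Y are hypothetical placeholders.
best_test1 = {"HADCM3", "GISSAOM", "CGCM3T63", "MODEL_X"}  # from Test 1
best_test2 = {"HADCM3", "GISSAOM", "CGCM3T63", "MODEL_Y"}  # from Test 2
best_both = sorted(best_test1 & best_test2)
print(best_both)  # ['CGCM3T63', 'GISSAOM', 'HADCM3']
```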

  24. The Caveats: • We have only considered ANNUAL values, not SEASONAL or MONTHLY, for the baseline (TEST 1) or the projections (TEST 2) • The seasonal and monthly options are available on the SCATTERPLOT selector • ‘Extreme variables’ have greater uncertainty than normals • Models can show good ANNUAL agreement with the baseline and good agreement with all model projections, but they can still have incorrect seasonal or monthly distributions

  25. Will Regional Climate Models (RCMs) help? • They offer higher spatial resolution (~50 x 50 km) versus GCMs at 200-300 km • The models are driven by an overlying model or gridded data source – so biases in those gridded datasets will also be carried into the RCM • The time requirements and the processing power available mean there are fewer emission scenarios available = fewer future pathways for consideration • Some investigations will always require further statistical downscaling

  26. Will RCMs help in TEST 1? Annual temperature (all models cold): CRCM3.7.1 6.1 °C, CRCM4.1.1 4.9 °C, CRCM4.2.2 6.1 °C. Annual precipitation: CRCM3.7.1 758.5 mm (too dry), CRCM4.1.1 542.8 mm (too dry), CRCM4.2.2 860.7 mm (too wet). [Scatterplot quadrants: too warm/too cold, too wet/too dry.]

  27. Will RCMs help in TEST 2? [Scatterplot places crcm3.7.1, crcm4.1.1, and crcm4.2.0 relative to the all-model median T and P and a 1 std. dev. range.]

  28. Running Scatterplots for all parameters

  29. CCCSN.CA website Select Scenarios - Visualization

  30. Select Scatterplots

  31. Get data • Input lat/long • Select AR4 • Select variable Tmean • Select model(s) validated for Tmean • Click Get Data

  32. Website Output Plus output table under chart

  33. Get data for all variables including climate extremes You can select an ensemble of models by using Ctrl-Enter

  34. Ensemble of CCCSN.CA Results for Ptotal at Windsor

  35. Climate Extremes available for some models

  36. Future consecutive dry days at Windsor, using output from 3 GCMs. All model results can be averaged to form an ensemble.
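The ensemble averaging mentioned on this slide can be sketched as below: align the same index (e.g. consecutive dry days) from several GCMs by future period and take the mean. The model names and values are illustrative, not actual Windsor projections.

```python
# Sketch of ensemble averaging across GCMs. GCM1/2/3 and the
# consecutive-dry-day values are hypothetical.

def ensemble_mean(series_by_model):
    """Average aligned time series (one list per model) period by period."""
    models = list(series_by_model.values())
    return [sum(vals) / len(vals) for vals in zip(*models)]

cdd = {
    "GCM1": [20.0, 24.0, 28.0],  # index value for three future periods
    "GCM2": [18.0, 22.0, 30.0],
    "GCM3": [22.0, 26.0, 32.0],
}
print(ensemble_mean(cdd))  # [20.0, 24.0, 30.0]
```

Averaging across models smooths out individual-model noise, which is the same motivation behind the ensemble approach recommended earlier in the lecture.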
