Downscaling GCM ensembles

PRECIS workshop, Exeter, UK, May 2014

Carol McSweeney

Aims of this session

Explain how different types of climate model ensembles are used to explore different types of uncertainty

Demonstrate how different GCM ensembles can be used with PRECIS to generate an ensemble of downscaled projections

Demonstrate sub-selection of ensemble members
Working with climate model ensembles

Table of Contents

  • What is an ‘ensemble’?
  • 3 types of ensemble
  • Generating ensembles of high-resolution simulations with PRECIS
  • Sub-selecting ensemble members for downscaling with PRECIS – an example from Vietnam
What is an ‘ensemble’?

Ensemble = “all the parts of a thing taken together, so that each part is considered only in relation to the whole.”

Or, (in climate-modeling context)…

“the results from several models, so that each single model is considered only in the context of the results of all of the models”

3 main types:

  • Initial conditions ensemble
  • Multi-model ensemble
  • Perturbed-physics ensemble
GCM ensembles that can be downscaled with PRECIS
  • Initial conditions ensemble
    • 3 - member HadAM3P ensemble (projections under SRESA2)
    • Useful for capturing climate change signal in regions/variables with high variability on inter-annual or decadal timescales e.g. precipitation extremes.
  • Perturbed-Physics ensemble (‘QUMP’)
    • 17-member ensemble of HadCM3 (HadCM3 Q0-16) projections under SRES A1B
  • Multi-model ensemble (CMIP5)
      • New for PRECIS2.0 release!
      • Currently able to downscale HadGEM2-ES projections under RCP4.5/8.5 and ERA-Interim.
      • Capability to downscale additional CMIP5 GCMs coming soon.

[Figure: winter and summer precipitation change (%) maps for three initial-condition runs (Runs 1–3); colour scale −80 to +80%]

1.) Initial Conditions ensembles
  • Internal or natural variability
    • Increase sample size
    • Which changes are ‘signal’, and which are ‘noise’?
    • i.e. which changes are reliable?
    • Internal variability greatest issue when we are looking for:
      • (a) Spatial or temporal details (e.g. extremes)
      • (b) Variables with strong multi-decadal variability
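As a rough illustration of the ‘signal vs noise’ idea above, the sketch below (Python, with synthetic data; the `signal_to_noise_mask` helper is hypothetical, not part of PRECIS) flags grid cells where the ensemble-mean change stands out from the spread across initial-condition members:

```python
import numpy as np

def signal_to_noise_mask(changes, threshold=2.0):
    """Flag grid cells where the ensemble-mean change ('signal') exceeds
    `threshold` times the inter-member standard deviation ('noise').

    changes : array of shape (n_members, n_cells) of projected changes (%).
    Returns a boolean array (n_cells,): True where the change is
    discernible from internal variability.
    """
    signal = changes.mean(axis=0)         # ensemble-mean change
    noise = changes.std(axis=0, ddof=1)   # spread across members
    return np.abs(signal) > threshold * noise

# Synthetic example: 3 initial-condition members, 5 grid cells.
rng = np.random.default_rng(0)
true_signal = np.array([20.0, 0.0, -15.0, 1.0, 30.0])  # % change per cell
members = true_signal + rng.normal(0, 3, size=(3, 5))  # + internal variability
reliable = signal_to_noise_mask(members)
```

With only three members the noise estimate is itself very uncertain, which is exactly why larger initial-condition ensembles help.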

[Figure: winter and summer precipitation change maps for the ensemble mean, top 5% and top 1% of daily rainfall; colour scale −10 to +10]

Initial conditions ensembles
  • Coloured areas: where the signal is discernible from noise, and changes are ‘reliable’.
  • White areas: where the signal is not clear, and changes are ‘unreliable’.
  • We can discern significant changes over much of Europe in winter and parts of Europe in summer, but the signal is still unclear in many areas, particularly in the extremes.

Kendon, E. J., D. P. Rowell, R. G. Jones, and E. Buonomo, 2008: Robustness of future changes in local precipitation extremes. J. Climate, doi: 10.1175/2008JCLI2082.1.

2.) Multi-model ensembles
  • CMIP3/5 ensembles: modelling centres submitted results from equivalent simulations to allow inter-model comparisons (results summarised in IPCC AR4/5 reports)
  • Models from different modelling centres around the world use different structural choices in model formulation → different future climate projections:
    • ‘Structural uncertainties’
  • ‘Disagreements’ between models can be large at the regional scale due to differences in spatial patterns of change e.g. regional mean precip change…
Rainfall change: IPCC AR4
  • A combination of pattern differences and some sign differences leads to a lack of consensus
3.) Perturbed-Physics Ensembles
  • An alternative route to exploring GCM uncertainty; the method underpins the UKCP09 methodology
  • Many processes in GCMs are ‘parameterised’
    • Parameterisations represent sub-gridscale processes
    • Values of parameters are unobservable and uncertain
    • Explore model uncertainty by varying the values of the parameters in one model

[Diagram: climate model uncertainties – structural uncertainty (IPCC CMIP3 multi-model ensemble) vs parameter uncertainty (QUMP perturbed physics ensemble)]

The QUMP 17-member perturbed physics ensemble
  • 17 members (HadCM3 Q0..Q16)
    • ‘parameter space’ = range/combinations of plausible values of parameters
      • Range of plausible values of parameters determined according to expert opinion
      • 17 models sample ‘parameter space’ systematically

(QUMP = ‘Quantifying Uncertainty in Model Projections’)
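A crude sketch of the systematic-sampling idea (Python; the parameter names, ranges, and the Latin-hypercube-style scheme here are illustrative assumptions, not the actual HadCM3/QUMP perturbation design):

```python
import numpy as np

# Hypothetical plausible ranges for two parameterisation constants
# (illustrative only -- not the real HadCM3/QUMP parameters).
param_ranges = {
    "entrainment_coeff": (0.6, 9.0),
    "ice_fall_speed":    (0.5, 2.0),
}

def sample_parameter_space(ranges, n_members, seed=0):
    """Latin-hypercube-style sampling: stratify each parameter's plausible
    range into n_members bins, draw one value per bin, then shuffle so the
    combinations across parameters are mixed."""
    rng = np.random.default_rng(seed)
    members = {}
    for name, (lo, hi) in ranges.items():
        edges = np.linspace(lo, hi, n_members + 1)
        draws = rng.uniform(edges[:-1], edges[1:])  # one draw per stratum
        rng.shuffle(draws)
        members[name] = draws
    return members

ensemble = sample_parameter_space(param_ranges, n_members=17)
```

The stratification guarantees the full plausible range of each parameter is covered, rather than clustering samples near the default values.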

Rainfall change: Hadley QUMP
  • Again, a significant range of different projected changes
  • Similar range and behaviour to the IPCC models?
Multi-model ensembles (MME) vs perturbed-physics ensembles (PPE)

PPE Strengths:

  • Can control the experimental design – Systematic sampling of modelling uncertainties
  • Wider range of physically plausible climate outcomes
  • Potentially include 1000s of members
    • Can design ensemble so that results can be interpreted probabilistically (e.g. UKCP09)

PPE Weaknesses:

  • Doesn’t sample structural uncertainty (i.e. differences between models from different centres)
Downscaling from large GCM ensembles
  • Significant uncertainties in the simulated broad-scale climate changes come from GCMs
  • Downscaling all members of a GCM ensemble (17 members of QUMP or 38 members of CMIP5) is not feasible!
  • How do we choose which members to use?

I can only downscale one GCM. Shall I choose a mid-range projection?

I might choose 4 members that span the widest possible range of mean precipitation change for the GCM grid-box that my city lies in.

I will choose the ‘best’ 5 models according to their validation. I could use the models with the lowest RMSE for temperature and precipitation.

Maybe I will choose ensemble members to span the range of global sensitivities?
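The ‘span the widest range’ strategy above can be sketched as follows (Python; the member names and change values are made up for illustration):

```python
def span_selection(changes, n_select=4):
    """Pick n_select members spanning the range of projected change:
    the driest, the wettest, and members at evenly spaced rank
    positions in between.

    changes : dict mapping member name -> projected precip change (%).
    """
    ranked = sorted(changes, key=changes.get)  # driest ... wettest
    # Evenly spaced ranks from first to last, e.g. 0, k, 2k, ..., len-1
    idx = [round(i * (len(ranked) - 1) / (n_select - 1))
           for i in range(n_select)]
    return [ranked[i] for i in idx]

# Illustrative (made-up) grid-box precip changes for some QUMP members:
changes = {"Q0": 3.0, "Q3": -6.0, "Q10": -12.0, "Q11": 14.0,
           "Q13": 8.0, "Q14": 1.0, "Q16": -2.0}
subset = span_selection(changes, n_select=4)  # driest and wettest included
```

Note this samples the range for a single grid box and variable only; the Vietnam example later in the session uses several criteria at once.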

Generating ‘useful’ projections with PRECIS
  • Our recommended approach is to sample from the available GCMs
    • Capture a representative range of future climate outcomes by looking at the GCM projections
    • Set up a feasible/manageable downscaling experiment
    • Provides a set of future projections which broadly represent the uncertainties in future climate projections, consistent with IPCC AR5 global projections
  • Strategic sampling is key!
    • An ‘ad hoc’ sample of GCMs tells us little about uncertainty!
Selecting a subset of the ‘QUMP’ or CMIP5 ensembles

Choose members that:

  • Sample the range of future outcomes well
  • Perform reasonably well when compared with observational data
  • Both can be analysed using the GCM monthly mean fields available from BADC
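The validation criterion could, for example, be checked by ranking members by RMSE against an observed climatology computed from those monthly mean fields. A minimal sketch with synthetic data (the helper names are hypothetical):

```python
import numpy as np

def rmse(model_clim, obs_clim):
    """Root-mean-square error of a model's monthly climatology against
    observations (both arrays of shape (12, n_cells))."""
    return float(np.sqrt(np.mean((model_clim - obs_clim) ** 2)))

def rank_by_rmse(model_clims, obs_clim):
    """Return (member names ordered best-to-worst, per-member scores)."""
    scores = {name: rmse(clim, obs_clim)
              for name, clim in model_clims.items()}
    return sorted(scores, key=scores.get), scores

# Synthetic illustration: observations plus constant per-member biases.
rng = np.random.default_rng(1)
obs = rng.normal(25.0, 2.0, size=(12, 10))   # e.g. temperature (degC)
models = {"Q0": obs + 0.5, "Q3": obs - 3.0, "Q13": obs + 1.5}
order, scores = rank_by_rmse(models, obs)
```

In practice RMSE is only one of many possible metrics, which is exactly the ‘infinite number of metrics’ caveat raised later in this session.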
Example of Model sub-selection: Vietnam

Criteria for selection

  • Validation
    • Selected models should represent the Asian summer monsoon (position, timing, magnitude) and the associated rainfall well, as this is the key process for the region
  • Future
    • Magnitude of response: greatest/least regional/local warming, greatest/least magnitude of change in precipitation
    • Characteristics of response
      • Direction of change in wet-season precipitation (increases and decreases)
      • Spatial patterns of precipitation response over south-east Asia
      • Response of the monsoon circulation
Validation: Monsoon onset
  • Monsoon flow has some systematic error – a little too high – but the timing (and position) of features is very good
  • All do a reasonably good job at simulating rainfall in the region
  • Those that best represent the characteristics of the monsoonal flow don’t necessarily also best represent the local rainfall…
  • No reason to eliminate any models on grounds of validation
Recommended QUMP members for this region
  • HadCM3Q0 – The standard model
  • HadCM3Q3 – A model with low sensitivity (smaller temperature changes)
  • HadCM3Q13 – A model with high sensitivity (larger temperature changes)
  • HadCM3Q10 – A model that gives the driest projections
  • HadCM3Q11 – A model that gives the wettest projections
  • Including Q10 and Q13 means that we also cover models which characterise the different spatial patterns of rainfall change, and different monsoon responses.
Should information about model performance be used in sub-selection?

  • At the regional scale, it can be easier to identify models that are clearly worse than others
  • We have to sub-select (downscaling all models is not an option)
  • Including a ‘worst’ model in a sub-set gives it a higher relative weight (e.g. 1/5 rather than 1/30)

Can we link aspects of poor performance directly to a model’s ability to simulate plausible future climates?

(See guidance by Knutti et al., 2010)

  • ‘Horses for courses’ – all models have something wrong!
  • Infinite number of potential metrics/thresholds
  • Circular reasoning – models calibrated with the same data that they are evaluated against (i.e. tuning leads to convergence)
  • Encourages ‘climate model beauty contests’
Process for requesting ‘QUMP’ boundary data from the Hadley Centre

1.) Email us to let us know that you are interested in using the QUMP ensemble with PRECIS

2.) Download the mean GCM fields from BADC. Use these fields to choose a model subset which validates well and spans a wide range of future outcomes

3.) Email us a 1-2 page summary of your analysis of the GCM fields, and your selected ensemble members

4.) If we agree that your selection is based on good criteria, we’ll then send your boundary data, and you can begin your runs

  • Using an ensemble of models gives us an indication of how large model uncertainties are
  • Different kinds of ensemble tell us about different sources of uncertainty
  • We can use ensemble(s), or sub-sets of ensembles, to try to capture as wide a range of plausible outcomes as we can for use in regional climate change assessments or impacts studies
  • Strategic sampling approaches allow us to design manageable downscaling experiments and provide meaningful indications of uncertainty

Questions and answers

Key references: McSweeney et al. (submitted to Climate Dynamics) Sub-selecting CMIP5 GCMs for downscaling over multiple regions. McSweeney et al. (2012) Selecting ensemble members to provide regional climate change information, Journal of Climate, 25(20), pp. 7100-7121.

PPE vs MME: Example: Changes in Sahelian rainfall (from Ben Booth, MOHC)

Kerry, Nature, 2008

In some cases, a PPE might not capture the full spread of the CMIP3 ensemble so we would need to consider other models too.

In other regions, CMIP3 may not capture the full spread of the PPE.

Interpreting Ensemble results

Ensemble mean

  • Gives the signal that is common to most models…’best guess’
  • Is this really the most likely/accurate…?
  • Averages out signal that is not common to all models

Can lose important information

  • Averages out variability

Ensemble spread (minimum, maximum)

  • But to what extent does the degree of model spread reflect the true uncertainty? Be wary of model consensus!
  • Conversely, very large model spread can imply no information

And sometimes…

Ensemble distribution - estimate probabilities of outcomes

  • But only if we have a very large ensemble, and some advanced statistical techniques!
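The summary statistics discussed above (ensemble mean, spread, percentiles) can be computed straightforwardly; a minimal sketch with made-up regional changes (note that percentile estimates from a small ensemble are only a crude stand-in for a proper probabilistic analysis):

```python
import numpy as np

def ensemble_summary(changes):
    """Summarise an ensemble of projected changes: mean, min/max spread,
    and 10th/90th percentiles."""
    arr = np.asarray(changes, dtype=float)
    return {
        "mean": float(arr.mean()),
        "min":  float(arr.min()),
        "max":  float(arr.max()),
        "p10":  float(np.percentile(arr, 10)),
        "p90":  float(np.percentile(arr, 90)),
    }

# Illustrative (made-up) regional precip changes (%) from 17 members:
changes = [-12, -8, -6, -4, -2, -1, 0, 1, 2, 3, 4, 5, 6, 8, 10, 12, 14]
summary = ensemble_summary(changes)
```

Here the ensemble mean (~1.9%) hides the fact that members disagree even on the sign of the change, which is precisely the ‘can lose important information’ point above.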