NACP Site Synthesis: Carbon Flux Model Comparison

This synthesis compares measurement-based and model-based estimates of carbon fluxes to assess their consistency and identify factors affecting model performance. It draws on 58 flux tower sites and 29 models, with observed fluxes, uncertainty estimates, and ancillary data. The results reveal large variation in model performance and highlight priorities for future work.



Presentation Transcript


  1. The North American Carbon Program Site-level Interim Synthesis: Model-Data Comparison (NACP Site Synthesis)
  Daniel Ricciuto, Peter Thornton, Kevin Schaefer, Kenneth Davis
  Flux Tower PIs, Modeling Teams, and the NACP Site Synthesis Team

  2. Site Synthesis Objectives
  Activity initiated in 2008 by NACP to answer: are the various measurement and modeling estimates of carbon fluxes consistent with each other, and if not, why?
  • Quantify model and observation uncertainty
  - 58 flux tower sites; 29 models
  - Gap-filled observed weather
  - Observed fluxes, uncertainty, ancillary data
  • Link model performance to model structure
  - Which model characteristics are associated with the "best" models?
  - How does this performance vary among sites?

  3. Flux Tower Sites
  • AmeriFlux sites: over 35 sites; data provided by CDIAC in standardized "Level 2" format
  • Canadian sites: over 15 sites; data provided by the La Thuile synthesis activity and FLUXNET Canada
  • Site selection based on: representativeness of biomes, length of record, quality of data (gap fraction), and ancillary data availability
  • Meteorological drivers and flux observations gap-filled by the NACP synthesis team (sketched below)
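
  As an aside on the gap-filling step: one common, simple approach for short gaps in meteorological drivers is mean-diurnal-variation (MDV) filling, which replaces a missing value with the mean of the same time-of-day slot on surrounding days. The Python sketch below illustrates the idea only; the function name, window length, and half-hourly time step are assumptions, not the synthesis team's actual procedure.

    import numpy as np

    def mdv_fill(values, halfwidth_days=7, steps_per_day=48):
        """Fill gaps (NaN) with the mean of the same time-of-day slot
        on the +/- halfwidth_days surrounding days (mean diurnal variation)."""
        filled = values.copy()
        n = len(values)
        for i in np.where(np.isnan(values))[0]:
            # Indices of the same half-hour slot on neighboring days
            idx = i + steps_per_day * np.arange(-halfwidth_days, halfwidth_days + 1)
            idx = idx[(idx >= 0) & (idx < n)]
            if np.any(~np.isnan(values[idx])):
                filled[i] = np.nanmean(values[idx])
        return filled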

  4. Models
  Results submitted from 22 models to date
  On average, 10 simulations per site
  Total of over 1000 simulated site-years

  5. Analysis Projects

  6. Selected results: Observed flux uncertainty (Barr et al.)
  • NEE: random, U* filtering, and gap-filling uncertainty
  • GPP & Re: random, U* filtering, gap-filling, and partitioning uncertainty
  [Figure panels: random uncertainty; U* threshold uncertainty]
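
  One widely used way to estimate the random component of flux uncertainty is to difference pairs of observations made under similar conditions, e.g. the same time of day on successive days, following the paired-observation idea of Hollinger and Richardson. The sketch below is a deliberately simplified illustration, not the Barr et al. algorithm; a real implementation would also require similar radiation, temperature, and wind between paired observations.

    import numpy as np

    def paired_random_uncertainty(nee, steps_per_day=48):
        """Estimate random NEE uncertainty from pairs of observations at
        the same time of day on successive days (assumed similar conditions)."""
        diffs = nee[:-steps_per_day] - nee[steps_per_day:]
        diffs = diffs[~np.isnan(diffs)]
        # Each difference contains two independent random errors, so the
        # per-observation error is the spread of differences over sqrt(2).
        return np.std(diffs) / np.sqrt(2)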

  7. Selected results: Overall model performance (Schwalm et al.)
  Based on monthly model-data differences
  Large spread among models and sites
  [Figure: Taylor skill, normalized mean absolute error, and chi-squared relative to a perfect model]
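
  For reference, the three performance metrics named on this slide can be computed from monthly model and observed fluxes roughly as below. The Taylor skill follows Taylor (2001); the exact normalizations used by Schwalm et al. may differ, so treat this as a plausible reading of the slide, not their exact formulation.

    import numpy as np

    def taylor_skill(model, obs):
        """Taylor (2001) skill score: 1 for a perfect model.
        R0 is the maximum attainable correlation (1 assumed here)."""
        r = np.corrcoef(model, obs)[0, 1]
        sigma_hat = np.std(model) / np.std(obs)  # normalized std deviation
        r0 = 1.0
        return 4 * (1 + r) / ((sigma_hat + 1 / sigma_hat) ** 2 * (1 + r0))

    def nmae(model, obs):
        """Normalized mean absolute error: mean |model - obs| scaled by
        the mean absolute observed flux."""
        return np.mean(np.abs(model - obs)) / np.mean(np.abs(obs))

    def chi_squared(model, obs, obs_uncertainty):
        """Mean squared model-data mismatch normalized by observation
        uncertainty; values near 1 mean errors are within uncertainty."""
        return np.mean(((model - obs) / obs_uncertainty) ** 2)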

  8. Taylor Skill by Model Characteristics (Schwalm et al. [2010])

  9. Spectral NEE Error (Dietze et al.)
  Largest errors associated with the diurnal and annual cycles
  Large variation in performance at synoptic scales
  Noise level based on NEE observation uncertainty
  [Figure: error spectrum with annual and diurnal peaks labeled]
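
  A minimal way to see where model-data error concentrates by time scale is to look at the power spectrum of the NEE residual; peaks near 24-hour and annual periods correspond to the diurnal and annual errors reported here. Dietze et al. used a more sophisticated spectral decomposition, so the FFT sketch below is only illustrative.

    import numpy as np

    def residual_spectrum(model, obs, dt_hours=0.5):
        """Power spectrum of the model-data NEE residual.
        Peaks near periods of 24 h and 8760 h indicate diurnal and annual error."""
        resid = model - obs
        resid = resid - np.mean(resid)
        power = np.abs(np.fft.rfft(resid)) ** 2
        freqs = np.fft.rfftfreq(len(resid), d=dt_hours)  # cycles per hour
        periods_h = np.full_like(freqs, np.inf)
        periods_h[1:] = 1.0 / freqs[1:]                  # skip zero frequency
        return periods_h, power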

  10. Phenology (Richardson et al.)
  Harvard Forest: modeled leafout too early, senescence too late
  Errors of 25-50 days based on NEE
  Errors in GPP/NEE correlated with LAI in spring, but not autumn
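
  One simple way to quantify leafout timing from NEE, for both model output and observations, is to find the spring day when smoothed daily NEE first switches to sustained net uptake; the difference between modeled and observed onset dates gives a phenology error in days. The window and persistence criterion below are illustrative assumptions, not Richardson et al.'s method.

    import numpy as np

    def uptake_onset_doy(daily_nee, window=7, run_days=5):
        """Day of year when smoothed daily NEE first turns negative
        (net uptake) and stays negative for run_days."""
        kernel = np.ones(window) / window
        smooth = np.convolve(daily_nee, kernel, mode="same")
        for doy in range(len(smooth) - run_days):
            if np.all(smooth[doy:doy + run_days] < 0):
                return doy + 1  # 1-based day of year
        return None

    # Leafout bias in days: uptake_onset_doy(model) - uptake_onset_doy(obs)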

  11. Future work
  • Objectives for new simulations
  - Non-steady-state: previous simulations assumed steady state, which is not consistent with observed fluxes; incorporate known information about disturbance history
  - Under-analyzed biomes, e.g. wetland, tundra
  - Model sensitivity analyses: we have a good idea of inter-model uncertainty, but what about intra-model uncertainty? What are the key parameters?
  • Recruit more modeling teams
  - Invite wetland modeling teams
  - Expand the number of IPCC GCMs
  • Coordinate with other syntheses
  - LBA DMIP
  - NACP regional interim synthesis, MsTMIP
  • Make our database more visible and user-friendly
  - 29 potential analysis teams making use of the interim synthesis dataset
  - Long-term, dynamic dataset
  - Coordinate with CDIAC, La Thuile, ESG, and other activities

  12. Summary
  • Highly collaborative effort, made possible by:
  - Efforts (largely unfunded) of model and tower investigators
  - Bringing together the data, model, and observation communities
  - A productive series of workshops discussing protocol and analysis
  • Standardized inputs and flux observations
  - Coordination by the NACP team, CDIAC, and FLUXNET to determine and collect necessary ancillary data for models where not already available
  • Valuable dataset for model developers
  - First formal estimates of observation uncertainty in a standard dataset
  - Testbed for regional/global models to validate against a large observation network
  - Opportunity for model and observation PIs to learn from each other

  13. Additional Slides

  14. Missing Affiliations
  [Lists of missing model affiliations and missing site affiliations]

  15. Lessons Learned
  • Baseline parameter vs. structure: standard vs. CADM parameter runs
  • Better way to process submission files
  • Better initial-condition (IC) criteria and data
  • Do we need so many sites? Focus on what we do not have; the missing sites are not random: which are missing?
  • Non-steady-state (NSS) vs. steady-state (SS) runs
  • Coordinate model needs with site data collections
  - Better detail on site info and ancillary data (tree bands, respiration chambers)
  - Mike Dietze: leaf-level photosynthesis
  • Need support for background/CADM data
  - Weeks of effort per CADM file
  - Central lab model, e.g. for leaf N
  • Encourage a repository for data, especially ancillary data

  16. Lessons Learned
  • Chance to improve models (not tuning; use CADM)
  • Clarify protocol: not "out of the box"
  • Need better phenology observations

  17. New Sites
  • Bondville (not much ancillary data)
  • Permafrost: Daring Lake, Toolik Lake, other Canadian sites; 8-Mile Lake (Schuur)
  • Chronosequence sites (priority 3, UCI)
  • Augment under-represented biomes: grassland, savanna, shrubland, wetlands

  18. Next Round
  • Objectives
  - Non-steady-state runs
  - Under-analyzed biomes
  - Sensitivity analyses: survey existing analyses; one-at-a-time (OAT) parameter survey at a few sites (see the sketch below)
  • Recruit modeling teams
  - Invite wetland modeling teams
  - IPCC GCMs
  - Coordinate with LBA DMIP
  • LULC input to models (Peter T.)
  • Weather (Dan R.)
  • Support
  - Money to model teams; proposal to CCIWG
  - Postdoc to coordinate
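
  For the OAT item above: a one-at-a-time parameter survey perturbs each parameter in turn while holding the others at their baseline values, giving a first-order sensitivity of the output to each parameter. The sketch below is generic; the parameter names and toy model are hypothetical, not from any submitted model.

    import numpy as np

    def oat_sensitivity(run_model, baseline, deltas):
        """One-at-a-time sensitivity: perturb each parameter by +/- delta
        around the baseline and record the change in a scalar model output
        (e.g., annual NEE). run_model maps a parameter dict to that output."""
        sens = {}
        for name, delta in deltas.items():
            hi = dict(baseline, **{name: baseline[name] + delta})
            lo = dict(baseline, **{name: baseline[name] - delta})
            sens[name] = (run_model(hi) - run_model(lo)) / (2 * delta)
        return sens

    # Hypothetical example with made-up parameter names:
    toy = lambda p: -p["vcmax"] * 2.0 + p["q10"] * 5.0
    print(oat_sensitivity(toy, {"vcmax": 60.0, "q10": 2.0},
                          {"vcmax": 6.0, "q10": 0.2}))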

  19. Improving Infrastructure
  • Model submission tool (alma_var)
  • Standard model processing (Dan Ricciuto)
  • Tool to process Barr et al. uncertainty files
  • Manpower (Barbara Jackson)
  • Consistency across products
  • Update wiki and FTP

  20. Inter-annual Variability (Raczka et al.)
  [Figure: annual total NEE at US-Ha1]

  21. NEE Seasonal Cycle (Schwalm et al.)
  Forest sites simulated better than non-forest sites
  Agricultural models do best at agricultural sites
  Mean (P) and optimized (N) models do well
  [Taylor plot: all sites]

  22. NEE Error by Time Scale (Dietze et al.)

  23. GPP, All Sites (Schaefer et al.)
  Model mean is best
  [Figure annotations: optimized models; top 3 models for NEE; unit problems?]
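
  On the "unit problems?" note: flux submissions are easy to get wrong by constant factors, and a standard sanity check is the conversion between instantaneous and integrated carbon flux units. The worked example below is generic arithmetic, not a result from the synthesis.

    # Convert a CO2 flux from umol CO2 m-2 s-1 to g C m-2 day-1.
    # 1 umol of CO2 carries 12.011 ug of carbon (molar mass of C).
    UMOL_CO2_TO_GC = 12.011e-6     # g C per umol CO2
    SECONDS_PER_DAY = 86400

    def umol_s_to_gc_day(flux_umol_m2_s):
        return flux_umol_m2_s * UMOL_CO2_TO_GC * SECONDS_PER_DAY

    # Example: a GPP of 10 umol m-2 s-1 sustained for a full day
    print(umol_s_to_gc_day(10.0))  # ~10.38 g C m-2 day-1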

  24. GPP Bias and Phenology
  [Figure: GPP bias (µmol m-2 s-1) at CA-Ca1 and US-Ne3]

  25. What does all this mean?
  • Model performance varies with model structure
  • Peak NEE error at 1-day and 1-year periods
  • Bias and phenology dominate GPP error
  • GPP error is a large source of NEE error
  • Must link model structure with performance

  26. Flux Tower Sites

  27. Disturbance Uncertainty
  ORCHIDEE at the 1850 burn site, Manitoba

  28. NEE Seasonal Cycle
  [Panels: CA-Ca1, US-UMB, CA-Mer (best, typical, worst)]

  29. GPP Seasonal Cycle
  [Panels: CA-Ca1, CA-Mer, US-Ne3 (best, typical, worst)]

  30. NEE Diurnal Cycle
  [Panels: CA-Ca1, CA-Obs, US-Ha1 (best, typical, worst)]

  31. GPP Diurnal Cycle
  [Panels: CA-Ca1, CA-Obs, CA-Oas (best, typical, worst)]

  32. Uncertainty at Diurnal Time Scale
  Mead rain-fed corn-soybean rotation site (Nebraska)
  [Panels: soybean year; corn year]

  33. Observed Flux Uncertainty
  (Based on Richardson et al., 2006, Agric. For. Meteorol. 136:1-18)
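
  Richardson et al. (2006) found that random flux measurement error is better described by a double-exponential (Laplace) distribution than a Gaussian, with a spread that grows roughly linearly with flux magnitude. The sketch below encodes that structure with placeholder coefficients; the published fits vary by site and by flux sign.

    import numpy as np

    def random_flux_error_sd(flux, sigma0=1.0, slope=0.1):
        """Random flux error standard deviation as a linear function of
        flux magnitude (placeholder coefficients, not the published fits)."""
        return sigma0 + slope * np.abs(flux)

    def sample_laplace_error(flux, rng=np.random.default_rng(0)):
        """Draw a random error from a Laplace (double-exponential)
        distribution; for a Laplace, sd = sqrt(2) * scale parameter b."""
        b = random_flux_error_sd(flux) / np.sqrt(2)
        return rng.laplace(loc=0.0, scale=b)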
